Simultaneous Recognition of Words and Prosody in the Boston University Radio Speech Corpus

Mark Hasegawa-Johnson, Ken Chen, Jennifer Cole, Sarah Borys, Sung-Suk Kim, Aaron Cohen, Tong Zhang, Jeung-Yoon Choi, Heejin Kim, Taejin Yoon, and Sandra Chavarria

Beckman Institute, University of Illinois at Urbana-Champaign, USA

Abstract

This paper describes automatic speech recognition systems that satisfy two technological objectives. First, we seek to improve the automatic labeling of prosody, in order to aid future research in automatic speech understanding. Second, we seek to apply statistical speech recognition models of prosody in order to reduce the word error rate of an automatic speech recognizer. The systems described in this paper are variants of a core dynamic Bayesian network model, in which the key hidden variables are the word, the prosodic tag sequence, and the prosody-dependent allophones. Statistical models of the interaction among words and prosodic tags are trained using the Boston University Radio Speech Corpus, a database annotated using the tones and break indices (ToBI) prosodic annotation system. This paper presents both theoretical and empirical results in support of the conclusion that a prosody-dependent speech recognizer, one that simultaneously computes the most probable word labels and prosodic tags, can provide lower word error rates.

Preprint submitted to Elsevier Science, 16 December 2004
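One plausible formalization of the joint decoding sketched in the abstract (a sketch under standard noisy-channel assumptions, not necessarily the exact factorization used in the paper) is a maximization over both the word sequence $W$ and the prosodic tag sequence $P$ given the acoustic observations $O$:

$$(\hat{W}, \hat{P}) \;=\; \arg\max_{W,\,P}\; p(O \mid W, P)\; p(P \mid W)\; p(W)$$

Here $p(O \mid W, P)$ is an acoustic model over prosody-dependent allophones, $p(P \mid W)$ models the interaction between prosodic tags and words, and $p(W)$ is the language model. A conventional prosody-independent recognizer corresponds to collapsing $P$, i.e., computing $\hat{W} = \arg\max_{W} p(O \mid W)\, p(W)$.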