A CASE OF MULTIMODAL APROSODIA: IMPAIRED AUDITORY AND VISUAL SPEECH PROSODY PERCEPTION IN A PATIENT WITH RIGHT HEMISPHERE DAMAGE

Karen Nicholson 1,2, Shari Baum 3, Lola Cuddy 1 and Kevin Munhall 1,2

Departments of 1 Psychology and 2 Otolaryngology, Queen’s University, Kingston, Ontario, Canada; 3 School of Communication Sciences & Disorders, McGill University, Montreal, Quebec, Canada

ABSTRACT

A single-case study was carried out on a patient (KB) who presented with “aprosodia” following a right hemisphere stroke, to explore the cross-modal integration of auditory and visual cues in prosodic speech perception. KB was tested on two prosodic speech perception tasks: sentence intonation categorization (i.e., statement or question) and emphatic stress categorization (i.e., whether the first or second noun was stressed). In addition, he was tested on two segmental speech perception tasks: a McGurk task and speech-in-noise. In all tasks, there were three presentation conditions: audio-only, visual-only, and audiovisual. Results showed that KB performed at about chance on both prosody perception tasks in all three presentation conditions. In contrast, he performed near ceiling in the visual-only and audiovisual conditions on both segmental speech perception tasks. His performance on the speech-in-noise task showed that he was able to use visual information to compensate for impoverished auditory information in segmental speech perception, and his results on the McGurk task were indicative of cross-modal integration in segmental speech perception. The results suggest that, although KB’s ability to process visual information in segmental speech tasks is intact, he is nonetheless unable to process prosodic speech information in either the auditory or visual modality.

1. INTRODUCTION

Speech perception is not only an auditory process but can also involve the use of visual information.
Speech intelligibility in a noisy environment increases if the listener can see the talker's face [1]. Similarly, individuals who have hearing impairments can augment their residual hearing with speechreading or even substitute vision for audition [e.g., 2]. A well-known demonstration of the audiovisual nature of speech perception is the McGurk effect [3]. In this illusion, the perception of a perfectly audible consonant (e.g., /b/) is modified by the simultaneous visual articulation of another consonant (e.g., /g/). In each of these phenomena, auditory and visual information are combined; in the McGurk effect, the result is a unitary percept different from either stimulus presented (e.g., /d/).

Although the integration of auditory and visual information in segmental speech perception has been examined extensively, there has been no systematic investigation of audiovisual integration in prosodic speech perception. The results of several studies, however, indicate that participants perform better than chance at recognizing prosodic aspects of speech, such as emphatic stress (which word is stressed) and sentence intonation (question or statement), from visual cues alone [e.g., 4]. It has been argued that visual cues, such as movements of the head and eyebrows, may correlate with changes in voice pitch, loudness, or duration, thus providing a visual cue for prosodic speech perception [e.g., 5]. It may be that, as in segmental speech perception, these visual cues integrate with auditory information during prosodic speech perception.

The purpose of the present study was to examine whether the presence of visual prosody cues could improve speech prosody perception in an “amusic/aprosodic” patient (KB). KB was first identified due to an interesting dissociation in his ability to recognize familiar melodies [6].
Although he was impaired at recognizing instrumental melodies (i.e., melodies not associated with lyrics), his ability to recognize song melodies presented without their accompanying lyrics was intact. This dissociation could not be accounted for by differences in the acoustic or musical features of the two sets of melodies. It was argued that KB’s processing of melodic information was sufficient to activate a representation of associative information, including the song lyrics, enabling him to recognize the melody.

Speech prosody, like music, has a structured pattern of pitch, duration, and intensity, and it has been suggested that there may be some overlap in the processing of prosody and music [e.g., 7]. It was later confirmed that KB was also impaired on several tasks of prosody perception [unpublished data]. Although he was impaired on these tasks, it may be that auditory prosody information was nonetheless processed to some extent. We wanted to examine whether the presence of visual cues could influence KB’s perception of auditory prosody information, which would demonstrate audiovisual integration in prosodic speech perception. To do this, we tested KB on two prosody perception tasks

AVSP 2001 International Conference on Auditory-Visual Speech Processing 62