Research Report
Vocal emotion processing in Parkinson's disease:
Reduced sensitivity to negative emotions
Chinar Dara, Laura Monetta, Marc D. Pell
School of Communication Sciences and Disorders, McGill University, Montréal, Qc, Canada
Article history: Accepted 16 October 2007; Available online 22 October 2007.

Abstract
To document the impact of Parkinson's disease (PD) on communication and to further
clarify the role of the basal ganglia in the processing of emotional speech prosody, this
investigation compared how PD patients identify basic emotions from prosody and judge
specific affective properties of the same vocal stimuli, such as valence or intensity. Sixteen
non-demented adults with PD and 17 healthy control (HC) participants listened to
semantically-anomalous pseudo-utterances spoken in seven emotional intonations
(anger, disgust, fear, sadness, happiness, pleasant surprise, neutral) and two distinct
levels of perceived emotional intensity (high, low). On three separate occasions, participants
classified the emotional meaning of the prosody for each utterance (identification task),
rated how positive or negative the stimulus sounded (valence rating task), or rated how
intensely the emotion was expressed by the speaker (intensity rating task). Results indicated
that the PD group was significantly impaired relative to the HC group for categorizing
emotional prosody and showed a reduced sensitivity to valence, but not intensity, attributes
of emotional expressions conveying anger, disgust, and fear. The findings are discussed in
light of the possible role of the basal ganglia in the processing of discrete emotions,
particularly those associated with negative vigilance, and of how PD may affect the
sequential processing of prosodic expressions.
© 2007 Elsevier B.V. All rights reserved.
Keywords: Prosody; Basal ganglia; Affect; Pragmatic; Speech perception
1. Introduction
In speech communication, listeners attend to relative
changes in pitch, duration, and loudness, or speech prosody,
to infer the emotions or affective state of a speaker (Banse
and Scherer, 1996; Scherer, 1986). Recent interest in the
neurocognitive processing of emotions from a speaker's voice
indicates that these abilities are governed by a distributed
neural network involving cortical and subcortical structures
(Pell, 2006; Schirmer and Kotz, 2006). For example, many
reports have sought to elaborate the role of cortical regions,
such as right temporal and bilateral prefrontal areas, at
different stages of processing emotional prosody (Beaucousin
et al., 2007; Wildgruber et al., 2005a,b). Recent studies have
also drawn attention to the involvement of subcortical
structures in vocal emotion processing, such as the amygdala
(Sander et al., 2005; Scott et al., 1997) and especially the basal
BRAIN RESEARCH 1188 (2008) 100–111
⁎ Corresponding author. McGill University, Faculty of Medicine, School of Communication Sciences and Disorders, 1266, avenue des Pins Ouest, Montreal, Qc, Canada H3G 1A8. Fax: +1 514 398 8123.
E-mail address: marc.pell@mcgill.ca (M.D. Pell).
URL: http://www.mcgill.ca/pell_lab/ (M.D. Pell).
doi:10.1016/j.brainres.2007.10.034