Phonological Mismatch Makes Aided Speech
Recognition in Noise Cognitively Taxing
Mary Rudner,1,2 Catharina Foo,2 Jerker Rönnberg,1,2 and Thomas Lunner1,3,4
Objectives: The working memory framework for
Ease of Language Understanding predicts that
speech processing becomes more effortful, thus re-
quiring more explicit cognitive resources, when
there is mismatch between speech input and pho-
nological representations in long-term memory. To
test this prediction, we changed the compression
release settings in the hearing instruments of expe-
rienced users and allowed them to train for 9 weeks
with the new settings. After training, aided speech
recognition in noise was tested with both the
trained settings and orthogonal settings. We postu-
lated that training would lead to acclimatization to
the trained setting, which in turn would involve
establishment of new phonological representations
in long-term memory. Further, we postulated that
after training, testing with orthogonal settings
would give rise to phonological mismatch, associ-
ated with more explicit cognitive processing.
Design: Thirty-two participants (mean age 70.3 years,
SD 7.7) with bilateral sensorineural hearing loss
(pure-tone average 46.0 dB HL, SD 6.5), bilater-
ally fitted for more than 1 year with digital, two-
channel, nonlinear signal processing hearing in-
struments and chosen from the patient population
at the Linköping University Hospital were randomly assigned to 9 weeks of training with new, fast
(40 ms) or slow (640 ms), compression release set-
tings in both channels. Aided speech recognition in
noise performance was tested according to a design
with three within-group factors: test occasion (T1,
T2), test setting (fast, slow), and type of noise (un-
modulated, modulated) and one between-group fac-
tor: experience setting (fast, slow) for two types of
speech materials—the highly constrained Hager-
man sentences and the less-predictable Hearing in
Noise Test (HINT). Complex cognitive capacity was
measured using the reading span and letter moni-
toring tests.
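For concreteness, the crossing of factors in the design above can be enumerated in a short sketch (Python here is an illustrative choice, not part of the study); the match/mismatch labelling follows from whether the post-training (T2) test setting equals the trained experience setting:

```python
from itertools import product

# Within-group factors (2 x 2 x 2 cells per group), as in the Design.
occasions = ["T1", "T2"]
test_settings = ["fast", "slow"]
noises = ["unmodulated", "modulated"]
# Between-group factor: the compression release setting trained for 9 weeks.
experience_settings = ["fast", "slow"]

conditions = []
for exp in experience_settings:
    for occ, test, noise in product(occasions, test_settings, noises):
        conditions.append({
            "experience": exp,
            "occasion": occ,
            "test_setting": test,
            "noise": noise,
            # After training (T2), testing with the setting orthogonal
            # to the trained one constitutes the mismatch condition.
            "mismatch": occ == "T2" and test != exp,
        })

print(len(conditions))                          # 2 groups x 8 cells = 16
print(sum(c["mismatch"] for c in conditions))   # 4 mismatch cells
```

Each speech material (Hagerman sentences, HINT) was tested across this full set of cells.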
Prediction: We predicted that speech recognition in
noise at T2 with mismatched experience and test
settings would be associated with more explicit cog-
nitive processing and thus stronger correlations with
complex cognitive measures, as well as poorer perfor-
mance if complex cognitive capacity was exceeded.
Results: Under mismatch conditions, stronger cor-
relations were found between performance on
speech recognition with the Hagerman sentences
and reading span, along with poorer speech recog-
nition for participants with low reading span
scores. No consistent mismatch effect was found
with HINT.
Conclusions: The mismatch prediction generated by
the working memory framework for Ease of Lan-
guage Understanding is supported for speech rec-
ognition in noise with the highly constrained Hag-
erman sentences but not the less-predictable HINT.
(Ear & Hearing 2007;28:879–892)
The working memory framework for Ease of Language Understanding (ELU) (Rönnberg 2003a) predicts that speech understanding requires more mental
effort, or cognitive resources, when speech input can-
not be easily matched to memory representations.
This is referred to as mismatch. We postulate that if
the signal processing parameters in the hearing in-
strument of an experienced user are changed, a mis-
match situation will arise, because the acoustic signal
delivered by the hearing instrument will no longer
match established memory representations. To test
the mismatch hypothesis, we investigated aided
speech recognition in noise performance under match
and mismatch conditions and its relationship with
cognitive processing.
Speech Understanding and Explicit Cognitive
Processing
In an optimum listening situation, the speech
signal is processed effortlessly and automatically.
This means that the cognitive processing involved is
largely unconscious and implicit. However, listening
situations are often suboptimum, which means that
implicit cognitive processes are insufficient to un-
lock the meaning in the speech stream. Resolving
ambiguities among previous speech elements and
constructing expectations of prospective exchanges
in the dialogue are examples of the complex pro-
cesses that may arise. These processes are effortful
and conscious and thus involve explicit cognitive
processing.
The listening situation may be suboptimum be-
cause cognitive resources are engaged elsewhere,
1 The Swedish Institute for Disability Research, 2 Departments of Behavioural Sciences and Learning, and 3 Technical Audiology, Linköping University, Sweden; and 4 Oticon A/S, Research Centre Eriksholm, Snekkersten, Denmark.
0196/0202/07/2806-0879/0 • Ear & Hearing • Copyright © 2007 by Lippincott Williams & Wilkins • Printed in the U.S.A.