Testing Assumptions in Computational Theories of Aphasia
Wheeler Ruml, Alfonso Caramazza, Jennifer R. Shelton, and Doriana Chialant
Harvard University
We present the performances of 13 aphasic patients on a picture-naming task and attempt to model these data using computer simulations. We systematically manipulate the assumptions underlying several interactive, two-step, spreading-activation models, including the proposals of Dell et al. (1997), Foygel and Dell (2000), and Rapp and Goldrick (in press). Using a numerical regression procedure and multiple views of each model's possible output, we find that peripheral pragmatic assumptions play a role equal to that of theoretically more central model components. None of the models we consider can account for all of the patients, leading us to conclude that one or more of the assumptions underlying each model is flawed. We argue that there are strong limitations on the conclusions that can legitimately be drawn from such simulation studies but that close analysis of individual patients can allow sound testing of potentially more accurate models. © 2000 Academic Press
Key Words: computational modeling; aphasia; lexical access; computational neuropsychology.
The promise of computational models of human language processing is widely recognized. Not only does the act of constructing a simulation force one to specify one's theory precisely, but the resulting model can be quantitatively tested against empirical data. Furthermore, the ease of simulation allows one to experiment with models that deviate from normal behavior and thereby to form theories about the interactions between brain damage and language processing. Data from aphasic patients can then be used to test the adequacy of the combined model of normal processing and damage in aphasia. One could even imagine using simulation results to provide insight into the breakdown occurring in specific patients. Examples of recent computational investigations of low-level language processing include the word-reading models of Plaut et al. (1996) and Shallice et al. (1995) and the word production models of Levelt et al. (1999) and Dell et al. (1997).
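To make the class of model concrete: in an interactive two-step spreading-activation account of picture naming, semantic features first activate word nodes (with feedback between levels), one word is selected, and activation then spreads to position-specific phoneme nodes. The following is a minimal sketch under invented assumptions; the three-word lexicon, the feature sets, and every parameter value are illustrative only, not the network or parameters of Dell et al. (1997):

```python
import random

# Toy two-step interactive spreading-activation naming model.
# Lexicon, features, and parameters are invented for illustration.
FEATURES = {
    "cat": {"feline", "pet", "small"},
    "dog": {"canine", "pet", "small"},
    "rat": {"rodent", "pest", "small"},
}
PHONEMES = {"cat": ["k", "ae", "t"],
            "dog": ["d", "o", "g"],
            "rat": ["r", "ae", "t"]}

def name_picture(target, steps=8, rate=0.1, decay=0.5, noise=0.0, seed=0):
    rng = random.Random(seed)
    sem = {f: 0.0 for fs in FEATURES.values() for f in fs}
    word = {w: 0.0 for w in FEATURES}
    # position-specific phoneme nodes, keyed by (slot, phoneme)
    phon = {(i, p): 0.0 for w in FEATURES for i, p in enumerate(PHONEMES[w])}

    # Step 1: jolt the target's semantic features; activation spreads
    # down to word nodes and feeds back up (the interactive part).
    for f in FEATURES[target]:
        sem[f] = 1.0
    for _ in range(steps):
        new_word = {w: word[w] * decay
                       + rate * sum(sem[f] for f in FEATURES[w])
                       + rng.gauss(0.0, noise)
                    for w in word}
        sem = {f: sem[f] * decay
                  + rate * sum(word[w] for w in FEATURES if f in FEATURES[w])
               for f in sem}
        word = new_word
    chosen = max(word, key=word.get)  # word (lemma) selection

    # Step 2: jolt the selected word; spread to phoneme nodes and
    # select the most active phoneme in each positional slot.
    word = {w: 1.0 if w == chosen else 0.0 for w in FEATURES}
    for _ in range(steps):
        phon = {(i, p): phon[(i, p)] * decay
                   + rate * sum(word[w] for w in FEATURES
                                if i < len(PHONEMES[w]) and PHONEMES[w][i] == p)
                   + rng.gauss(0.0, noise)
                for (i, p) in phon}
    out = []
    for i in range(len(PHONEMES[chosen])):
        slot = {p: a for (j, p), a in phon.items() if j == i}
        out.append(max(slot, key=slot.get))
    return "".join(out)
```

With `noise` at zero this toy network names every picture correctly; raising `noise` (or weakening `rate` and `decay`) produces semantic, formal, and nonword errors, which is roughly the lesioning strategy that models of this family use to fit patient error patterns.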
In practice, a computational model is often constructed with the aim of testing claims about one or two specific theoretical issues, such as the role of interaction between levels of representation during word production. But the collection of theoretical principles that one wishes to put to empirical test does not usually describe a complete mechanism suitable for simulation. Details beyond the scope of the theory at stake must be filled in, such as the exact semantic relations between words in the model. And details supposedly within the purview of the theory must often be left out for the sake of reducing computation time, such as the full inventory of a typical human lexicon. In this paper, we systematically examine the role played by these seemingly minor assumptions by evaluating three closely related models of word production. We present data from thirteen aphasic patients on a picture-naming task and attempt to account for their performance using each of the
We thank Gary Dell, Randi Martin, and two anonymous reviewers for their many helpful comments, Nadine Martin for help in scoring some patient responses, Brenda Rapp and Matthew Goldrick for providing detailed information regarding their model's lexicon, Angelos Kottas for running some preliminary experiments, and Michele Miozzo and the Harvard Cognitive Neuropsychology Laboratory for many stimulating discussions regarding this research. Support was provided in part by the National Science Foundation under grants CDA-94-01024 and IRI-9618848, and by the National Institutes of Health under grant NS-22201.

Please address correspondence concerning this article to Wheeler Ruml, Maxwell Dworkin Laboratory, Harvard University, 33 Oxford Street, Cambridge, MA 02138. E-mail: ruml@eecs.harvard.edu.
Journal of Memory and Language 43, 217–248 (2000)
doi:10.1006/jmla.2000.2730, available online at http://www.idealibrary.com
Copyright © 2000 by Academic Press. All rights of reproduction in any form reserved.