COMMENT
The Falsifiability of Actual Decision-Making Models
Andrew Heathcote
University of Newcastle
E.-J. Wagenmakers
University of Amsterdam
Scott D. Brown
University of Newcastle
Jones and Dzhafarov (2014) provided a useful service in pointing out that some assumptions of modern
decision-making models require additional scrutiny. Their main result, however, is not surprising: If an
infinitely complex model were created by assigning its parameters arbitrarily flexible distributions, this
new model would be able to fit any observed data perfectly. Such a hypothetical model would be
unfalsifiable. This is exactly why such models have never been proposed in over half a century of model
development in decision making. Additionally, the main conclusion drawn from this result—that the
success of existing decision-making models can be attributed to assumptions about parameter distribu-
tions—is wrong.
Keywords: choice reaction time, diffusion model, linear ballistic accumulator, model falsifiability
Supplemental materials: http://dx.doi.org/10.1037/a0037771.supp
Modern decision-making models have been used to uncover
new insights about brain and behavior in dozens of different
paradigms requiring choice among two (e.g., Ratcliff & McKoon,
2008) or more (e.g., Busemeyer & Diederich, 2002) options. All
modern models share a common and simple structure: They as-
sume that evidence is gradually accumulated from the environment
and a decision is made whenever the evidence reaches a threshold
amount (e.g., the diffusion model, Ratcliff, 1978; Ratcliff & Tuer-
linckx, 2002; and the linear ballistic accumulator model [LBA],
Brown & Heathcote, 2008). In their simplest forms, the models
have three central parameters: the drift rate, which measures how
fast evidence accumulates; a threshold, which measures how much
evidence needs to accumulate before a decision is made; and
nondecision time, which measures how much time is taken up by
processes other than decision making, such as the time taken to
push a response button.
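As an illustration of this three-parameter structure, the sketch below simulates a single noisy evidence-accumulation trial. It is a minimal toy, not code from any published model: the parameter names, the noise scale, and the step size are our own illustrative assumptions.

```python
import random

def simulate_trial(drift, threshold, nondecision, dt=0.001, noise=1.0):
    """One noisy accumulation trial (illustrative sketch).

    Evidence drifts toward one of two boundaries at +/- threshold;
    the response time is the accumulation time plus nondecision time.
    All parameter names here are hypothetical, not from published code.
    """
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        # drift step plus Gaussian noise scaled for the time step
        evidence += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    choice = 1 if evidence > 0 else 0  # which boundary was reached
    return choice, t + nondecision

random.seed(1)
choice, rt = simulate_trial(drift=1.0, threshold=1.0, nondecision=0.2)
```

Raising `threshold` in this sketch produces slower, more cautious decisions; raising `drift` produces faster, more accurate ones, mirroring the roles the parameters play in the models discussed above.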
Over the past 50 years (since Stone, 1960), the most basic
versions of these models have been proven incomplete. For exam-
ple, the earliest version of the model, described above, successfully
predicted the general shape of response time distributions, the
trade-off between urgent versus cautious decisions, and even some
fine details of response time distributions such as hazard rates.
However, these early versions made such highly constrained pre-
dictions that they were unable to accommodate patterns of differ-
ing speed between incorrect and correct responses, which were
regularly observed in data when participants were told to respond
quickly (e.g., Ratcliff & Rouder, 1998). These limitations have
informed model development, and modern response time models
include two key elements that address these earlier limitations:
They assume that the drift rate varies randomly from decision to
decision and that the starting point of the evidence accumulation
process varies randomly from decision to decision. The distribu-
tions assumed for the trial-to-trial variability of the drift rate and
start point have always been simple forms with one additional free
parameter. The interested reader will find a detailed history of the
development of response time models and the implications for
model constraint and falsifiability in the supplemental materials to
this comment.¹
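The two variability assumptions just described can be illustrated with a minimal LBA-style sketch: on each trial the start point is drawn uniformly and the drift rate normally, each governed by one additional free parameter. All names and parameter values below are illustrative assumptions, not the published model code.

```python
import random

def lba_trial(v_correct, v_error, s, A, b, t0):
    """Illustrative linear-ballistic-accumulator-style trial.

    Each accumulator races ballistically (no within-trial noise) from a
    uniform start point on [0, A] to threshold b, with a drift rate drawn
    from a normal distribution with sd s; the first to finish wins.
    Parameter names are hypothetical sketches, not published code.
    """
    times = []
    for v in (v_correct, v_error):
        start = random.uniform(0.0, A)  # start-point variability
        drift = random.gauss(v, s)      # drift-rate variability
        if drift > 0:
            times.append((b - start) / drift)
        else:
            times.append(float("inf"))  # this accumulator never finishes
    winner = times.index(min(times))    # 0 = correct, 1 = error
    return winner, min(times) + t0

random.seed(2)
resp, rt = lba_trial(v_correct=1.0, v_error=0.4, s=0.3, A=0.5, b=1.0, t0=0.2)
```

In this sketch, start-point variability allows fast errors (an error accumulator that happens to start near threshold can win quickly), while drift-rate variability allows slow errors, the two patterns that motivated these assumptions.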
Jones and Dzhafarov’s (2014) Central Result:
Infinitely Complex Models Can Be Unfalsifiable
Jones and Dzhafarov’s (2014) main result extends earlier work
by Townsend (1976), Marley and Colonius (1992), and Dzhafarov
(1993). The key idea is that if one allows unbounded complexity
and freedom in the across-trial distribution of drift rates, the model
¹ The supplemental materials address in detail specific claims about (a) a lack of empirical support for the LBA and diffusion models, (b) the flexibility and testing of the LBA and diffusion models, (c) positions held by authors of evidence accumulation models about the status of different assumptions made by their models, and (d) the supposed special status of distributional assumptions over other assumptions.
Andrew Heathcote, School of Psychology, University of Newcastle;
E.-J. Wagenmakers, Department of Psychology, University of Amsterdam;
Scott D. Brown, School of Psychology, University of Newcastle.
Correspondence concerning this article should be addressed to Andrew Heathcote, School of Psychology, University of Newcastle, Callaghan 2308, New South Wales, Australia. E-mail: andrew.heathcote@newcastle.edu.au
Psychological Review, 2014, Vol. 121, No. 4, 676–678. © 2014 American Psychological Association. 0033-295X/14/$12.00 http://dx.doi.org/10.1037/a0037771