Tracking the Decision-Making Process in Multiple-Choice Assessment: Evidence
from Eye Movements
MARLIT ANNALENA LINDNER¹*, ALEXANDER EITEL², GUN-BRIT THOMA¹,
INGER MARIE DALEHEFTE¹, JAN MARTEN IHME¹ and OLAF KÖLLER¹
¹Leibniz Institute for Science and Mathematics Education, Kiel, Germany
²Knowledge Media Research Center, Tübingen, Germany
Summary: This study investigated students’ decision-making processes in a knowledge-assessing multiple-choice (MC) test using
eye-tracking methodology. More precisely, the gaze bias effect (more attention to more preferred options) and its relation to
domain knowledge were the focus of the study. Eye movements of students with high (HPK) and low (LPK) prior domain knowledge
were recorded while they solved 21 MC items. Afterwards, students rated every answer option according to their subjective preference.
As expected, both HPK and LPK students showed a gaze bias towards subjectively preferred answer options, whereby HPK students
spent more time on objectively correct answers. Furthermore, a fine-grained time-course analysis showed similar patterns of attention
distribution over time for both HPK and LPK students, when focusing on subjective preference levels. Thus, these data offer a new
perspective on knowledge-related MC item solving and provide evidence for the generalizability of the gaze bias effect across decision
tasks. Copyright © 2014 John Wiley & Sons, Ltd.
Multiple-choice (MC) questions are acknowledged as having
remarkably positive characteristics in educational assessment
due to their ease of use in practical application, especially in
terms of standardization and item scoring (Haladyna, 2004).
Therefore, MC items are frequently used to assess knowledge
in everyday educational settings as well as in prominent
large-scale studies such as the Programme for International
Student Assessment (PISA; OECD, 2013). Due to the
political power of such educational studies and the use of
high-stakes tests as an admission restriction measure in many
educational systems, developing high-quality assessments is
crucial and needs to be accompanied by solid research.
Accordingly, constructional aspects of item writing and
psychometrical issues have received much attention in the last
decades, whereas only a few studies have applied a cognitive
perspective to students’ demands and processing when they
solve MC items or related assessments. Such knowledge, however,
could be particularly useful in future research on item
characteristics and their interaction with students’ characteristics
(Embretson, 1999; Haladyna, Downing, & Rodriguez,
2002; Leighton, 2004) to possibly increase test fairness and
the validity of assessments. This is a central goal in the field
of cognitive diagnostic assessment (CDA).
By using insights and methodology (i.e. eye-tracking)
from cognitive psychology to explore students’ processing
of MC items in educational settings, the present study was
conducted to possibly support future efforts in CDA. In
particular, against the backdrop of theory and research on
knowledge-related cognitive processing (e.g. Canham &
Hegarty, 2010; Sweller, Van Merriënboer, & Paas, 1998)
and decision making (e.g. Glaholt, Wu, & Reingold, 2009;
Shimojo, Simion, Shimojo, & Scheier, 2003), in the present
study, we derived three hypotheses about students’ processing
and solving of MC items. To test the hypotheses, we
conducted fine-grained quantitative analyses of high prior
knowledge (HPK) and low prior knowledge (LPK) students’
eye movements during the item-solving process. The results
may contribute to a better understanding of how students
with different levels of domain knowledge process MC test
information. Furthermore, they provide tentative evidence
for the potential of using eye-tracking data to assess students’
domain knowledge levels and preferences for answer options,
indicating that eye-tracking data could be used as diagnostic
information in future educational practice.
The potential of eye-tracking in cognitive diagnostic
assessment
Cognitive diagnostic assessment involves combining various
methods and theoretical approaches (e.g. from cognitive
psychology) to develop and improve tests. For instance,
students’ processing data (e.g. verbal protocols) can be
consulted to examine and account for the construct and internal
validity of tests or to allow for a more valid diagnosis of
students’ abilities and needs in future instruction (Embretson
& Gorin, 2001; Leighton, 2004; Messick, 1989; Nichols,
1994). Another goal of CDA is to develop and test cognitive
models to explain the item-solving process and, thus, to
provide solid groundwork for more theory-driven test
construction and, consequently, higher test quality in the
future (e.g. Leighton & Gierl, 2007). Thus, taken together,
the main intention of CDA is to learn more about the cognitive
requirements of test items in educational assessment by
making use of statistical models, cognitive theories and
methods to gain insights into the item-solving process
(Healy, 2005; Leighton & Gierl, 2007; Nichols, Chipman,
& Brennan, 1995; Pellegrino, Chudowsky, & Glaser, 2001).
A sophisticated method to obtain such processing data in
testing situations is the use of eye-tracking technology (for
an introduction, see Duchowski, 2007; Holmqvist, Nyström
et al., 2011). This method is perfectly suited (not only) for
the context of MC assessment because it combines several
advantages that allow high-quality processing data to be
acquired: First, compared with traditional process-tracing
methods (e.g. think-aloud protocols), eye-tracking allows
students’ attention distribution to be recorded while they
*Correspondence to: Marlit Annalena Lindner, Leibniz Institute for Science
and Mathematics Education, Olshausenstraße 62, 24118 Kiel, Germany.
E-mail: mlindner@ipn.uni-kiel.de
Applied Cognitive Psychology, Appl. Cognit. Psychol. 28: 738–752 (2014)
Published online 7 August 2014 in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/acp.3060