Overlooking: The nature of gaze behavior and anomaly
detection in expert dentists
Nora Castner
Perception Engineering, University of
Tübingen
Tübingen, Germany
castnern@informatik.uni-tuebingen.de
Solveig Klepper
Computer Science Institute,
University of Tübingen
Tübingen, Germany
solveig.klepper@student.uni-tuebingen.de
Lena Kopnarski
Computer Science Institute,
University of Tübingen
Tübingen, Germany
lena.kopnarski@student.uni-tuebingen.de
Fabian Hüttig∗
University Hospital Tübingen
Tübingen, Germany
fabian.huettig@med.uni-tuebingen.de
Constanze Keutel2
University Hospital Tübingen
Tübingen, Germany
constanze.keutel@med.uni-tuebingen.de
Katharina Scheiter
Leibniz-Institut für Wissensmedien
Tübingen, Germany
k.scheiter@iwm-tuebingen.de
Juliane Richter
Leibniz-Institut für Wissensmedien
Tübingen, Germany
j.richter@iwm-tuebingen.de
Thérése Eder
Leibniz-Institut für Wissensmedien
Tübingen, Germany
tf.eder@iwm-tuebingen.de
Enkelejda Kasneci
Perception Engineering, University of
Tübingen
Tübingen, Germany
enkelejda.kasneci@uni-tuebingen.de
ABSTRACT
The cognitive processes that underlie expert decision making in
medical image interpretation are crucial to understanding what
constitutes optimal performance. Often, when an anomaly goes
undetected, the exact nature of the false negative is not fully understood.
This work examines 24 experts' performance (true positives
and false negatives) during an anomaly detection task for 13 images
and the corresponding gaze behavior. Using a drawing and an
eye-tracking experimental paradigm, we compared experts' target
anomaly detection in orthopantomographs (OPTs) against their
own gaze behavior. We found a relationship between the number
of anomalies detected and the number of anomalies looked at. However,
roughly 70% of the anomalies that were not explicitly marked in
the drawing paradigm were nevertheless looked at. We therefore
examined how often each anomaly was glanced at. When not explicitly
marked, target anomalies were more often glanced at only once or twice.
In contrast, when targets were marked, the number of glances was
higher. Furthermore, since this behavior was not consistent across all
images, we attribute these differences to image complexity.
∗Department of Prosthodontics
2Department of Radiology, Center of Dentistry, Oral Medicine and Maxillofacial Surgery
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from permissions@acm.org.
MCPMD’18, October 16, 2018, Boulder, CO, USA
© 2018 Association for Computing Machinery.
ACM ISBN 978-1-4503-6072-2/18/10...$15.00
https://doi.org/10.1145/3279810.3279845
CCS CONCEPTS
· Applied computing → Psychology; Education; · Human-
centered computing → Interactive systems and tools; Visualiza-
tion design and evaluation methods;
KEYWORDS
Remote Eye Tracking, Medical image interpretation, Cognitive Mod-
elling, Expertise
ACM Reference Format:
Nora Castner, Solveig Klepper, Lena Kopnarski, Fabian Hüttig, Constanze
Keutel, Katharina Scheiter, Juliane Richter, Thérése Eder, and Enkelejda
Kasneci. 2018. Overlooking: The nature of gaze behavior and anomaly
detection in expert dentists. In Workshop on Modeling Cognitive Processes
from Multimodal Data (MCPMD’18), October 16, 2018, Boulder, CO, USA.
ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3279810.3279845
1 INTRODUCTION
Expertise in any domain is what many strive for, and it is known that
these skills are established through practice. Yet there are still
mechanisms that are not fully understood, chiefly how experts
process their visual input such that their domain knowledge is
effectively applied.
In general, experts are not easily available due to time and work
constraints. Therefore, the majority of the literature measures expertise
with small samples of experts, and such small samples can lead to
an insufficient understanding of expertise. In the literature review
by Gegenfurtner et al. [4], across all expertise domains evaluated,
mean expert sample sizes ranged from six to 17 experts, with the
medical profession having approximately eight experts. More recently,
van der Gijp et al. [10] provided a similar review focused
solely on radiology. Of the 26 studies evaluated in the meta-analysis,