818 IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, VOL. 15, NO. 2, APRIL 2014
Continuous Head Movement Estimator for
Driver Assistance: Issues, Algorithms,
and On-Road Evaluations
Ashish Tawari, Student Member, IEEE, Sujitha Martin, Student Member, IEEE, and
Mohan Manubhai Trivedi, Fellow, IEEE
Abstract—Analysis of a driver’s head behavior is an integral
part of a driver monitoring system. In particular, the head pose
and dynamics are strong indicators of a driver’s focus of attention.
Many existing state-of-the-art head dynamics analyzers are, however, limited to single-camera perspectives, which are susceptible to occlusion of facial features during spatially large head movements away from the frontal pose. Yet nonfrontal glances away from the road ahead are of special interest, since events critical to driver safety occur precisely during those times.
In this paper, we present a distributed camera framework for
head movement analysis, with emphasis on the ability to robustly
and continuously operate even during large head movements.
The proposed system tracks facial features and analyzes their
geometric configuration to estimate the head pose using a 3-D
model. We present two such solutions that additionally exploit the
constraints that are present in a driving context and video data to
improve tracking accuracy and computation time. Furthermore,
we conduct a thorough comparative study with different camera
configurations. For experimental evaluations, we collected a novel
head pose data set from naturalistic on-road driving in urban
streets and freeways, with particular emphasis on events inducing
spatially large head movements (e.g., merge and lane change). Our
analyses show promising results.
Index Terms—Accident prevention, active safety, distraction,
driver attention, driver behavior, driver gaze/glance, driver head
dynamics, naturalistic driving, situational awareness.
I. INTRODUCTION
In 2012 alone, there were 5.6 million police-reported motor vehicle crashes in the U.S., with over 33 000 fatalities,
which is a 3.3% increase from the previous year [1]. Driver
distraction (e.g., phone usage, talking, and eating) and inattention (drowsiness, fatigue, etc.) are among the prominent causes of these crashes. A comprehensive survey of automotive collisions, however, showed that a driver was 31% less likely to cause an injury-related collision when accompanied by one or more passengers who could alert the driver to unseen hazards [2].
Consequently, there is great potential for intelligent driver assistance systems (IDASs) that are human centric [3]–[6] to alert the driver to potential dangers or even briefly guide the driver through a critical situation. Monitoring driver behavior is hence becoming an increasingly important component of IDASs.

Manuscript received July 30, 2013; accepted October 22, 2013. Date of publication February 20, 2014; date of current version March 28, 2014. This work was supported by the University of California Discovery Grant Program and industry partners, particularly Audi AG and Volkswagen Electronics Research Laboratory. The Associate Editor for this paper was S. S. Nedevschi.
The authors are with the Laboratory for Intelligent and Safe Automobiles, University of California, San Diego, La Jolla, CA 92093 USA.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TITS.2014.2300870
Driver head and eye dynamics are of particular interest, as they can indicate where, or at what, the driver is looking. Traditionally, eye gaze and eye movement are considered good measures of an individual's focus of attention. Vision-based systems are commonly used for gaze tracking, as they provide a noncontact and noninvasive solution. However, such systems are highly susceptible to illumination changes, particularly in real-world driving scenarios. Eye-gaze tracking methods using corneal reflection with infrared illumination have primarily been used indoors [7] but are vulnerable to sunlight. The robustness requirement of IDASs has therefore motivated the use of head dynamics. Although a precise gaze direction provides useful information, the head pose and dynamics provide a coarse gaze direction, which is often sufficient in
a number of applications [8], [9]. Recent studies have used
head motion, along with lane position and vehicle dynamics,
to predict a driver’s intent to turn [10] and change lanes [11].
In fact, head motion cues, when compared with eye-gaze cues,
were shown to better predict lane change intent 3 s ahead of
the intended event [12]. A significant amount of research has
gone toward fatigue and attention monitoring using driver head
dynamics [13], [14]. In a more recent study, head dynamics has
been used to estimate a driver’s awareness of traffic objects by
learning which objects attract the driver’s gaze depending on
the situation [15].
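The system outlined in the abstract estimates head pose from the geometric configuration of tracked facial landmarks and a 3-D face model. As a minimal illustration of the underlying geometry (not the authors' actual pipeline), the sketch below uses hypothetical landmark coordinates and an orthographic camera: under a pure yaw rotation, the nose tip's horizontal offset from the eye midpoint encodes the yaw angle.

```python
import numpy as np

# Hypothetical 3-D facial landmark model, in millimetres, in a
# head-centered frame (x: right, y: down, z: toward the camera).
MODEL = {
    "left_eye":  np.array([-50.0,   0.0,  0.0]),
    "right_eye": np.array([ 50.0,   0.0,  0.0]),
    "nose_tip":  np.array([  0.0, -30.0, 40.0]),  # nose protrudes 40 mm
}

def yaw_matrix(theta):
    """Rotation about the vertical (y) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def project(points3d):
    """Orthographic projection: drop the depth (z) coordinate."""
    return {name: p[:2] for name, p in points3d.items()}

def estimate_yaw(landmarks2d):
    """Recover yaw from the nose tip's horizontal offset relative to the
    eye midpoint: under pure yaw, offset = nose_depth * sin(yaw)."""
    eye_mid_x = 0.5 * (landmarks2d["left_eye"][0]
                       + landmarks2d["right_eye"][0])
    offset = landmarks2d["nose_tip"][0] - eye_mid_x
    nose_depth = MODEL["nose_tip"][2]
    return np.arcsin(np.clip(offset / nose_depth, -1.0, 1.0))

# Simulate a head turned 25 degrees, observe the landmarks, recover yaw.
true_yaw = np.deg2rad(25.0)
rotated = {name: yaw_matrix(true_yaw) @ p for name, p in MODEL.items()}
estimated_yaw = estimate_yaw(project(rotated))
print(f"true: 25.0 deg, estimated: {np.rad2deg(estimated_yaw):.1f} deg")
```

A real system must of course solve the full perspective pose problem for all three rotation angles and handle landmark noise and occlusion; this toy example only shows why a rigid 3-D model makes head pose geometrically recoverable from 2-D landmark positions.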
Automatic head dynamics analysis remains a challenging
vision problem. Not only must a head movement analyzer be robust to ever-changing driving situations, but it must also remain continuously functional, in a nonselective manner, to earn the driver's trust. Specifically, such a system should have the following capabilities.
• Automatic: There should be no manual initialization, and
the system should operate without any human interven-
tion. This criterion precludes the use of pure tracking
approaches that measure the head pose relative to some
initial configuration.
• Fast: The system must estimate the head pose in real time while the vehicle is being driven.
• Wide operational range: The system should be able to
accurately and robustly handle spatially large and varying
speeds of head movements.
• Lighting invariant: The system must work in varying
lighting conditions (e.g., sunny and cloudy).
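One plausible way a distributed camera framework, as proposed in the abstract, can satisfy the wide-operational-range requirement is to select, per frame, the camera with the most frontal view of the face. The selection rule below is a hypothetical illustration (the function and camera names are invented for this sketch, and the paper's actual fusion strategy may differ):

```python
def select_camera(yaw_estimates):
    """Given per-camera head-yaw estimates (radians, relative to each
    camera's optical axis), pick the camera seeing the most frontal
    face, i.e., the smallest absolute yaw. Cameras in which no face
    was detected (e.g., due to occlusion) report None."""
    valid = {cam: abs(yaw) for cam, yaw in yaw_estimates.items()
             if yaw is not None}
    if not valid:
        return None  # no camera currently sees the face
    return min(valid, key=valid.get)

# During a leftward glance, a side-mounted camera may see the face
# nearly frontally while the front camera sees a strong profile view.
best = select_camera({"front": 0.9, "a_pillar": -0.1, "rearview": None})
print(best)
```

Selecting by smallest absolute yaw means that as the head sweeps away from one camera's view, another camera with a more frontal perspective takes over, which is what allows continuous operation during spatially large head movements.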
1524-9050 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.