SIViP (2012) 6:159–169
DOI 10.1007/s11760-010-0177-5
ORIGINAL PAPER
Automatic facial expression recognition: feature extraction
and selection
Seyed Mehdi Lajevardi · Zahir M. Hussain
Received: 15 April 2009 / Revised: 4 August 2010 / Accepted: 4 August 2010 / Published online: 24 August 2010
© Springer-Verlag London Limited 2010
Abstract In this paper, we investigate feature extraction, feature selection, and classification methods for an automatic facial expression recognition (FER) system. The FER system is fully automatic and consists of the following modules: face detection, selection of the frame with maximum expression intensity, feature extraction, selection of optimal features, and classification. Face detection is based on the AdaBoost algorithm and is followed by the extraction of the frame with the maximum intensity of emotion using an inter-frame mutual information criterion. The selected frames are then processed to generate characteristic features using several methods: Gabor filters, log-Gabor filters, the local binary pattern (LBP) operator, higher-order local autocorrelation (HLAC), and a recently proposed method called HLAC-like features (HLACLF). The most informative features are selected using both wrapper and filter feature selection methods. Experiments on several facial expression databases compare the performance of these methods.
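The inter-frame mutual information criterion mentioned above can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: mutual information between two grayscale frames is estimated from their joint grey-level histogram, and the frame least similar (lowest mutual information) to the first frame, assumed here to be neutral, is taken as the peak of the expression. The number of histogram bins and the neutral-first-frame assumption are illustrative choices.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two grayscale images, estimated
    from their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = pxy > 0                              # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def peak_expression_frame(frames):
    """Index of the frame least similar (lowest MI) to the first,
    assumed-neutral frame of the sequence."""
    scores = [mutual_information(frames[0], f) for f in frames[1:]]
    return 1 + int(np.argmin(scores))
```

In this reading, identical frames yield high mutual information, while a frame whose appearance has changed substantially (e.g. at the apex of an expression) shares less information with the neutral frame and is therefore selected.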
Keywords Facial expression recognition · Emotion recognition · Mutual information · Higher-order autocorrelation · Gabor filters
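As a concrete example of one of the feature extractors listed in the abstract, the basic 3×3 LBP operator thresholds each pixel's eight neighbours against the centre pixel and packs the results into an 8-bit code; a histogram of these codes then serves as a texture feature vector. The sketch below is a common baseline variant, with an illustrative neighbour ordering, and is not necessarily the exact LBP configuration used in the paper.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: each interior pixel's 8 neighbours are thresholded
    against the centre and packed into an 8-bit code (borders skipped)."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                       # centre pixels
    # Neighbour offsets, clockwise from top-left; bit order is a convention.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalised histogram of LBP codes, usable as a feature vector."""
    h = np.bincount(lbp_image(img).ravel(), minlength=256).astype(float)
    return h / h.sum()
```

On a perfectly flat region every neighbour equals the centre, so all eight bits are set and the code is 255; texture and edges produce other codes, and the histogram summarises their distribution over the face region.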
1 Introduction
Facial expression is a visible manifestation of the affective
state, cognitive activity, intention, personality, and psycho-
pathology of a person; it not only expresses our emotions but
also provides important communicative cues during social
S. M. Lajevardi (B) · Z. M. Hussain
School of Electrical and Computer Engineering,
RMIT University, Melbourne, VIC, Australia
e-mail: smlajevardi@ieee.org
Z. M. Hussain
e-mail: zmhussain@ieee.org
interaction. As reported by psychologists [16], facial expression contributes 55% of the effect of a communicated message, while language and voice contribute 7% and 38%, respectively.
Automatic recognition of facial expression can therefore improve human-computer interaction (HCI) and even social interaction. Facial expression recognition (FER) can be useful in many areas of research and application. Studying how humans recognize emotions and use them to communicate information is an important topic in anthropology. The
emotion automatically estimated by a computer is considered more objective than one labelled by people, and it can be used in clinical psychology, psychiatry, and neurology.
Furthermore, expression recognition can be embedded into a face recognition system to improve its robustness. In a real-time face recognition system, where a series of images of an individual is captured, the FER module picks the image most similar to a neutral expression for recognition, since a face recognition system is normally trained on neutral-expression images. When only one image is available, the estimated expression can be used either to choose a suitable classifier or to apply some form of compensation. In a human-computer interface, expression is a potentially valuable input, especially in voice-activated control systems, so a FER module can markedly improve the performance of such systems.
improve the performance of such systems. Customer’s facial
expressions can also be collected by service providers as
implicit user feedback to improve their service. Compared to
a conventional questionnaire-based method, this should be
more reliable and furthermore, has virtually no cost.
Automatic classification of facial expressions consists of two stages: feature extraction and feature classification. Feature extraction is of key importance to the whole classification process: if inadequate features are used, even the best classifier may fail to achieve accurate recognition. The
two most common approaches to the facial feature extraction