Abstract—Human emotions are deeply intertwined with cognition: they direct human cognitive processes and processing strategies. The goal of this work is to design a model capable of classifying the uncertainty, contradiction and cognitive nature of emotions. To achieve this, a 3D cognitive model is designed, which enhances the classification of emotions produced by reinforcing stimuli. In this model the dimensions represent the positive reinforcers, the negative reinforcers and the emotion content present. A positive reinforcer increases the probability of emission of the response on which it is contingent, whereas a negative reinforcer increases the probability of emission of a response that causes the reinforcer to be omitted. This model increases the number of emotions that can be classified: at present it can classify 22 emotions, subject to the availability of a facial expression database, and it has the flexibility to accommodate more. For emotion (pattern) identification, pose and illumination effects are removed using Gabor wavelet transforms, and the dimensionality is reduced by finding the principal components (PCA). The resulting component vector is used to train a neural network. Tests show a recognition accuracy of 85.7% on the Cohn-Kanade Action Unit Coded Facial Expression Database. Real-time identification makes it possible to apply the recognized emotions to a real-time audio player: a pervasive, ubiquitous environment that senses one's mental state and plays an appropriate musical track to maintain a positive emotional state or ease a negative one.
Index Terms—Cognitive model, emotions, 3D architecture, reinforcing stimuli.
Manuscript received November 9, 2006.
Maringanti Hima Bindu is an Assistant Professor with the Indian Institute of Information Technology, Allahabad, India 211011 (phone: +91-532-2922096, +91-9335070621; fax: +91-532-2430006; e-mail: mhimabindu@iiita.ac.in).
Priya Gupta was with the Indian Institute of Information Technology, Allahabad, India 211011 as a postgraduate student.
Prof. U. S. Tiwary is with the Indian Institute of Information Technology, Allahabad, India 211011, also as Dean of Academic Affairs (e-mail: ust@iiita.ac.in).
I. INTRODUCTION
Cognitive science is an interdisciplinary field that studies mental states and processes such as thinking, remembering, perception, learning, consciousness and emotion. Of all the kinds of cognition, emotion holds the key to human social behavior. Identification and classification of emotions by computers has been a research area since the time of Charles Darwin. It can be performed with the help of many quantifiable human attributes, such as facial expressions, facial images, blood pressure measurements and pupillary dilation [1].
Facial expression recognition poses great challenges for researchers: although much has been done in this area, much more remains. It is not a purely theoretical field but finds practical applications in many domains. Coupled with human psychology and neuroscience, it can bridge the divide between the more abstract domain of psychology and the more precise domain of computation.
The characteristic feature points [10] of a face are located at the eyebrows, eyelids, cheeks, lips, chin and forehead. Once extracted from these regions, the feature points help in recognizing the various expressions of a face. The first and most important step in feature detection is to track the position of the eyes. Thereafter, the symmetry of the face with respect to the eyes is used to track the remaining features: eyebrows, lips, chin, cheeks and forehead. Splitting the face into two halves eases the process further. This paper uses discrete Hopfield networks as the basis for pre-processing of the face signal and subsequent feature extraction.
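As a minimal illustration of the discrete Hopfield network mentioned above (this is not the authors' implementation; the pattern length, Hebbian learning rule and synchronous update schedule are assumptions for the sketch), a network can store a bipolar pattern and restore it from a corrupted probe:

```python
import numpy as np

def train_hopfield(patterns):
    """Build a Hopfield weight matrix from bipolar (+1/-1) patterns via the Hebbian rule."""
    n = patterns[0].size
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Synchronously update the state until it stops changing (or steps run out)."""
    for _ in range(steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store one 8-element pattern, then recover it from a probe with one flipped bit.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hopfield([pattern])
probe = pattern.copy()
probe[0] = -probe[0]            # corrupt one component
restored = recall(W, probe)
print(np.array_equal(restored, pattern))  # True
```

In a pre-processing role, such a network acts as an associative memory: a noisy or partially occluded feature pattern converges to the nearest stored prototype before further feature extraction.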
A number of techniques have been proposed and are in use in this field, including Bayesian classification [11], the Gabor wavelet transform [12], Principal Component Analysis, HMMs [13], line-based caricatures [14] and optical flow analysis [15]. However, they have an inherent complexity that makes them opaque and computationally expensive.
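To make the Gabor-plus-PCA pipeline concrete, the following sketch filters toy images with a single hand-built Gabor kernel and projects the responses onto their principal components (all parameter values, image sizes and the single filter orientation are illustrative assumptions, not the settings used in this paper):

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a 2-D Gabor kernel: Gaussian envelope times an oriented cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(xr**2 + (-x * np.sin(theta) + y * np.cos(theta))**2)
                  / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def filter_image(img, kernel):
    """Valid-mode 2-D correlation with explicit loops (numpy only, for clarity)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def pca_reduce(X, k):
    """Project the rows of X onto their first k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy 16x16 patches standing in for aligned face crops.
rng = np.random.default_rng(0)
faces = rng.standard_normal((10, 16, 16))
kernel = gabor_kernel()
features = np.array([filter_image(f, kernel).ravel() for f in faces])
reduced = pca_reduce(features, k=5)   # compact vectors for a neural-network classifier
print(reduced.shape)                  # (10, 5)
```

A real system would use a bank of Gabor filters at several orientations and scales; the point here is only the shape of the pipeline: filter responses are flattened into feature vectors, and PCA reduces them to a size a neural network can train on.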
Apart from emotion classification and identification, responses from computers are also being generated to make human-computer interaction livelier. The field of affective computing has emerged for this kind of interaction.
Cognitive Model-Based Emotion Recognition From Facial Expressions For Live Human Computer Interaction
Maringanti Hima Bindu, Priya Gupta, and U. S. Tiwary, Senior Member, IEEE
Proceedings of the 2007 IEEE Symposium on Computational Intelligence in Image and Signal Processing (CIISP 2007)