Real Time 3D Facial Emotion Classification using a Digital Signal PIC Microcontroller

Ahmed FNAIECH¹, Sami BOUZAIANE², Mounir SAYADI¹, Nicolas LOUIS³ and Philippe GORCE³

¹ Université de Tunis, Labo SIME, ENSIT, Tunisia
² Naval Academy Menzel Bourguiba, 7050, Bizerte, Tunisia
³ Labo HANDIBIO, University of Toulon, Toulon, France

E-mails: ahmedfnaiech@hotmail.com, samibouzaiane@yahoo.fr, mounir.sayadi@esstt.rnu.tn, nicolas.louis83@gmail.com, gorce@univ-tln.fr
Abstract: The human face is viewed as a mirror that reflects a person's inner feelings; it allows us to detect each person's needs, study his behaviour and requirements, identify his tastes and predict his reactions through facial features. Indeed, face detection, classification and characterization are research areas that have received considerable interest over the past twenty years. However, most published work in this field studies the emotions of a person in an upright posture, meaning that the person must face the camera perfectly. The innovation of the present work is to improve the detection of emotions for different face orientations (to the right, left, up and down). Eighteen feature points that characterize the human face are first computed. An optimization step is then proposed: a set of optimal distances between the facial points is extracted as a new set of optimized emotion descriptors, using a statistical characterization criterion based on the ratio of the intra-class variance. An emotion classification experiment is carried out using a multilayer neural network implemented on a digital signal processing microcontroller.
Key words: emotion detection, feature optimization, characterization degree, neural network classification.
1. Introduction
Facial emotion recognition is a rich field currently undergoing considerable development and rapid expansion, since it concerns the cognitive state of a human being, his behaviour and his reactions. Many works have addressed facial emotion detection and classification in both the 2D and 3D domains, but most of them have developed facial recognition algorithms, or sensed facial emotions, under standard conditions, i.e. detecting the emotion of a person in an upright posture. In other words, the person must face the camera perfectly, which can be considered a hindering constraint. It is often useful to know which facial expressions correspond to each specific emotion. The classification of facial emotions allows us to extract important information from the human face and to describe the cognitive state of a person well. The shapes and facial movements seen in a 2D view give important information about the expressions of the human face. However, such information may vary with lighting conditions, which imposes serious obstacles for many 2D facial analysis methods. In this context, exploring and exploiting 3D geometric information is necessary for solving various research issues in facial recognition.
In [1], A. Metallinou et al. have proposed an audio-visual emotion classification using hidden Markov models (HMM). In [3], A. Dhall et al. have proposed an emotion recognition method using pyramid of histogram of gradients (PHOG) and local phase quantization (LPQ) features, with an SVM for classification. I. Mpiperis et al. [17] have tested an approach that generates a 3D model of the face, which is deformed elastically to fit the facial surface; its points were then used as a basis for classification. H. Tang and R. Niese et al. [19, 20] have proposed methods based on pattern recognition techniques for the extraction of image features. In their work, camera models were applied with an initial recording stage in which the face of the person was automatically reconstructed from stereo images. The geometrical characteristics were then measured and normalized using photogrammetry techniques. J. Wang et al. [21] have studied the utility of the geometrical shapes of the face for representing and recognizing facial expressions in 3D. They also proposed a new approach to extract primitive characteristics from facial emotions, and then applied the characteristic distribution in order to classify the expressions of the face. A. Maalej et al. [22] have used an approach based on the shape analysis of local "patches" extracted from 3D face models. Quantitative similarity measures were then obtained and used as input parameters for multi-class classification algorithms. M. Lyons et al. [23] have presented a method in which facial expressions are coded using multi-orientation, multi-resolution Gabor filters that are topographically aligned roughly with the face. Their results showed that it is possible to classify facial expressions with Gabor filters.
The novelty of the present work is to improve the detection of emotions under informal conditions. In fact, we consider the face viewed from various angles, with a gradual variation of the deviation angle ranging from 0 degrees up to 30 degrees. Each face is characterized by a set of 18 specific points, which allow the recognition of the human face through the computation of distances between these points. Seven different emotions are taken into account in this study, such as joy, fear, sadness, surprise, anger and
2018 IEEE International Conference on Image Processing, Applications and Systems (IPAS)
978-1-7281-0247-4/18/$31.00 ©2018 IEEE
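As a rough illustration of the distance-based descriptors described above (the paper does not give its exact feature-extraction code, so this is a hedged sketch): the 18-point landmark set comes from the text, while the choice of all pairwise Euclidean distances as the candidate descriptor pool, from which an optimal subset would later be selected, is an assumption for this example.

```python
import numpy as np

def pairwise_distances(points):
    """Euclidean distances between all pairs of facial feature points.

    points: (N, 3) array of 3D landmark coordinates (here N = 18,
    as in the paper). Returns a flat vector of N*(N-1)/2 distances,
    one candidate emotion descriptor per point pair.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    dists = []
    for i in range(n):
        for j in range(i + 1, n):
            # Straight-line distance between landmarks i and j.
            dists.append(np.linalg.norm(points[i] - points[j]))
    return np.array(dists)

# 18 landmarks -> 18 * 17 / 2 = 153 candidate distances.
# Random coordinates stand in for real landmark positions here.
landmarks = np.random.rand(18, 3)
descriptors = pairwise_distances(landmarks)
print(descriptors.shape)  # (153,)
```

In the paper's pipeline, a statistical criterion based on the intra-class variance ratio would then rank these 153 candidates so that only the most discriminative distances are fed to the multilayer neural network.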