C. JayaMohan et al. / Elixir Comp. Sci. & Engg. 55A (2013) 13251-13254
Introduction
Face recognition has been studied for several decades; comprehensive reviews of related work can be found in [14], [21]. Even though 2-D face recognition methods have been actively studied, some inherent problems remain to be resolved for practical applications. It has been shown that the recognition rate can drop dramatically when head pose and illumination variations are too large, or when the face images involve expression variations. Pose, illumination, and expression variations are three essential issues in face recognition research. To date, relatively little research effort has been devoted to overcoming the expression variation problem, although a number of algorithms have been proposed to handle pose and illumination variations. To improve recognition accuracy, researchers have applied various dimension reduction techniques, including principal component analysis (PCA) [3], linear discriminant analysis (LDA) [13], independent component analysis (ICA) [1], discriminant common vector (DCV) [2], kernel PCA, kernel LDA [5], kernel DCV [10], etc. In addition, several learning techniques, such as the support vector machine (SVM), have been used to train classifiers for face recognition. Although applying an appropriate dimension reduction algorithm or a robust classification technique may yield more accurate recognition results, these methods usually require multiple training images for each subject. However, multiple training images per subject may not be available in practice.
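To illustrate the dimension-reduction step mentioned above, a minimal eigenfaces-style PCA classifier can be sketched as follows. This is a generic sketch in Python/NumPy, not the specific pipeline of any cited work; the function names are ours and chosen for illustration only.

```python
import numpy as np

def pca_project(train_faces, n_components):
    """Project flattened face images onto their top principal components.

    train_faces: (n_samples, n_pixels) array, one flattened image per row.
    Returns the mean face, the component basis, and the projected training set.
    """
    mean_face = train_faces.mean(axis=0)
    centered = train_faces - mean_face
    # SVD of the centered data yields the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]       # (n_components, n_pixels)
    projected = centered @ basis.T  # low-dimensional codes for the gallery
    return mean_face, basis, projected

def nearest_subject(test_face, mean_face, basis, projected):
    """Classify a test face by nearest neighbour in the reduced space."""
    code = (test_face - mean_face) @ basis.T
    dists = np.linalg.norm(projected - code, axis=1)
    return int(np.argmin(dists))
```

Note that this sketch already shows the limitation discussed above: with a single (neutral) training image per subject, the nearest-neighbour decision has no way to model expression-induced variation.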
This paper focuses mainly on robustness to expression and lighting variations. For example, a face verification system for a portable device should be able to verify a client at any time (day or night) and in any place (indoors or outdoors). Traditional approaches to this issue can be broadly classified into three categories: appearance-based, normalization-based, and feature-based methods. In direct appearance-based approaches, training examples are collected under different lighting conditions and used directly (i.e., without any lighting preprocessing) to learn a global model of the possible illumination variations.
Another line of work uses optical flow to compute the face warping transformation. Optical flow has been applied to the task of expression recognition [4], [8]. However, it is difficult to learn the local motion in the feature space that determines the expression change for each face, since different persons express emotions with different motion styles. Martinez [15] proposed a weighting method that independently weights local areas that are less sensitive to expression changes. Intensity variations due to expression may mislead the calculation of optical flow. A precise motion estimation method was proposed in [14] and can be further applied to expression recognition; however, this motion estimation does not account for intensity changes caused by different expressions.
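For reference, the classical unconstrained Lucas-Kanade formulation that underlies such flow-based methods can be sketched as below. This is a textbook brightness-constancy solver written for clarity rather than speed; it is not the constrained algorithm discussed later, and the intensity-change problem noted above is exactly what it does not handle.

```python
import numpy as np

def lucas_kanade_flow(img1, img2, win=5):
    """Dense Lucas-Kanade flow between two grayscale frames.

    Assumes brightness constancy: Ix*u + Iy*v + It = 0 within each window.
    Returns per-pixel horizontal (u) and vertical (v) displacements.
    """
    img1 = img1.astype(float)
    img2 = img2.astype(float)
    # Spatial derivatives of the first frame and the temporal derivative.
    Ix = np.gradient(img1, axis=1)
    Iy = np.gradient(img1, axis=0)
    It = img2 - img1
    half = win // 2
    u = np.zeros_like(img1)
    v = np.zeros_like(img1)
    for r in range(half, img1.shape[0] - half):
        for c in range(half, img1.shape[1] - half):
            ix = Ix[r-half:r+half+1, c-half:c+half+1].ravel()
            iy = Iy[r-half:r+half+1, c-half:c+half+1].ravel()
            it = It[r-half:r+half+1, c-half:c+half+1].ravel()
            A = np.stack([ix, iy], axis=1)
            # Least-squares solution of the window's flow constraints.
            flow, *_ = np.linalg.lstsq(A, -it, rcond=None)
            u[r, c], v[r, c] = flow
    return u, v
```

Because `It = img2 - img1` is attributed entirely to motion, any expression-induced intensity change (e.g., appearing teeth or wrinkles) is misread as displacement, which is the failure mode the constrained formulation is designed to address.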
In this paper, we focus on the problem of face recognition from a single 2-D face image with facial expression. Note that this paper is not about facial expression recognition. For many practical face recognition settings, such as identification from a passport photo at customs security or from an ID card photo, it is infeasible to gather multiple training images for each subject, especially with different expressions. Therefore, our goal is to solve the expressive face recognition problem under the condition that the training database contains only neutral face images, with one neutral face image per subject. In our previous work [11], we combined the advantages of the above two approaches: the unambiguous correspondence of feature point labeling and the flexible representation of optical flow computation. A constrained optical flow algorithm was proposed that can deal with position movements and intensity changes at the same time when establishing feature correspondences. With this algorithm, we can calculate the expressional motion from each neutral face in the database to the input test image and estimate the likelihood of such a facial expression movement.
Tele:
E-mail addresses: alphacse138@yahoo.com
© 2013 Elixir All rights reserved
Face recognition under expressions and lighting variations using artificial
intelligence and image synthesizing
C.JayaMohan, M.Saravana Deepak, M.L.Alphin Ezhil Manuel and D.C Joy Winnie Wise
Department of CSE, Alpha College of Engineering, Chennai, T.N, India.
ABSTRACT
In this paper, we propose an integrated face recognition system that is robust against facial
expressions by combining information from the computed intra-person optical flow and the
synthesized face image in a probabilistic framework. Making recognition more reliable
under uncontrolled lighting conditions is one of the most important challenges for practical
face recognition systems. We tackle this by combining the strengths of robust illumination
normalization. Our experimental results show that the proposed system improves the
accuracy of face recognition from expressional face images and lighting variations.
© 2013 Elixir All rights reserved.
ARTICLE INFO
Article history:
Received: 12 September 2012;
Received in revised form:
1 February 2013;
Accepted: 19 February 2013;
Keywords
Face recognition,
Constrained optical flow,
Artificial intelligence,
Synthesized image,
Masked synthesized image.
Elixir Comp. Sci. & Engg. 55A (2013) 13251-13254
Computer Science and Engineering
Available online at www.elixirpublishers.com (Elixir International Journal)