Person-Specific Face Recognition in Unconstrained Environments:
a Combination of Offline and Online Learning
Bangpeng YAO, Haizhou AI
Computer Science and Technology Department,
Tsinghua University, Beijing 100084, China
Shihong LAO
Sensing and Control Technology Laboratory,
Omron Corporation, Kyoto 619-0283, Japan
Abstract
This paper studies face recognition and person-specific
face image retrieval in unconstrained environments. The
proposed method consists of two parts: offline and online
learning. In the offline stage, we take advantage of both
global and local features in a Bayesian framework for generic
face recognition. In the online stage, the offline-learned
classifier is adapted to the query images of a given person,
from which a person-specific face image retriever is obtained.
Our method is applied to the “Labeled Faces in the Wild”
dataset, which is more realistic than typical face recognition
datasets. We show that the combination of offline and online
learning yields very promising results.
1. Introduction
With the popularity of the Internet and digital cameras,
the number of available images is increasing explosively.
Nowadays people frequently want to retrieve images depict-
ing a particular person from a large image pool, where the
images may come from the web or from family albums. The
objective of this work is to develop a recognition-based re-
trieval method that achieves this goal automatically. More-
over, given a small number of query images of a particular
identity, we want to build a person-specific face image re-
triever that is especially effective for that person.
The proposed problem, face image retrieval, has many po-
tential applications, such as face-oriented web search, fam-
ily photo album management, and content-based video
browsing. It is also one of the most challenging problems in
the computer vision community. Human faces captured in
the real world vary greatly (Figure 1) in many aspects, in-
cluding illumination, pose, expression, and make-up. More-
over, to the best of our knowledge, little previous work has
been devoted to building a person-specific face image re-
triever, mainly because usually only a small number of
query images are available for a given identity, which makes
person-specific face modeling very challenging.
Figure 1. Examples of face images in unconstrained environments.
The source images are from the “Labeled Faces in the Wild”
database [5].
In this paper, we solve this problem with a two-stage
method that combines offline and online learning. In the of-
fline stage, a large number of face images are available, so
we use statistical methods, as in traditional face recog-
nition approaches, to obtain a set of features and their as-
sociated parameters. These features are combined into
a “Generic Face RecogNizer” (GFRN). In order to build a
GFRN that works well in unconstrained environments, we
extract both global and local features. The global features
are obtained by applying Regularized LDA [10] to the im-
ages after Gabor filtering, and the local features are based
on Local Gabor Binary Patterns (LGBP) [22, 21], for which
we design a novel Point-to-Point Matching (PPM) method
for similarity measurement. The outputs of the global and
local features are fused in a Bayesian framework to build
the GFRN.
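The paper does not give the fusion formula at this point, but a minimal sketch of Bayesian score fusion is the naive-Bayes combination: if the global and local similarity scores are assumed conditionally independent given the same/different-person label, their log-likelihood ratios simply add. The function name and interface below are illustrative, not the authors'.

```python
import math

def fuse_scores(llr_global, llr_local, prior_same=0.5):
    """Naive-Bayes fusion of two match scores (illustrative sketch).

    llr_global, llr_local: per-feature log-likelihood ratios,
    log p(score | same person) - log p(score | different persons).
    Under a conditional-independence assumption the ratios add,
    and the posterior probability of "same person" is a sigmoid
    of the total log-odds.
    """
    log_prior_odds = math.log(prior_same / (1.0 - prior_same))
    log_odds = log_prior_odds + llr_global + llr_local
    return 1.0 / (1.0 + math.exp(-log_odds))  # P(same | both scores)

# Both cues support a match -> posterior well above the 0.5 prior.
posterior = fuse_scores(llr_global=2.0, llr_local=1.5)
```

In practice the per-score likelihoods would be estimated from the offline training pairs; the sketch only shows how the two evidence sources combine.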
In the online stage, given several query images of a person,
a “Person-specific Face ReTriever” (PFRT) is obtained by
selecting the most discriminative features for this person and
optimizing their parameters. In this process, statistical
learning algorithms are not suitable because usually only a
small number of query images are available, leaving insuf-
ficient data for either model representation or parameter
estimation. Therefore, sample-based methods are used here.
We propose a nonparametric margin maximization (NMM)
criterion to measure each feature’s person-specific effec-
tiveness. The features that are especially effective for the
given person are then selected with the Fast Correlation-
Based Filter (FCBF) [20] method.
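The NMM criterion itself is defined later in the paper; as a rough sketch of the idea, a nonparametric (sample-based) margin for a single scalar feature can be computed RELIEF-style: for each query sample of the target person, take the distance to its nearest sample of another identity minus the distance to its nearest sample of the same identity. All names below are hypothetical.

```python
def nonparametric_margin(values, labels, target):
    """Sample-based margin of one 1-D feature for one identity.

    For each sample of `target`, margin = distance to the nearest
    "miss" (a sample of another identity) minus distance to the
    nearest "hit" (another sample of the target). A larger average
    margin suggests the feature separates this person from others.
    """
    margins = []
    for i, (x, y) in enumerate(zip(values, labels)):
        if y != target:
            continue
        hits = [abs(x - v) for j, (v, l) in enumerate(zip(values, labels))
                if j != i and l == target]
        misses = [abs(x - v) for v, l in zip(values, labels) if l != target]
        if hits and misses:
            margins.append(min(misses) - min(hits))
    return sum(margins) / len(margins) if margins else 0.0

# A feature that cleanly separates 'a' from 'b' gets a positive margin;
# one that mixes the identities gets a negative margin.
separable = nonparametric_margin([0.1, 0.2, 0.9, 1.0], ['a', 'a', 'b', 'b'], 'a')
mixed = nonparametric_margin([0.1, 0.9, 0.2, 1.0], ['a', 'a', 'b', 'b'], 'a')
```

Ranking features by such a margin needs no density model or parameter fitting, which is why a sample-based criterion suits the few-query-image setting.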
978-1-4244-2154-1/08/$25.00 ©2008 IEEE