958 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 40, NO. 3, JUNE 2010
Regularized Locality Preserving Projections and Its
Extensions for Face Recognition
Jiwen Lu and Yap-Peng Tan, Senior Member, IEEE
Abstract—We propose in this paper a parametric regularized locality
preserving projections (LPP) method for face recognition. Our objective
is to regulate the LPP space in a parametric manner and extract useful
discriminant information from the whole feature space rather than a re-
duced projection subspace of principal component analysis. This results in
better locality preserving power and higher recognition accuracy than the
original LPP method. Moreover, the proposed regularization method can
easily be extended to other manifold learning algorithms and effectively
addresses the small sample size problem. Experimental results on two widely
used face databases demonstrate the efficacy of the proposed method.
Index Terms—Face recognition, locality preserving projections (LPP),
manifold learning, regularization, small sample size (SSS) problem.
I. INTRODUCTION
During the past two decades, appearance-based face recognition
has been extensively studied, and many algorithms have been pro-
posed. The most representative algorithms include principal compo-
nent analysis (PCA) [1] and linear discriminant analysis (LDA) [2].
While these two algorithms have attained reasonably good perfor-
mance in face recognition, they may fail to discover the underlying
nonlinear manifold structure as they seek only a compact Euclidean
subspace for efficient face representation and recognition.
Recently, a number of manifold learning algorithms have been pro-
posed to discover the geometric property of high-dimensional feature
spaces, and they have been successfully applied to face recognition.
The most representative such algorithm is locality preserving projec-
tions (LPP) [3]. As LPP is originally unsupervised, more recent work
has included supervised information into the formulation and derived
many discriminant-based LPP algorithms, such as discriminant LPP
(DLPP) [4], orthogonal DLPP (ODLPP) [5], and uncorrelated DLPP
(UDLPP) [6], to enhance the recognition performance. Despite some
differences in their objective functions, these discriminant extensions
adopt a similar formulation and lead to the following generalized
eigenvalue equation:
S_1 w = \lambda S_2 w    (1)

where S_1 and S_2 are two matrices to be minimized and maximized,
respectively. Table I shows these matrices for LPP and its three
discriminant extensions. (The definitions of the matrices L, D, H, W, and
G can be found in the respective papers [4]–[6].)
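As an illustration of how equations of the form (1) are typically solved in practice, the sketch below applies SciPy's generalized symmetric eigensolver to two toy positive-definite matrices standing in for S_1 and S_2. The matrix construction and dimensions are illustrative assumptions, not the authors' implementation; in the actual algorithms, S_1 and S_2 come from Table I.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
d = 5
# Toy symmetric positive-definite stand-ins for S1 (to be minimized)
# and S2 (to be maximized); the real matrices are built from the data.
A = rng.standard_normal((d, d))
S1 = A @ A.T + np.eye(d)
B = rng.standard_normal((d, d))
S2 = B @ B.T + np.eye(d)

# Generalized eigenproblem S1 w = lambda S2 w; eigh returns eigenvalues
# in ascending order, so the leading eigenvectors (smallest lambda)
# minimize w^T S1 w relative to w^T S2 w.
eigvals, eigvecs = eigh(S1, S2)
W = eigvecs[:, :2]  # keep the two best projection directions
```

Each column of `W` satisfies (1) for its corresponding eigenvalue, which can be checked directly by substitution.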
In fact, S_2 is usually singular in face recognition, which stems from
the fact that the number of training images is usually much smaller than
the dimension of each image, a deficiency generally known as the
small sample size (SSS) problem.
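The singularity is a simple rank argument, which the following minimal sketch makes concrete (the dimensions are arbitrary choices for illustration): any scatter-type matrix assembled from n samples of dimension d has rank at most n, and is therefore singular whenever n < d.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 100, 10                   # feature dimension far exceeds sample count
X = rng.standard_normal((d, n))  # columns play the role of vectorized images

S = X @ X.T                      # d x d scatter-type matrix
rank = np.linalg.matrix_rank(S)  # rank(S) <= n, so S is singular when n < d
```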
One possible way to address the SSS problem is by performing PCA
projection to reduce the dimension of the feature space and make S_2
Manuscript received January 16, 2009; revised May 15, 2009 and August 12,
2009. First published November 10, 2009; current version published June 16,
2010. This work was supported in part by the Nanyang Technological
University Research Grant RGM20/06.
The authors are with the School of Electrical and Electronic Engineering,
Nanyang Technological University, Singapore 639798 (e-mail: lujiwen@
pmail.ntu.edu.sg; eyptan@ntu.edu.sg).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TSMCB.2009.2032926
TABLE I
S_1 AND S_2 FOR LPP AND ITS THREE DISCRIMINANT EXTENSIONS
nonsingular, such as the Laplacianface method in [3] and the DLPP
method in [4]. However, there are two shortcomings in this solution.
1) The recognition accuracy depends very much on the dimension
of the reduced PCA subspace, and how to determine the optimal
dimension of this subspace remains largely an open problem.
2) Some useful information for LPP may be compromised in the
intermediate PCA stage. To illustrate this point, we provide the
following justification.
In [3], the locality preserving power of a projection w was defined as

f(w) = \frac{w^T X L X^T w}{w^T X D X^T w}.    (2)
In general, the smaller the value of f(w), the better the locality
preserving power of the projection w. As the LPP in [3] can be
obtained by imposing the normalization constraint w^T X D X^T w = 1,
(2) reduces to

f(w) = w^T X L X^T w.    (3)

Since X L X^T is usually singular [3] in face recognition, its null space
contains projections w with f(w) = 0, i.e., with the best possible locality
preserving power; such useful discriminant information for LPP could be
lost in the intermediate PCA projection. Hence, preprocessing
high-dimensional features using PCA is arguably not an optimal solution
to the SSS problem in LPP.
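The locality preserving power in (2) can be evaluated directly for any candidate projection. The sketch below builds a small deterministic ring-graph Laplacian as an arbitrary stand-in for the affinity graph used in LPP and computes f(w) for a random w; the graph, sizes, and function name are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 4, 6
X = rng.standard_normal((d, n))   # columns are samples

# Deterministic ring graph as a stand-in for the LPP affinity matrix W
Wg = np.zeros((n, n))
for i in range(n):
    Wg[i, (i + 1) % n] = Wg[(i + 1) % n, i] = 1.0
Dg = np.diag(Wg.sum(axis=1))      # degree matrix D
Lg = Dg - Wg                      # graph Laplacian L = D - W

def locality_power(w):
    """f(w) = (w^T X L X^T w) / (w^T X D X^T w); smaller is better."""
    num = w @ (X @ Lg @ X.T) @ w
    den = w @ (X @ Dg @ X.T) @ w
    return num / den

w = rng.standard_normal(d)
fw = locality_power(w)            # nonnegative, since L and D are PSD
```

Note that f(w) is invariant to rescaling of w, which is why [3] can impose the normalization w^T X D X^T w = 1 without loss of generality.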
Nevertheless, the SSS problem has been extensively addressed in
classical LDA, and many solutions have been proposed. For example,
Friedman [7] and Lu et al. [10] applied a small perturbation to
the within-class scatter; Dai and Yuen [8] proposed a regularized
discriminant analysis (RDA) algorithm to regulate the whole space of
the within-class scatter matrix; Jiang et al. [9] put forward an eigenfea-
ture regularization and extraction (ERE) approach to decompose the
within-class scatter space into three different subspaces and regulate
the eigenspectrum using a piecewise function. All these algorithms
can enhance the recognition performance of LDA to a certain extent.
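The common thread in these LDA remedies is to make a singular scatter matrix invertible by modifying its eigenspectrum. A minimal sketch of the simplest variant, a small ridge-style perturbation in the spirit of [7] and [10], is shown below; the perturbation strength eps is an arbitrary tuning choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 50, 8
X = rng.standard_normal((d, n))
S2 = X @ X.T                       # rank <= n < d, hence singular

eps = 1e-3                         # regularization strength (a tuning choice)
S2_reg = S2 + eps * np.eye(d)      # full-rank, positive-definite perturbation

# The regularized matrix can now be safely inverted or factorized.
chol = np.linalg.cholesky(S2_reg)  # succeeds only for positive-definite input
```

Adding eps to every eigenvalue of S2 lifts the zero eigenvalues off zero while leaving the dominant eigenstructure essentially unchanged, which is what makes the subsequent generalized eigenvalue problem well posed.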
There is, however, less work on addressing the SSS problem in LPP
and its variants. While many manifold learning algorithms have been
developed, not much attention has been devoted to the corresponding
SSS problem.
Some existing algorithms apply PCA to avoid the SSS problem; however, as
discussed above, PCA is not an optimal solution. Other algorithms ignore
the SSS problem altogether [5], [6] and may not work as effectively when
the sample size is small.
We propose here an effective parametric regularized LPP (PRLPP)
algorithm to overcome or mitigate the SSS problem. Our contribution
consists of the following:
1) attaining an efficient subspace which has more locality preserv-
ing power than LPP;
2) applying a parametric regularization approach instead of PCA
to address the SSS problem for LPP and extending this regular-
ization technique to other manifold learning algorithms such as
DLPP, ODLPP, and UDLPP.
The rest of this paper is organized as follows. Section II formally
formulates the proposed PRLPP algorithm and its extensions.
1083-4419/$26.00 © 2009 IEEE