Illumination and Expression Invariant Face Recognition with One Sample Image

Shaokang Chen, Brian C. Lovell
Intelligent Real-Time Imaging and Sensing (IRIS) Group
The School of Information Technology and Electrical Engineering
The University of Queensland, Australia QLD 4072
shaokang@itee.uq.edu.au, lovell@itee.uq.edu.au

Abstract

Most face recognition approaches assume either constant lighting conditions or standard facial expressions, and thus cannot deal with both kinds of variation simultaneously. This problem becomes more serious in applications where only one sample image per class is available. In this paper, we present a linear pattern classification algorithm, Adaptive Principal Component Analysis (APCA), which first applies PCA to construct a subspace for image representation, and then warps the subspace according to the within-class and between-class covariance of the samples to improve class separability. This technique performs well under variations in lighting conditions. To gain insensitivity to expression changes, we rotate the subspace before warping in order to enhance the representativeness of the features. The method is evaluated on the Asian Face Image Database. Experiments show that APCA outperforms PCA and other methods in terms of accuracy, robustness and generalization ability.

1. Introduction

Within the last several years, research on face recognition has focused on diminishing the impact of changes in lighting conditions, facial expression and pose. Two main approaches have been proposed for illumination-invariant face recognition. One is to represent images with features that are less sensitive to illumination change, such as the edge maps of an image. However, edge features generated from shadows are themselves tied to the illumination, and may still have a significant impact on recognition.
The other main approach assumes that the surface of a human face is Lambertian and convex, and tries to construct a low-dimensional linear subspace for face images taken under different lighting conditions [3]. However, it is hard for these systems to deal with cast shadows. Furthermore, they require several images of the same face taken under specific lighting directions to construct a model of a given face. In many cases, such as recognizing faces in historic photographs, this requirement is hard to meet.

As for expression-invariant face recognition, one approach is to morph images into the same shape as the one used for training. But it is not guaranteed that all images can be morphed correctly: an image with closed eyes, for example, cannot be morphed to a neutral image because of the lack of texture inside the eyes. Another approach is to use optical flow. However, it is difficult to learn the local motions within feature space that determine the expression changes of each face, since different persons express a given expression in different ways. Martinez [6] proposed a weighting method that independently weights those local areas that are less sensitive to expression changes. But features that are insensitive to expression changes may be sensitive to illumination changes, as noted in [5].

Previous methods dealing with illumination or facial expression variations cannot compensate for both simultaneously. We present a new method, Adaptive Principal Component Analysis (APCA), which warps the face space by whitening and filtering the eigen features according to the second-order statistics of the samples. We further improve APCA by rotating the space to enhance the representativeness of the features. Experiments show that our method outperforms PCA [1] and the Fisher Linear Discriminant (FLD) [2] on face recognition under both illumination and expression changes.

2. Adaptive Principal Component Analysis

We first apply Principal Component Analysis (PCA) [1] for feature extraction because of its good generalization capacity. We choose to use raw image data as samples for PCA, since preprocessing such as edge maps might introduce features that are highly sensitive to certain facial variations. Consequently, every face image can be projected into a subspace of reduced dimensionality to form an m-dimensional feature vector s_{j,k}, with k = 1, 2, ..., K_j denoting the k-th sample of class S_j.

Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04), 1051-4651/04 $20.00 IEEE
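The PCA projection above, followed by a per-feature rescaling of the kind used in the whitening step of the subspace warping, can be sketched as follows. This is a minimal illustration of the general eigenface machinery, not the authors' implementation; the function names, the toy data, and the choice of SVD for the eigen-decomposition are all illustrative assumptions.

```python
# Minimal sketch: PCA feature extraction ("eigenfaces") plus a simple
# whitening-style rescaling of the eigen features. Illustrative only;
# not the APCA warping described in the paper.
import numpy as np

def pca_subspace(images, m):
    """Fit an m-dimensional PCA subspace to row-vectorised images."""
    mean = images.mean(axis=0)
    centred = images - mean
    # SVD of the centred data yields the principal axes (eigenfaces).
    _, singular_values, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:m]                                  # shape (m, d)
    variances = (singular_values[:m] ** 2) / (len(images) - 1)
    return mean, basis, variances

def project(image, mean, basis):
    """Project one vectorised image into the subspace: an m-dim feature vector."""
    return basis @ (image - mean)

def whiten(features, variances, eps=1e-8):
    """Divide each eigen feature by its standard deviation."""
    return features / np.sqrt(variances + eps)

# Toy usage with random data standing in for aligned, vectorised face images.
rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))                   # 20 samples, 64 "pixels"
mean, basis, var = pca_subspace(faces, m=5)
s = project(faces[0], mean, basis)                  # feature vector s_{j,k}
s_w = whiten(s, var)                                # whitened eigen features
```

In this sketch the feature vector `s` plays the role of s_{j,k}; the APCA method additionally filters and rotates the subspace using within-class and between-class covariances, which is not shown here.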