Combining Classifiers in Rotated Face Space

Shaokang Chen a,b, Ting Shan a,b and Brian C. Lovell a,b
a NICTA, 300 Adelaide Street, Brisbane QLD 4000, Australia
b ITEE, University of Queensland, Brisbane QLD 4072, Australia
{Shaokang.Chen, Ting.Shan, Brian.Lovell}@nicta.com.au

Abstract

Face recognition is a very complex classification problem due to nuisance variations under different conditions. Normally no single classifier can discriminate patterns well when unpredictable variations and a huge number of classes are involved. Combining multiple classifiers can improve discriminability over the best single classifier. In this paper, we present a way to combine classifiers for the face recognition problem based on APCA classifiers. The proposed combinator generates various classifiers by rotating various face spaces and fuses them by applying a weighted distance measure. The combined classifier is tested on the Asian Face Database with 856 images. Experiments show a 30% reduction in the classification error rate for our combined classifier and illustrate that combining classifiers from different face spaces may perform better than those based on a single face space.

1 Introduction

Face recognition is a very challenging task because differences between images of the same face due to nuisance variations, such as lighting conditions, viewpoint, pose, expression, and age, are often greater than those between different faces. This problem has attracted considerable attention from psychophysicists, neuroscientists and engineers who wish to deal with face recognition under different conditions. Various techniques have been applied to automatic face recognition, such as Principal Component Analysis (PCA) [19], Linear Discriminant Analysis (LDA) [2, 14], Hidden Markov Models (HMMs) [16], Neural Networks [11, 5] and Support Vector Machines (SVMs) [6].
Nevertheless, currently most systems work well only under constrained conditions where lighting, pose, and camera parameters are strictly controlled. An ideal face recognition system should recognize new images of a known face and be insensitive to nuisance variations. However, no single classifier can discriminate patterns well enough, especially for complex pattern classification problems like face recognition, where the number of classes is huge and the variation within the classes is large [10].

One way to improve system performance is to combine multiple classifiers. The motivation for combining classifiers comes from the observation that the patterns misclassified by different classifiers may differ, even if one of the classifiers achieves the overall best performance. Each classifier defines a mapping from a feature space to outputs (generally class labels). Because there exist differences between classifiers, the mappings are often disparate, which may result in different performance and dissimilar classification errors for each classifier. Therefore, different classifiers may contain information on pattern classification that is complementary to that of other classifiers [8, 10]. Proper combination of these classifiers may resolve the dilemma of bias and variance and enhance performance [7].

We proposed a method based on Adaptive Principal Component Analysis (APCA) [3, 13] for face recognition, which is robust to face image variations in illumination and expression. We then extended it to pose-invariant face recognition [17] in 2006. In this paper, we introduce a new method to combine APCA classifiers to build a stronger classifier and further improve recognition accuracy. In Section 2, we briefly explain the APCA method. Then we discuss in detail a method for generating different complementary base classifiers based on APCA and design a framework to fuse the classifiers in Section 3. Section 4 is devoted to empirical evaluation.
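The weighted-distance fusion of multiple classifiers discussed above can be sketched as follows. This is a minimal illustration only: the function name, the per-classifier normalisation step, and the example weights are our own assumptions, not the exact formulation used in the paper.

```python
import numpy as np

def fuse_distances(distances, weights):
    """Combine per-classifier distance scores by a weighted sum.

    distances: (n_classifiers, n_classes) array where distances[i, c]
    is the distance from the probe image to class c under classifier i.
    weights: (n_classifiers,) array of non-negative classifier weights.
    Returns the index of the predicted class (smallest fused distance).
    """
    distances = np.asarray(distances, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Normalise each classifier's distances so their scales are
    # comparable before fusing (an assumption; the paper's weighted
    # distance measure may differ).
    norm = distances / distances.sum(axis=1, keepdims=True)
    fused = weights @ norm  # weighted sum over classifiers
    return int(np.argmin(fused))

# Three hypothetical classifiers scoring four classes.
d = [[0.9, 0.2, 0.7, 0.8],
     [0.6, 0.3, 0.9, 0.5],
     [0.8, 0.1, 0.6, 0.7]]
w = [0.5, 0.2, 0.3]
print(fuse_distances(d, w))  # → 1
```

All three hypothetical classifiers rank class 1 closest here, so the fused decision agrees with each of them; the fusion matters precisely when the base classifiers disagree, as the paper argues.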
Finally, we draw conclusions and indicate future work in Section 5.

2 Adaptive Principal Component Analysis

Adaptive Principal Component Analysis [3, 13] is a linear pattern classification algorithm that inherits merits from both PCA and LDA by warping the face subspace according to the within-class and between-class covariance of samples. We first apply PCA on face images to extract eigenfaces. Consequently, every face image is projected into a