A Low Dimensionality Expression Robust Rejector for 3D Face Recognition

Jiangning Gao, Mehryar Emambakhsh, Adrian N. Evans
Department of Electronic and Electrical Engineering, University of Bath, Bath, UK
{J.Gao, M.Emambakhsh, A.N.Evans}@bath.ac.uk

Abstract—In the past decade, expression variations have been one of the most challenging sources of variability in 3D face recognition, especially for scenarios where there are a large number of face samples to discriminate between. In this paper, an expression robust rejector is proposed that first robustly locates landmarks on the relatively stable structure of the nose and its environs, termed the cheek/nose region. Then, by defining curves connecting the landmarks, a small set of features (4 curves with only 15 points each) on the cheek/nose surface is selected using the Bosphorus database. The resulting rejector, which can quickly eliminate a large number of candidates at an early stage, is further evaluated on the FRGC database for both the identification and verification scenarios. The classification performance using only 60 points from 4 curves shows the effectiveness of this efficient expression robust rejector.

Keywords—biometrics; face recognition; pattern rejection; feature selection

I. INTRODUCTION

Facial expression is an intrinsic source of variability in human faces that deforms the face surface to a greater or lesser extent, according to the type and degree of expression. Variations in expression present a significant challenge to 3D face recognition algorithms and, consequently, the development of recognition techniques that are robust to these variations has been a challenging research topic over the past decade. Some relatively stable structures and patches on the face can be used to help design expression invariant face recognition algorithms. A good example is the nose [1-3] which, compared to the forehead, mouth and eyes, is more consistent over different expressions.
The nose is also very difficult to deliberately occlude with hair, hands or scarves [3]. In addition, using convexity it is relatively straightforward to detect and segment the central region of the 3D face that contains the nose [2]. Therefore, the extraction of salient features from the nose region offers many advantages for expression robust 3D face recognition.

Recent 3D face recognition work using the nose region has drawn 28 curves joining 16 robustly detected landmarks on the nose [2]. However, this work only considered landmarks and curves directly on the nasal surface; the regions adjoining the nose, between the cheek bones and the nasal bridge, were not included. These regions are also relatively stable and less affected by occlusions compared to other patches on the face surface, and they contain discriminative features additional to those on the nose. Chang et al. [1] matched multiple overlapping 3D regions containing the nose and its surroundings and obtained good recognition performance, demonstrating that this region has much potential for providing expression robust biometric features. Wang et al. [4] also explored the nose and its surrounding region to build a more efficient classifier. Therefore, extending the nasal region to include adjacent areas can achieve high classification performance for expression robust 3D face recognition. In this paper, the nose and its environs are defined as the "cheek/nose" region, from which features are selected to form the feature space.

In addition to overcoming the problems caused by variations in expression, another challenge for 3D face recognition is the ability to discriminate between a large number of classes. Pattern rejection, proposed by Baker and Nayar [5], is an efficient and effective way to improve classification performance, particularly in this scenario. Mian et al.
[6] used a 3D Spherical Face Representation (SFR) combined with a Scale-Invariant Feature Transform (SIFT) descriptor to form a rejector whose combined classification performance is very high. However, it requires both 2D and 3D features, and if only the 3D SFR features are considered the rejector's performance is much reduced.

In this paper, a low dimensionality expression robust rejector for 3D face recognition is proposed. A block diagram of the rejector is shown in Fig. 1. After pre-processing, 24 landmarks are localized on the cheek/nose region and a set of 113 curves joining the landmarks is defined. Feature selection using the Bosphorus database identifies just 4 curves, and further experiments determine that each curve requires only 15 points, producing a rejector that quickly and effectively eliminates a large number of ineligible candidate faces from the gallery.

II. 3D DATABASES AND PREPROCESSING

A. The Bosphorus and FRGC 3D Face Databases

Two databases are used in this work. The Bosphorus database comprises 105 subjects with a total of 4666 captures and contains a rich set of expressions, including different action units of the Facial Action Coding System and the six universal emotional expressions [7]. This database is used for feature selection.
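The rejection stage outlined above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact method: it assumes landmarks are given as (row, col) positions on a registered depth map, approximates the curves by straight-line sampling between landmark pairs (the paper's curves follow the cheek/nose surface), and uses Euclidean distance with a hypothetical keep_frac parameter to decide how much of the gallery survives.

```python
import numpy as np

def curve_features(depth_map, landmark_pairs, n_points=15):
    """Sample depth values along curves joining pairs of (row, col)
    landmarks and concatenate them into one feature vector.

    With 4 curves of 15 points each this yields a 60-D feature,
    matching the dimensionality used by the rejector. Straight-line
    sampling over a depth map is an illustrative stand-in for the
    paper's surface curves.
    """
    samples = []
    for (r0, c0), (r1, c1) in landmark_pairs:
        rows = np.linspace(r0, r1, n_points).round().astype(int)
        cols = np.linspace(c0, c1, n_points).round().astype(int)
        samples.append(depth_map[rows, cols])
    return np.concatenate(samples)

def reject_candidates(probe_feat, gallery_feats, keep_frac=0.1):
    """Keep only the fraction of gallery identities whose low-dimensional
    features lie closest to the probe; the rest are rejected before any
    more expensive full-face matching is attempted.
    """
    dists = np.linalg.norm(gallery_feats - probe_feat, axis=1)
    n_keep = max(1, int(round(keep_frac * len(gallery_feats))))
    return np.argsort(dists)[:n_keep]  # indices of surviving candidates
```

Because the rejector only compares 60-D vectors, its cost per gallery entry is small, which is what allows a large fraction of candidates to be eliminated cheaply at an early stage.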