Face Recognition with 3D Model-Based Synthesis

Xiaoguang Lu 1, Rein-Lien Hsu 1, Anil K. Jain 1, Behrooz Kamgar-Parsi 2, and Behzad Kamgar-Parsi 2

1 Michigan State University, East Lansing, MI 48824. {lvxiaogu, hsureinl, jain}@cse.msu.edu
2 Office of Naval Research, 800 N. Quincy St., Arlington, VA 22217.

Abstract. Current appearance-based face recognition systems have difficulty recognizing faces with appearance variations when only a small number of training images is available. We present a scheme based on the analysis-by-synthesis framework. A 3D generic face model is aligned onto a given frontal face image, and a number of synthetic face images with appearance variations are generated from the aligned 3D face model. These synthesized images are used to construct an affine subspace for each subject. Training and test images for each subject are represented in the same way in such a subspace. Face recognition is achieved by minimizing the distance between the subspace of a test subject and that of each subject in the database. Only a single face image of each subject is available for training in our experiments. Preliminary experimental results are promising.

1 Introduction

After decades of research [1], face recognition remains a very challenging problem. Current systems achieve good performance when the test image is taken under conditions similar to those of the training images. However, in real applications, a face recognition system may encounter difficulties with intra-subject facial variations due to varying lighting conditions, head poses, and facial expressions. Most face recognition methods are appearance-based [2-6] and require that several training samples be available under different conditions for each subject. In real applications, however, generally only a small number of training images is available per subject, which cannot capture all the facial variations.
A human face is a 3D elastic surface, so the 2D image projection of a face is very sensitive to the changes in head pose, illumination, and facial expression. Utilizing 3D facial information is a promising way to deal with these variations [5–12]. Adopting Waters’ animation model [9] as our generic face model, we propose a face recognition system that synthesizes various facial variations to augment the given training set which contains only a single frontal face image for each subject. Both the training and test images are subjected to the model adaptation and synthesis in the same way. We use the synthetic variations to