Partial Match of 3D Faces using Facial Curves between SIFT Keypoints

Stefano Berretti, Alberto Del Bimbo and Pietro Pala
Dipartimento di Sistemi e Informatica, University of Firenze, Italy

Abstract

In this work, we propose and experimentally evaluate an original solution to 3D face recognition that supports partial matching of facial scans, as occurs in the case of missing parts and occlusions. In the proposed approach, distinguishing traits of the face are captured by first extracting SIFT keypoints on the face scan and then measuring how the face changes along facial curves defined between pairs of keypoints. Facial curves are also associated with a measure of salience, so as to distinguish curves that model characterizing traits of some subjects from curves that are frequently observed in the faces of many different subjects. The recognition accuracy of the approach has been evaluated on the Face Recognition Grand Challenge dataset.

Categories and Subject Descriptors (according to ACM CCS): I.3.8 [Computer Graphics]: Applications; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Curve, surface, solid, and object representations

1. Introduction

Automatic recognition of human faces is a challenging computer vision task, especially in the presence of illumination variations or when parts of the face are missing. Recently, the availability of 3D facial data acquired with scanner devices has increased the interest in 3D face recognition solutions, which are expected to be less sensitive to pose and illumination changes. Based on this, many 3D face recognition approaches have been proposed and evaluated in recent years [BCF06], [BDP10a].
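To make the idea concrete, the following is a minimal, illustrative sketch of measuring how the face changes along a curve between two keypoints. It is not the paper's actual method: it works on a 2D depth image rather than a 3D mesh, uses a straight-line depth profile as a simplified stand-in for a facial curve, and the function names, the Euclidean curve distance, and the keypoint coordinates are all hypothetical choices made here for illustration.

```python
import numpy as np

def sample_curve(depth, p1, p2, n_samples=32):
    """Sample the depth profile along the straight segment between two
    keypoints p1, p2 (row, col) -- a simplified stand-in for a facial
    curve on the 3D surface (the paper uses curves on the scan itself)."""
    rows = np.linspace(p1[0], p2[0], n_samples)
    cols = np.linspace(p1[1], p2[1], n_samples)
    # nearest-neighbour lookup of depth values along the segment
    return depth[rows.round().astype(int), cols.round().astype(int)]

def curve_distance(c1, c2):
    """Compare two curve descriptors; here a plain Euclidean distance
    after removing the mean depth (a crude translation invariance)."""
    return np.linalg.norm((c1 - c1.mean()) - (c2 - c2.mean()))

# toy depth image: a smooth bump standing in for a facial surface
y, x = np.mgrid[0:100, 0:100]
depth = np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 800.0)

kp_a, kp_b = (20, 30), (80, 70)   # two hypothetical SIFT keypoint locations
curve = sample_curve(depth, kp_a, kp_b)
print(curve.shape)  # (32,)
```

In a full pipeline, the keypoint pairs would come from a SIFT detector run on the range image, and each inter-keypoint curve would carry its own descriptor and salience weight.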
In summary, these approaches can be grouped into two broad categories: global (or holistic) approaches, which perform face matching based on representations extracted from the whole face; and local (or region-based) approaches, which partition the face surface into regions and extract and match appropriate descriptors for each of them. Many of these approaches have been designed to support face recognition also in the presence of expression variations, reporting very high accuracy on benchmark databases like the Face Recognition Grand Challenge (FRGC version 2.0 dataset) [PFS 05]. However, a problem that can substantially affect recognition accuracy, and that has not been extensively addressed, is the effect induced by missing parts and partial occlusions of the face; more generally, the problem of supporting the recognition of a subject when only a part of her/his facial scan is available. Missing parts can be caused by self-occlusions of the face due to pose variations, whereas face occlusions are likely to occur in real applications due to hair, glasses, scarves or caps. The effects of face occlusions were first studied in 2D face recognition applications. In 3D, only a few solutions have explicitly considered the problems posed by missing parts and occlusions in the design and evaluation of recognition methods. In general, global approaches cannot effectively manage these conditions, whereas local approaches have the potential to address partial face matching. In [PPT 09], an automatic face landmark detector is used to identify the pose of the facial scan, so as to mark regions of missing data and to roughly register the facial scan with an Annotated Face Model (AFM). The AFM is fitted using a deformable model framework that exploits facial symmetry where data are missing. Wavelet coefficients extracted from a geometry image derived from the fitted AFM are used for the match.
Experiments were performed using the FRGC v2.0 gallery scans, with side scans at 45 and 60 degree rotation angles as probes. In [DBDS10], the facial surface is represented as a collection of radial curves originating from the nose tip, and face comparison is obtained by elastic matching of the curves. A quality control step permits the exclusion of corrupted radial curves from the match, thus enabling recognition also in the case of missing data. Results of partial matching are given for the 61 left and right side scans of the Gavab database. Many local approaches are limited by the need to identify facial landmarks used to define the interesting parts in matching faces. Methods that use keypoints of the face promise to overcome some of these limitations. In particular, a

© The Eurographics Association 2011.
Eurographics Workshop on 3D Object Retrieval (2011)
H. Laga, T. Schreck, A. Ferreira, A. Godil, I. Pratikakis, and R. Veltkamp (Editors)
DOI: 10.2312/3DOR/3DOR11/117-120
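The radial-curve representation with quality control described above can be sketched in a few lines. This is only an illustration of the general idea, not the method of [DBDS10]: it samples straight rays on a depth image instead of curves on the 3D surface, uses NaN as a stand-in convention for holes/occlusions, and the function names, the angle sampling, and the 10% missing-data threshold are assumptions made here.

```python
import numpy as np

def radial_curve(depth, nose_tip, angle, length=40, n_samples=40):
    """Sample one radial curve of depth values emanating from the nose
    tip at the given angle (radians) on a depth image; out-of-image
    samples are left as NaN."""
    t = np.linspace(0, length, n_samples)
    rows = (nose_tip[0] + t * np.sin(angle)).round().astype(int)
    cols = (nose_tip[1] + t * np.cos(angle)).round().astype(int)
    h, w = depth.shape
    inside = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    curve = np.full(n_samples, np.nan)
    curve[inside] = depth[rows[inside], cols[inside]]
    return curve

def usable(curve, max_missing=0.1):
    """Quality check: discard curves with too much missing data
    (NaN marks holes or occluded regions in the scan)."""
    return np.isnan(curve).mean() <= max_missing

# toy depth image whose right side is missing (e.g. occluded)
depth = np.ones((100, 100))
depth[:, 60:] = np.nan

angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
curves = [radial_curve(depth, (50, 50), a) for a in angles]
kept = [c for c in curves if usable(c)]   # curves crossing the hole are dropped
```

Only the curves that stay on valid data survive the quality check, so matching can proceed on the remaining curves even though part of the scan is missing, which is the key property that enables partial matching.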