Automatic feature detection for overlapping face images on their 3D range models

Raffaella Lanzarotti, Paola Campadelli
Dipartimento di Scienze dell'Informazione, Università degli Studi di Milano
Via Comelico, 39/41, 20135 Milano, Italy
{lanzarotti, campadelli}@dsi.unimi.it

N. Alberto Borghese
INB, Lab. Human Motion Analysis and Virtual Reality, CNR, c/o LITA
Via Cervi, 93, 20090 Segrate (Mi), Italy
borghese@inb.mi.cnr.it

Abstract

We describe an algorithm for the automatic detection of features in 2D color images of human faces. The algorithm proceeds by successive refinements. First, it identifies the sub-images containing each feature (eyes, nose and lips). It then processes each feature separately with a blend of techniques that use both color and shape information. The method requires no manual setting or operator intervention.

1 Introduction

Human faces are characterized by both 3D geometrical shape and color appearance. Therefore, to reproduce a 3D digital model of a face with high realism, both the 3D shape and the color field have to be acquired. 3D scanners can be used to acquire and register color and 3D shape information [3]. Apart from their high cost, 3D scanners require that the real human face be available at scanning time. There are many situations where this is not the case. For instance, when reconstructing 3D clones of celebrities [2], the 3D shape can be acquired from wax models, but the color field has to be acquired from footage or old color images which are not registered with the 3D shape. Another important case is the reproduction of human faces on a daily basis for web applications. In this case, it would be desirable to have a 3D shape model onto which a color image could be sampled and applied frequently and automatically. In general, keeping the number of scanning acquisitions to a minimum is preferable, since they are time consuming and the setup is expensive.
On the contrary, since image acquisition can be carried out quickly and with low-cost digital photo cameras, the acquisition of color images and their application to the 3D model [9][8] can be done often. In this case the 3D model and the color field have to be registered. To accomplish this task, the projective transformation realized when acquiring the 2D image has to be derived [5][13]. A minimum of 5 points is required to compute the 9 parameters of the transformation [14].

Work supported by the project "Disegno e analisi di algoritmi" (ex MURST 2000).

By analyzing the local surface curvature, feature points on the 3D surface can be identified. As the eyes, lips and nose all present marked curvature, they can be identified with proper spatial filters [7]. The extraction of the same features from 2D face images is more difficult, so "tricks" have been widely used, e.g. carefully controlling the internal parameters of the camera, or using lipstick for the lips and markers for the other features [7]. In this framework we propose a more robust method, which works on natural face images. It is based on a novel integration of standard image processing techniques which use both color and shape information.

2 Methodology

The method we propose works on images in which the face occupies the foreground. We thus ignore the problem of localizing faces in more complex scenes [10][6]. The images are acquired against a homogeneous, light-colored background, and only small rotations of the head are accepted (quasi-vertical, frontal position). These conditions are common to most face analysis systems (e.g. [15], [17]). The method works through two hierarchical processing stages: the first identifies four sub-images, each tightly containing one of the features of interest (the two eyes, the nose and the lips); the second consists of three modules specialized in localizing reference points on the lips, the nose and the eyes with high accuracy.
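To give a concrete flavor of the first, color-based stage, the sketch below shows how a feature sub-image could be isolated from chrominance alone: lips are redder than the surrounding skin, so thresholding a per-pixel "redness" score yields a tight bounding box. This is an illustrative sketch only, not the paper's exact color model; the scoring function, the quantile threshold, and the synthetic test image are all our assumptions.

```python
import numpy as np

def lip_chromaticity_map(rgb):
    """Per-pixel 'redness' score: lips are redder than skin.

    rgb: HxWx3 float array in [0, 1]. This normalized red-minus-green
    score is an illustrative choice, not the authors' color model.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    denom = r + g + b + 1e-6          # avoid division by zero
    return (r - g) / denom            # high where red dominates green

def locate_feature(rgb, score_fn, quantile=0.99):
    """Bounding box (top, bottom, left, right) of the pixels whose
    score exceeds the given quantile -- a crude stand-in for the
    sub-image identification stage."""
    score = score_fn(rgb)
    mask = score >= np.quantile(score, quantile)
    rows, cols = np.nonzero(mask)
    return rows.min(), rows.max(), cols.min(), cols.max()

# Synthetic test image: skin-toned background with a redder "lip" blob.
img = np.zeros((120, 160, 3))
img[...] = (0.8, 0.6, 0.5)            # skin-like color everywhere
img[80:95, 60:100] = (0.8, 0.3, 0.3)  # lip-like reddish region
box = locate_feature(img, lip_chromaticity_map)
print(box)  # bounding box around the reddish region
```

In the actual method this coarse localization would be combined with shape information and refined by the feature-specific modules; the point here is only that a simple chrominance score already separates the lip region from homogeneous skin.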