MOVING FACIAL IMAGE TRANSFORMATIONS BASED ON STATIC 2D PROTOTYPES

Bernard Tiddeman and David Perrett
Perception Laboratory, Department of Psychology
University of St Andrews, St Andrews, Fife KY16 9JU, UK
{bpt,dp}@st-and.ac.uk
http://www.perceptionlab.com/

ABSTRACT

This paper describes a new method for creating visually realistic moving facial image sequences that retain an actor's personality (individuality, expression and characteristic movements) while altering the facial appearance along a specified facial dimension. We combine two existing technologies, facial feature tracking and facial image transformation, to create the sequences. Examples are given of transforming the apparent age, race and gender of a face. We also create 'virtual cartoons' by transforming image sequences into the style of famous artists. The results show that static 2D face models can be used to create realistic transformations of sequences that include changes in pose, expression and mouth shape.

Keywords: facial image transformation, facial feature tracking, image processing.

1. INTRODUCTION

The use of computer graphics for altering the apparent age, sex or race of static facial images is not only entertaining but has found application in psychology, medicine, forensics and education. The extension of these methods to moving facial images has the potential to enhance and expand these applications. In this paper we show that an existing facial transformation method based on static 2D prototype ('average') facial images, combined with a suitable face-tracking method, can produce excellent results. In addition, we extend the range of transformations to include various artistic styles, allowing the construction of 'virtual cartoons'.

2. PREVIOUS WORK

The synthesis and animation of facial images has long been of interest in computer graphics research.
Early face models used geometrical methods [Parke82] [Duffy88] [Thalm89], which have been extended to include physics-based models of facial tissues [Terzo93] [Koch96] [Lee00] and statistical models of normal face variation [DeCar98] [Blanz99] [Tidde99] [Vette97]. The animation of geometrical models can be performed by morphing between predefined expression components [Parke82] [Duffy88] [Thalm89] [Pighn98]. For physics-based face models, animation can be performed using numerical simulation of muscle actions [Water87] [Lee95].

Many face-tracking algorithms have been devised, including those based on optical flow constraints [Mase91] [DeCar00], active shape models (ASM) [Baumb96] [Edwar98] or energy-minimising point-tracking techniques [Lucas81] [Lien00].

The reconstruction of tracked facial feature movements by 'virtual actors' has application in video telecommunication because of its potential for low-bandwidth communication [Choi91] [Choi94]. This kind of technology has also been used to lip-synch computer graphic animations with an actor's voice and movements for film entertainment or virtual avatars [Berge85] [Willi90] [Bregl97] [Essa96] [Guent98] [Ezzat00]. The animated characters are either a direct clone of the original (in the case of low-bandwidth communication) or are designed by a 3D-computer artist or computer