Interactive Region-Based Linear 3D Face Models

J. Rafael Tena*, Disney Research Pittsburgh
Fernando De la Torre, The Robotics Institute, Carnegie Mellon University
Iain Matthews, Disney Research Pittsburgh

[Figure 1: Face posing using interactive region-based (b) and holistic (d) face models. The models drive the human character shown in (a). User-given constraints (black markers) create a wink with a smirk when issued to the region-based model (b and c). In contrast, the same constraints produce uncontrolled global deformations when the holistic model is used (d and e).]

Abstract

Linear models, particularly those based on principal component analysis (PCA), have been used successfully on a broad range of human face-related applications. Although PCA models achieve high compression, they have not been widely used for animation in a production environment because their bases lack a semantic interpretation. Their parameters are not an intuitive set for animators to work with. In this paper we present a linear face modelling approach that generalises to unseen data better than the traditional holistic approach while also allowing click-and-drag interaction for animation. Our model is composed of a collection of PCA sub-models that are independently trained but share boundaries. Boundary consistency and user-given constraints are enforced in a soft least mean squares sense to give the model flexibility while maintaining coherence. Our results show that the region-based model generalises better than its holistic counterpart when describing previously unseen motion capture data from multiple subjects. The decomposition of the face into several regions, which we determine automatically from training data, gives the user localised manipulation control. This feature allows the model to be used for face posing and animation in an intuitive style.
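The mechanism the abstract describes, independently trained PCA sub-models whose shared boundaries and user-given constraints are enforced in a soft least-squares sense, can be sketched as one stacked linear system. The sketch below uses synthetic numbers, scalar vertex coordinates, and single-mode regions purely for illustration; it is not the authors' data or implementation:

```python
import numpy as np

# Two PCA sub-models that share one boundary vertex. Region 1 covers
# vertices {0, 1, 2}; region 2 covers vertices {2, 3, 4}; vertex 2 lies
# on the shared boundary. Each region reconstructs its vertices as
# x_r = mu_r + P_r * c_r for a per-region coefficient c_r.
mu1 = np.array([0.0, 0.0, 1.0])   # region-1 mean (vertices 0, 1, 2)
P1  = np.array([1.0, 0.5, 2.0])   # region-1 basis (one mode)
mu2 = np.array([1.0, 0.0, 0.0])   # region-2 mean (vertices 2, 3, 4)
P2  = np.array([1.0, -1.0, 0.5])  # region-2 basis (one mode)

target = 2.0   # the user drags vertex 0 (in region 1) to this position
w = 1.0        # weight of the soft boundary-consistency term

# Unknowns: the per-region coefficients [c1, c2]. Stack one row per
# user constraint and one (weighted) row per shared boundary vertex:
#   user constraint:      P1[0]*c1              = target - mu1[0]
#   boundary match:   w*( P1[2]*c1 - P2[0]*c2 ) = w*( mu2[0] - mu1[2] )
A = np.array([[P1[0],       0.0       ],
              [w * P1[2], -w * P2[0]]])
b = np.array([target - mu1[0],
              w * (mu2[0] - mu1[2])])
(c1, c2), *_ = np.linalg.lstsq(A, b, rcond=None)

x1 = mu1 + P1 * c1   # reconstructed region-1 vertices
x2 = mu2 + P2 * c2   # reconstructed region-2 vertices
print(x1[0])         # constrained vertex follows the user drag
print(x1[2], x2[0])  # shared vertex agrees across the two regions
```

With many regions and constraints the system is overdetermined, and the least-squares solve trades off the user's edits against boundary coherence rather than satisfying either exactly, which is the "soft" behaviour the abstract refers to.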
CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Animation

Keywords: face modelling, animation, linear model, piece-wise model, interactive model

* e-mail: rafael.tena@disneyresearch.com
  e-mail: ftorre@cs.cmu.edu
  e-mail: iainm@disneyresearch.com

1 Introduction

Linear models, particularly those based on principal component analysis (PCA), have been used successfully on a broad range of human face-related applications; examples include Active Appearance Models [Cootes et al. 1998; Matthews and Baker 2004] and 3D Morphable Models [Blanz and Vetter 1999]. In the production of computerised facial animation, a common practice is to use blendshape animation models (or rigs). These models aim to represent a given facial configuration as a linear combination of a predetermined subset of facial poses that define the valid space of facial expressions [Bergeron and Lachapelle 1985; Pighin et al. 1998]. PCA and blendshape models differ from each other only in the nature of their basis vectors: the bases are orthogonal and lack a semantic meaning for PCA, versus non-orthogonal with an artist-defined and interpretable meaning for blendshape models.

Although PCA models achieve high compression, they are not generally used for animation because their bases lack semantic interpretation. Their parameters are not an intuitive set for animators to work with. This is typically not the case for blendshape models. However, until recently there were few published methods to manipulate blendshape models other than directly specifying the blend weights [Lewis and Anjyo 2010; Joshi et al. 2003]. The work of