Eurographics/ACM SIGGRAPH Symposium on Computer Animation (2006)
M.-P. Cani, J. O'Brien (Editors)

Motion Templates for Automatic Classification and Retrieval of Motion Capture Data

Meinard Müller and Tido Röder
Department of Computer Science, University of Bonn, Germany

Abstract

This paper presents new methods for automatic classification and retrieval of motion capture data facilitating the identification of logically related motions scattered in some database. As the main ingredient, we introduce the concept of motion templates (MTs), by which the essence of an entire class of logically related motions can be captured in an explicit and semantically interpretable matrix representation. The key property of MTs is that the variable aspects of a motion class can be automatically masked out in the comparison with unknown motion data. This facilitates robust and efficient motion retrieval even in the presence of large spatio-temporal variations. Furthermore, we describe how to learn an MT for a specific motion class from a given set of training motions. In our extensive experiments, which are based on several hours of motion data, MTs proved to be a powerful concept for motion annotation and retrieval, yielding accurate results even for highly variable motion classes such as cartwheels, lying down, or throwing motions.

Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation

1. Introduction

The typical life cycle of a motion capture clip in the conventional production of computer-generated animations is very short: after some rehearsal, a motion clip is captured, incorporated in a single 3D scene, and then never used again. For reasons of flexibility, efficiency, and cost, much research on motion reuse for off-line and on-line synthesis of new motions from prerecorded motion data has been conducted. Here, the identification and extraction of logically related motions scattered within a large data set arises as a major problem. Such automatic methods for comparison, classification, and retrieval of motion data also play an important role in fields such as sports sciences, biometrics, medicine, and computer vision.

One major problem in content-based comparison of motion data is that logically similar motions need not be numerically similar, see [KG04]. In other words, there are certain aspects associated with a motion class that may show significant spatio-temporal variations between different executions of the motion, while other aspects are typically consistent. Like a fingerprint, these consistent aspects form the very essence of the motion class. In this paper, we propose a novel method for capturing the spatio-temporal characteristics of an entire motion class in a compact matrix representation called a motion template (MT). Given a set of training motions representing a motion class, a motion template that explicitly encodes the consistent and the variable aspects of the motion class can be learned. In addition, motion templates have a direct, semantic interpretation: an MT can easily be edited, manually constructed from scratch, combined with other MTs, extended, and restricted, thus providing a great deal of flexibility.

Based on our matching techniques, motion templates provide a fully automatic way of retrieving logically related motion segments from a large database and classifying or annotating segments of unknown motions.
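To make the template representation concrete, consider the following Python sketch. It is a minimal illustration under our own naming, not the paper's implementation: given boolean feature matrices of size features x frames, one per training motion and already brought to a common length, a template is obtained by entrywise averaging, and a simple quantization step then separates consistent from variable aspects. The function names and the threshold value are illustrative choices.

import numpy as np

def learn_motion_template(feature_matrices):
    # Entrywise average of boolean feature matrices (features x frames).
    # The iterative time-warping of the training motions is omitted in
    # this sketch; we simply assume a common length.
    return np.mean([m.astype(float) for m in feature_matrices], axis=0)

def quantize_template(template, delta=0.1):
    # Entries near 0 or 1 are consistent across the class; everything
    # else is marked as variable by a wildcard value of 0.5.
    q = np.full_like(template, 0.5)   # variable aspects -> wildcard
    q[template <= delta] = 0.0        # consistently inactive feature
    q[template >= 1.0 - delta] = 1.0  # consistently active feature
    return q

# Three toy executions: feature row 0 is consistent, row 1 is variable.
motions = [np.array([[1, 1, 0, 0], [0, 1, 0, 1]]),
           np.array([[1, 1, 0, 0], [1, 0, 0, 0]]),
           np.array([[1, 1, 0, 0], [0, 0, 1, 1]])]
print(quantize_template(learn_motion_template(motions)))
# [[1.  1.  0.  0. ]
#  [0.5 0.5 0.5 0.5]]

The wildcard entries are exactly what a matching procedure can ignore, which is how the variable aspects of a class are masked out in the comparison with unknown motion data.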
A key contribution of our paper is to automatically exclude the variable aspects of a motion in the matching process while focusing on the consistent aspects; it is this idea that allows us to identify logically related motions even in the presence of large variations. This strategy can also be viewed as an automatic way of selecting appropriate features for the comparison in a locally adaptive fashion. In our experiments, we used qualitative boolean features that express suitable geometric relations between parts of the human body, as introduced by Müller et al. [MRC05]. As an important advantage, such qualitative features are invariant under the global position and orientation of the body and largely robust to spatial variations in the way a motion is executed.
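The sketch below pairs these two ingredients: one relational boolean feature in the spirit of [MRC05] and a masked comparison that skips the wildcard entries of a quantized template. The chosen plane and joint roles are illustrative, not the exact feature set of [MRC05], and the matching in this paper is based on a variant of dynamic time warping rather than the frame-aligned comparison shown here.

import numpy as np

def in_front_of_plane(p1, p2, p3, q):
    # Relational boolean feature: does the test joint position q lie in
    # front of the oriented plane through the joint positions p1, p2, p3
    # (e.g., a plane approximating the frontal plane of the body)?
    normal = np.cross(p2 - p1, p3 - p1)
    return bool(np.dot(normal, q - p1) > 0.0)

def masked_distance(quantized_template, feature_matrix):
    # Frame-aligned comparison of a quantized template with the boolean
    # feature matrix of an equally long motion segment. Wildcard (0.5)
    # entries, i.e. the variable aspects of the class, are excluded.
    consistent = quantized_template != 0.5
    if not consistent.any():
        return 0.0
    diff = np.abs(quantized_template - feature_matrix.astype(float))
    return float(diff[consistent].mean())   # 0.0 means a perfect match

Continuing the toy example above, masked_distance(quantize_template(learn_motion_template(motions)), motions[0]) returns 0.0: every training motion matches the template perfectly on the consistent entries, however much the variable feature row differs between executions.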