3DR EXPRESS

Learning Motion Features for Example-Based Finger Motion Estimation for Virtual Characters

Christos Mousas · Christos-Nikolaos Anagnostopoulos

Received: 8 May 2017 / Revised: 6 July 2017 / Accepted: 11 July 2017
© 3D Research Center, Kwangwoon University and Springer-Verlag GmbH Germany 2017

Abstract  This paper presents a methodology for estimating the motion of a character's fingers from motion features of the virtual character's hand. In the presented methodology, the motion data is first segmented into discrete phases. A number of motion features are then computed for each motion segment of the character's hand. The motion features are pre-processed using restricted Boltzmann machines, and the optimal weight assigned to each feature in a metric is computed by feeding the different variations of semantically similar finger gestures into a support vector machine learning mechanism. The advantages of the presented methodology over previous solutions are the following: first, the computation of the optimal weights assigned to each motion feature in our metric is automated; second, the presented methodology correctly estimates about 17% more finger gestures than a previous method.

Keywords  Finger motion · Motion estimation · Character animation · Motion features · Feature pre-processing · Metric learning

1 Introduction

In character animation research, a variety of methodologies have been proposed to synthesize the motion of a virtual character as naturally as possible. However, the realism of a character's motion is not only related to its general representation (i.e., the full-body motion of the character); it also depends on the details that appear on the character's body. For instance, facial expressions and finger motion enhance the appearance of a motion sequence.
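The weighted feature metric summarized in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the feature vectors, and the weight values are all hypothetical, and the weights are simply passed in rather than learned with an SVM as the paper describes.

```python
import numpy as np

def weighted_feature_distance(f_a, f_b, w):
    """Weighted distance between two hand-motion feature vectors.

    In the paper, the per-feature weights are learned from variations
    of semantically similar finger gestures; here they are given as
    input (illustrative assumption).
    """
    diff = np.asarray(f_a, dtype=float) - np.asarray(f_b, dtype=float)
    return float(np.sqrt(np.sum(np.asarray(w, dtype=float) * diff ** 2)))

# Two hypothetical 3-dimensional hand-motion feature vectors:
# a large weight makes a feature dominate the distance, a small
# weight makes its mismatch nearly irrelevant.
d = weighted_feature_distance([1.0, 2.0, 0.5], [1.0, 0.0, 0.5], [1.0, 0.25, 1.0])
```

With uniform weights this reduces to the ordinary Euclidean distance between feature vectors; the learned weights reshape that metric so semantically similar gestures end up close together.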
These observations have been supported by various perceptual evaluation studies, such as [1–4]. If highly realistic motion data is required, the motion of the fingers should be captured simultaneously with the full-body motion of a performer. With motion capture systems, however, it is difficult to capture the full-body motion of an actor while also capturing facial expressions and finger gesture details. This is especially true when the captured actor performs locomotion tasks (i.e., moves within the capture area). For that reason, three basic

Electronic supplementary material  The online version of this article (doi:10.1007/s13319-017-0136-9) contains supplementary material, which is available to authorized users.

C. Mousas (✉)
Graphics and Entertainment Technology Lab, Department of Computer Science, Southern Illinois University, Carbondale, IL 62901, USA
e-mail: christos@cs.siu.edu

C.-N. Anagnostopoulos
Department of Cultural Technology and Communication, University of the Aegean, 81100 Mytilene, Greece
e-mail: canag@ct.aegean.gr

3D Res (2017) 8:25
DOI 10.1007/s13319-017-0136-9