Speech Communication 85 (2016) 29–42
Multimodal analysis of speech and arm motion for prosody-driven
synthesis of beat gestures
Elif Bozkurt∗, Yücel Yemez, Engin Erzin
Multimedia, Vision and Graphics Laboratory, College of Engineering, Koç University, 34450 Sariyer, Istanbul, Turkey
Article info
Article history:
Received 11 September 2015
Revised 3 October 2016
Accepted 9 October 2016
Available online 11 October 2016
Keywords:
Joint analysis of speech and gesture
Speech-driven gesture animation
Prosody-driven gesture synthesis
Speech rhythm
Unit selection
Hidden semi-Markov models
Abstract
We propose a framework for the joint analysis of speech prosody and arm motion, towards automatic synthesis and realistic animation of beat gestures from speech prosody and rhythm. In the analysis stage, we first segment motion capture data and speech audio into gesture phrases and prosodic units via temporal clustering, and assign a class label to each resulting gesture phrase and prosodic unit. We then train a discrete hidden semi-Markov model (HSMM) over the segmented data, where gesture labels are hidden states with duration statistics and frame-level prosody labels are observations. The HSMM structure allows us to effectively map sequences of shorter-duration prosodic units to longer-duration gesture phrases. In the analysis stage, we also construct a gesture pool consisting of the gesture phrases segmented from the available dataset, where each gesture phrase is associated with a class label and a speech rhythm representation. In the synthesis stage, we use a modified Viterbi algorithm with a duration model, which decodes the optimal gesture label sequence with duration information over the HSMM, given a sequence of prosody labels. In the animation stage, the synthesized gesture label sequence, with its duration and speech rhythm information, is mapped into a motion sequence using a multiple-objective unit selection algorithm. Our framework is tested on two multimodal datasets in speaker-dependent and speaker-independent settings. The resulting motion sequence, when accompanied by the speech input, yields natural-looking and plausible animations. We use objective evaluations to set the parameters of the proposed prosody-driven gesture animation system, and subjective evaluations to assess the quality of the resulting animations. The subjective evaluations show that the difference between the proposed HSMM-based synthesis and the motion capture synthesis is not statistically significant. Furthermore, the proposed HSMM-based synthesis is rated significantly better than a baseline synthesis that animates random gestures based only on joint-angle continuity.
© 2016 Elsevier B.V. All rights reserved.
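To make the decoding step concrete, a duration-explicit Viterbi pass over a discrete HSMM, of the general kind the abstract describes, can be sketched as follows. This is a minimal illustrative implementation under standard HSMM assumptions, not the authors' actual system; all function and variable names (`hsmm_viterbi`, `log_A`, `log_D`, etc.) are hypothetical.

```python
import numpy as np

def hsmm_viterbi(obs, log_pi, log_A, log_B, log_D):
    """Duration-explicit Viterbi decoding over a discrete HSMM.

    obs    : sequence of discrete observation symbols (e.g. frame-level prosody labels)
    log_pi : (N,) log initial probabilities of the N hidden states (gesture labels)
    log_A  : (N, N) log transition probabilities between hidden states
    log_B  : (N, M) log emission probabilities of the M observation symbols
    log_D  : (N, Dmax) log duration probabilities; log_D[j, d-1] = log P(dur = d | j)
    Returns one decoded state label per frame.
    """
    obs = np.asarray(obs)
    T = len(obs)
    N, Dmax = log_D.shape
    # delta[t, j]: best log score of any segmentation ending a state-j segment at frame t
    delta = np.full((T + 1, N), -np.inf)
    back = {}  # (t, j) -> (segment start frame, previous state)
    for t in range(1, T + 1):
        for j in range(N):
            for d in range(1, min(Dmax, t) + 1):
                emit = log_B[j, obs[t - d:t]].sum()  # emissions over the whole segment
                if t - d == 0:  # first segment of the sequence
                    score = log_pi[j] + log_D[j, d - 1] + emit
                    prev = (0, -1)
                else:           # best predecessor state ending at frame t - d
                    i = int(np.argmax(delta[t - d] + log_A[:, j]))
                    score = delta[t - d, i] + log_A[i, j] + log_D[j, d - 1] + emit
                    prev = (t - d, i)
                if score > delta[t, j]:
                    delta[t, j] = score
                    back[(t, j)] = prev
    # Backtrack over segments and expand to a frame-level label sequence
    path, t, j = [], T, int(np.argmax(delta[T]))
    while t > 0:
        start, prev_j = back[(t, j)]
        path[:0] = [j] * (t - start)
        t, j = start, prev_j
    return path
```

Because each frame considers every admissible segment duration, a single hidden gesture state can absorb a long run of frame-level prosody observations, which is precisely what lets short prosodic units map onto longer gesture phrases.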
1. Introduction
Gesticulation is an essential component of human communication. Speech and gestures form a composite communicative signal that enhances the naturalness and affective quality of communication. Although virtual environment designs in the human-computer interaction (HCI) field increasingly adopt and emphasize a human-centered perspective, natural, affective, and believable gesticulation is often missing from virtual character animations. In this context, automatic synthesis of gesticulation in synchrony with speech, which incorporates nonverbal communication components into virtual character animation, can help improve the plausibility of animations and can find a wide range of applications in human-centered HCI, video gaming, and
film industries. In this paper, we develop a multimodal system for speech-driven synthesis and animation of arm gestures, using a statistical framework for joint analysis of speech and gesticulation.
∗ Corresponding author. E-mail addresses: ebozkurt@ku.edu.tr (E. Bozkurt), yyemez@ku.edu.tr (Y. Yemez), eerzin@ku.edu.tr (E. Erzin).
Gesture and speech co-exist in time with a tight synchrony; they are planned and shaped by the cognitive state and produced together. In one of the pioneering studies on the gesture-speech relationship, Kendon (1980) proposed a widely accepted hierarchical model of gesture in terms of phases, phrases, and units. In this model, the core gestural element is the gesture phase. Gesture phases can be active or passive. An active gesture phase can be a stroke (a short and dynamic peak movement) together with a retraction or a preparation (in which the arm moves to the start position of the stroke phase). Passive gesture phases are movements such as hold and rest, in which the arm stays motionless. Combinations of phases constitute gesture phrases, and combinations of phrases in turn form gesture units. In this hierarchical model, semantic expressiveness increases with the level of hierarchy. In other words, gesture units are semantically more expressive than gesture phrases,
http://dx.doi.org/10.1016/j.specom.2016.10.004