Motor Babble: Morphology-Driven Coordinated Control of
Articulated Characters
Avinash Ranganath
Clemson University
Clemson, SC, USA
arangan@clemson.edu
Avishek Biswas
Clemson University
Clemson, SC, USA
avisheb@clemson.edu
Ioannis Karamouzas
Clemson University
Clemson, SC, USA
ioannis@clemson.edu
Victor B. Zordan
Clemson University
Clemson, SC, USA
vbz@clemson.edu
ABSTRACT
Locomotion in humans and animals is highly coordinated, with
many joints moving together. Learning similar coordinated loco-
motion in articulated virtual characters, in the absence of reference
motion data, is a challenging task due to the high number of degrees
of freedom and the redundancy that comes with it. In this paper,
we present a method for learning locomotion for virtual charac-
ters in a low-dimensional latent space which defines how different
joints move together. We introduce a technique called motor babble,
wherein a character interacts with its environment by actuating
its joints through uncoordinated, low-level (motor) excitations, re-
sulting in a corpus of motion data from which a manifold latent
space is extracted. Dimensions of the extracted manifold define a
wide variety of synergies pertaining to the character and, through
reinforcement learning, we train the character to learn locomotion
in the latent space by selecting a small set of appropriate latent
dimensions, along with learning the corresponding policy.
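The pipeline described above can be illustrated with a toy sketch (not the paper's implementation): random per-joint excitations stand in for motor babble, the resulting pose corpus is reduced with PCA to obtain candidate synergies, and a low-dimensional latent action is mapped back to a coordinated full-body actuation. The joint-angle proxy, the number of joints, and the choice of three latent dimensions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_joints, n_steps = 10, 5000

# Uncoordinated motor babble: random, low-level per-joint excitations.
# Integrating them gives a crude proxy for joint-angle trajectories.
excitations = rng.normal(size=(n_steps, n_joints))
poses = np.cumsum(excitations, axis=0) * 0.01

# PCA on the pose corpus: the top principal directions are candidate
# "synergies", i.e. directions along which many joints move together.
centered = poses - poses.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)  # variance captured per dimension

# A small latent action z (e.g. chosen by an RL policy) maps back to a
# coordinated actuation of all joints via the selected synergies.
k = 3                       # small set of latent dimensions
z = rng.normal(size=k)      # hypothetical latent action
pose_offset = z @ vt[:k]    # full-body, coordinated joint offsets
assert pose_offset.shape == (n_joints,)
```

In this sketch a policy would act in the 3-dimensional latent space rather than the 10-dimensional joint space, which is the dimensionality reduction the abstract describes.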
CCS CONCEPTS
• Computing methodologies → Physical simulation; Learning
latent representations; Reinforcement learning.
KEYWORDS
character animation, physics-based control, reinforcement learning,
animal locomotion
ACM Reference Format:
Avinash Ranganath, Avishek Biswas, Ioannis Karamouzas, and Victor B.
Zordan. 2021. Motor Babble: Morphology-Driven Coordinated Control of
Articulated Characters. In Motion, Interaction and Games (MIG ’21), November
10–12, 2021, Virtual Event, Switzerland. ACM, New York, NY, USA,
10 pages. https://doi.org/10.1145/3487983.3488291
MIG ’21, November 10–12, 2021, Virtual Event, Switzerland
© 2021 Association for Computing Machinery.
ACM ISBN 978-1-4503-9131-3/21/11…$15.00
https://doi.org/10.1145/3487983.3488291
Figure 1: Locomotion learned from morphologically specific
motor babble.
1 INTRODUCTION
Despite recent advances in trajectory optimization and reinforce-
ment learning, it remains challenging to learn motor skills for
physics-based articulated characters. While human motion data has
been used to bootstrap control for humanoid characters, animating
complex non-human characters like those seen in Figure 1 presents
a challenging control problem that can be underspecified and
prohibitively high dimensional. While there is typically an ample
space of control policies to accomplish motor tasks, not all results
lead to natural and coordinated motion. This paper introduces an
approach that attempts to mitigate this problem by extracting
coordinated motor activations drawn directly from the character’s
own dynamics, using a technique we call “motor babble” after its
inspiration from robotics.
State-of-the-art deep reinforcement learning (DRL) approaches
excel at generating natural control policies for physically simulated
humanoids, and, more recently, for simple quadrupeds by imitating
motion capture clips of expert behaviors [Park et al. 2019; Peng