Autonomous Robots
https://doi.org/10.1007/s10514-018-9699-4
One-shot learning of human–robot handovers with triadic interaction meshes
David Vogt¹ · Simon Stepputtis² · Bernhard Jung¹ · Heni Ben Amor²
Received: 31 December 2016 / Accepted: 13 January 2018
© Springer Science+Business Media, LLC, part of Springer Nature 2018

This is one of the several papers published in Autonomous Robots comprising the Special Issue on Learning for Human–Robot Collaboration.

Corresponding author: David Vogt, contact@david-vogt.com
Simon Stepputtis, sstepput@asu.edu · Bernhard Jung, bernhard.jung@informatik.tu-freiberg.de · Heni Ben Amor, hbenamor@asu.edu

¹ Faculty of Mathematics and Informatics, Technical University Bergakademie Freiberg, 09599 Freiberg, Germany
² School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, 699 S Mill Ave, Tempe, AZ 85281, USA
Abstract
We propose an imitation learning methodology that allows robots to seamlessly retrieve and pass objects to and from human
users. Instead of hand-coding interaction parameters, we extract relevant information such as joint correlations and spatial
relationships from a single task demonstration of two humans. At the center of our approach is an interaction model that
enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose
a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The
feasibility of the approach is evaluated in a within-subjects user study, which shows that human–human task demonstrations can lead
to more natural and intuitive interactions with the robot.
Keywords Human–human demonstration · Human–robot interaction · Handover · Interaction mesh
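As a rough sketch of the interaction-mesh idea summarized in the abstract (not the authors' implementation), such a triadic mesh could be formed by jointly tetrahedralizing the tracked marker positions of both interaction partners and the manipulated object; the Delaunay-based construction, marker counts, and function name below are illustrative assumptions.

```python
# Illustrative sketch only: one possible way to link both partners and the
# object in a single volumetric mesh. The Delaunay-based construction and
# all names here are assumptions, not the paper's exact method.
import numpy as np
from scipy.spatial import Delaunay

def build_triadic_mesh(partner_a, partner_b, obj_markers):
    """partner_a, partner_b, obj_markers: (N, 3) arrays of 3D marker positions."""
    vertices = np.vstack([partner_a, partner_b, obj_markers])
    tetrahedralization = Delaunay(vertices)          # volumetric mesh over all points
    return vertices, tetrahedralization.simplices    # (num_tets, 4) vertex indices

# Example with stand-in data: 15 markers per person, 4 on the object.
verts, tets = build_triadic_mesh(np.random.rand(15, 3),
                                 np.random.rand(15, 3),
                                 np.random.rand(4, 3))
print(verts.shape, tets.shape)
```

Preserving the local spatial relationships encoded by such a mesh (for example, via Laplacian coordinates of its vertices) is one way a structure of this kind can be used to transfer a demonstrated interaction to new situations.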
1 Introduction
Handing over an object to another person is arguably one of the most essential physical interaction skills. Regardless of whether we are at home, in the workplace, at a restaurant, or at the hospital, we are often faced with situations in which we either receive an object or hand over an object to another person. Hence, for robots to be reliably used as assistants to humans, they have to be able to engage in similar interactions and deal with the large variability inherent to such
handover tasks. Handovers are joint tasks in which the giver and receiver coordinate their movements in order to ensure the successful transfer of the object from one to the other (see Fig. 1). This requires the interaction partners to react and adapt to each other's movements, timing, style, and posture.
With the advent of collaborative robots, research on human–robot handovers has attracted increasing interest in the robotics community. Various strategies for specifying and learning such behavior have been put forward, e.g., in Duvallet et al. (2016) and Ewerton et al. (2015). While these approaches have produced important insights, they mostly model the human–robot handover as a dyadic interaction process: its parameters are determined solely by the two interaction partners and not by the handled object. However, especially in situations in which an object is handed from a human to a robot, it is important to incorporate the object as an additional element in the interaction process. In addition, the majority of approaches focuses on the spatial relationship of the end-effectors during the task; only the position of the human hand is used to determine the robot's response.
In this paper, we propose a methodology for learning
triadic interaction meshes from observed human–human
demonstrations. In particular, we focus on scenarios in which
the robot receives an object from a human partner. Given
a single demonstration, we can extract information about
the synchrony in movement between different body parts
of the two interactants, spatial relationships between inter-