Breathing Life Into Biomechanical User Models
Aleksi Ikkala, Aalto University, Finland
Florian Fischer, University of Bayreuth, Germany
Markus Klar, University of Bayreuth, Germany
Miroslav Bachinski∗, University of Bayreuth, Germany
Arthur Fleig, University of Bayreuth, Germany
Andrew Howes, University of Birmingham, United Kingdom
Perttu Hämäläinen, Aalto University, Finland
Jörg Müller, University of Bayreuth, Germany
Roderick Murray-Smith, University of Glasgow, Scotland
Antti Oulasvirta, Aalto University, Finland
Figure 1: We present an approach for generative simulation of interaction, using perceptually controlled biomechanical models that interact with physical devices. The users are modelled with a combination of muscle-actuated biomechanical models and perception models, and we use deep reinforcement learning to learn control policies by maximizing task-specific rewards. As a showcase, we apply a state-of-the-art upper body model to four HCI tasks of increasing difficulty: pointing, tracking, choice reaction, and parking a remote control car via joystick.
ABSTRACT
Forward biomechanical simulation in HCI holds great promise as a tool for evaluation, design, and engineering of user interfaces. Although reinforcement learning (RL) has been used to simulate biomechanics in interaction, prior work has relied on unrealistic assumptions about the control problem involved, which limits the plausibility of emerging policies. These assumptions include direct torque actuation as opposed to muscle-based control; direct, privileged access to the external environment, instead of imperfect sensory observations; and lack of interaction with physical input devices. In this paper, we present a new approach for learning muscle-actuated control policies based on perceptual feedback in interaction tasks with physical input devices. This allows modelling of more realistic interaction tasks with cognitively plausible visuomotor control. We show that our simulated user model successfully learns a variety of tasks representing different interaction methods, and that the model exhibits characteristic movement regularities observed in studies of pointing. We provide an open-source implementation which can be extended with further biomechanical models, perception models, and interactive environments.

∗Also with University of Bergen.

This work is licensed under a Creative Commons Attribution International 4.0 License.
UIST ’22, October 29-November 2, 2022, Bend, OR, USA
© 2022 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-9320-1/22/10.
https://doi.org/10.1145/3526113.3545689
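To make the control problem described above concrete, the following toy sketch is our own construction, not the paper's implementation: a 1-D "pointing" plant driven by a bounded, muscle-like activation, observed through a noisy perception model, and controlled by a policy whose single gain is tuned by simple random search. All names and dynamics here are invented for illustration; the actual work uses a full upper-body biomechanical model and deep RL in place of these stand-ins.

```python
import random

def muscle_plant(pos, activation, dt=0.05):
    # Activation in [0, 1] maps to a bounded "muscle" force: values above
    # 0.5 pull toward +x, values below pull back (agonist/antagonist pair).
    force = activation - 0.5
    return pos + dt * force

def perceive(pos, target, rng):
    # Imperfect sensory observation: the egocentric target error, corrupted
    # by Gaussian noise, rather than privileged access to the true state.
    return (target - pos) + rng.gauss(0.0, 0.01)

def rollout(gain, rng, target=0.7, steps=200):
    # Run one episode; the task-specific reward penalizes distance to target.
    pos, reward = 0.0, 0.0
    for _ in range(steps):
        obs = perceive(pos, target, rng)
        activation = min(1.0, max(0.0, 0.5 + gain * obs))  # clip to muscle range
        pos = muscle_plant(pos, activation)
        reward -= abs(target - pos)
    return reward

def train(seed=0, iters=300):
    # Random-search hill climbing over the policy gain -- a minimal stand-in
    # for the deep RL policy optimization used in the paper.
    rng = random.Random(seed)
    best_gain = 0.0
    best_reward = rollout(best_gain, random.Random(seed))
    for _ in range(iters):
        cand = best_gain + rng.gauss(0.0, 0.5)
        r = rollout(cand, random.Random(seed))  # same noise seed for fairness
        if r > best_reward:
            best_gain, best_reward = cand, r
    return best_gain, best_reward
```

Because actuation is bounded and observations are noisy, the learned gain trades off speed against overshoot, a crude analogue of the speed–accuracy regularities the full model reproduces in pointing.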