Getting to know Pepper
Effects of people's awareness of a robot's capabilities on their trust in the robot
Alessandra Rossi
Adaptive Systems Research Group,
University of Hertfordshire
Hatfield, UK
a.rossi@herts.ac.uk
Patrick Holthaus
Adaptive Systems Research Group,
University of Hertfordshire
Hatfield, UK
p.holthaus@herts.ac.uk
Kerstin Dautenhahn
Departments of Electrical and
Computer Engineering/Systems,
University of Waterloo
Waterloo, Ontario, Canada
Adaptive Systems Research Group,
University of Hertfordshire
Hatfield, UK
kerstin.dautenhahn@uwaterloo.ca
Kheng Lee Koay
Adaptive Systems Research Group,
University of Hertfordshire
Hatfield, UK
k.l.koay@herts.ac.uk
Michael L. Walters
Adaptive Systems Research Group,
University of Hertfordshire
Hatfield, UK
m.l.walters@herts.ac.uk
ABSTRACT
This work investigates how human awareness of a social robot's
capabilities relates to trusting this robot to handle different tasks.
We present a user study that relates knowledge at different quality
levels to participants' ratings of trust. Secondary school pupils were
asked to rate their trust in the robot after three types of exposure:
a video demonstration, a live interaction, and a programming task.
The study revealed that the pupils' trust is positively affected across
different domains after each session, indicating that human users
trust a robot more the more aware they are of its capabilities.
CCS CONCEPTS
· Computer systems organization → Robotics; · Computing
methodologies → Cognitive robotics; · Human-centered computing → User studies;
KEYWORDS
Human-Robot Interaction, Trust in robots, HRI awareness, Social
robotics, UK robotics week
ACM Reference Format:
Alessandra Rossi, Patrick Holthaus, Kerstin Dautenhahn, Kheng Lee Koay,
and Michael L. Walters. 2018. Getting to know Pepper: Effects of people's
awareness of a robot's capabilities on their trust in the robot. In 6th Interna-
tional Conference on Human-Agent Interaction (HAI '18), December 15–18,
2018, Southampton, United Kingdom. ACM, New York, NY, USA, 7 pages.
https://doi.org/10.1145/3284432.3284464
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from permissions@acm.org.
HAI '18, December 15–18, 2018, Southampton, United Kingdom
© 2018 Association for Computing Machinery.
ACM ISBN 978-1-4503-5953-5/18/12. . . $15.00
https://doi.org/10.1145/3284432.3284464
1 INTRODUCTION
Trust is widely assumed to be one of the key factors in human users'
acceptance of social robots in human-centred environments [24].
However, a human user's awareness of the robot's skills also has
significant effects on the interaction quality [1]. This work hence
investigates how human trust in a social robot is affected by the
interaction history and the human's knowledge of the robot's
capabilities and limitations.
Trust between humans is constructed from a perception of ability,
benevolence and integrity [25]. In Human-Computer Interaction,
Muir and Moray [26] showed that people's trust in a machine was
strongly affected by the machine's good performance. Indeed, trust
is a key factor in the acceptance of an autonomous robot as a
peer, assistant or companion in human-centred environments. It
can determine humans’ perception of the usefulness of imparted
information and capabilities of a robot [18, 31, 32].
Human awareness of a social robot's skills can be gained through
robot appearance and behaviours, including their common interac-
tion history [19]. Typically, people naive to social robots in terms
of real-world encounters already have certain expectations about
their functionalities based on fictional movies and stories. In real-
ity, though, there is a significant gap between the current state of
robotics research and science fiction [20], and sometimes even ad-
vertisements for real robots that make use of artificial intelligence¹.
As a consequence, negative effects on the interaction quality have
to be considered when the user's expectations of the robot are
violated [23].
Within this paper, we investigate the relationship between hu-
man users' expectations of the robot and the quality of a Human-
Robot Interaction (HRI). In particular, we analyse the impact of
repeated interactions that reveal different aspects of the robot's
capabilities step by step on the users' trust ratings of the robot.
With this approach, we gain insights into how human awareness of
the robot affects their trust in it.
¹https://www.youtube.com/watch?v=SSecbMFQK1I