Which Robot Am I Thinking About?
The Impact of Action and Appearance on People’s Evaluations of a Moral Robot
Bertram F. Malle
Dept. of Cognitive, Linguistic, and Psychological Sciences
Brown University, Providence, RI 02906
Email: bfmalle@brown.edu

Matthias Scheutz
Department of Computer Science
Tufts University, Medford, MA 02155

Jodi Forlizzi
HCI Institute and School of Design
Carnegie Mellon University, Pittsburgh, PA 15213

John Voiklis
Dept. of Cognitive, Linguistic, and Psychological Sciences
Brown University, Providence, RI 02906
Abstract—In three studies we found further evidence for a
previously discovered Human-Robot (HR) asymmetry in moral
judgments: that people blame robots more for inaction than
action in a moral dilemma but blame humans more for action
than inaction in the identical dilemma (where inaction allows
four persons to die and action sacrifices one to save the four).
Importantly, we found that people’s representation of the
“robot” making these moral decisions appears to be that of a
mechanical robot: when we manipulated the pictorial display of a
verbally described robot, people showed the HR asymmetry only
when making judgments about a mechanical-looking robot, not
when judging a humanoid robot. This is the first demonstration
that robot appearance affects people’s moral judgments about
robots.
Keywords: robot ethics; machine morality; human-robot
interaction; moral psychology; anthropomorphism.
I. INTRODUCTION
In recent years, discussions about the prospects and dangers
of intelligent machines have intensified, especially about
machines that might make autonomous life-and-death decisions
in military, medical, or search-and-rescue contexts. Robots, in
particular, have started to appear in various societal domains with
moral significance, from care for the elderly to education and
security. Some argue that we should refrain from building and
deploying any machines that could harm humans [1]; others argue
that stopping the deployment of increasingly autonomous robots
is not realistic, and we therefore need to equip robots with moral
competence to avoid unnecessary harm to humans [2], [3].
Arguments on either side of the debate have offered
philosophical, legal, and computational perspectives [4]–[6], but
little empirical research has examined ordinary people’s
perceptions of intelligent machines in these contexts—
perceptions that will determine which robots will be accepted in
which societal domains. Thus, we examined what people expect
and demand of robots that make significant moral decisions,
including ones involving life and death.
Empirical research methods from the cognitive and behavioral
sciences provide one set of tools to help answer this question.
This particular domain of inquiry poses challenges, however,
because we do not know the exact properties of near-future robots
that might make life-and-death decisions. We must therefore
create a series of potential scenarios and probe people’s responses
to these scenarios. Moreover, for such weighty decisions, live
experiments are not feasible (as they are for more minor moral
issues such as cheating [7]), so we must rely on well-crafted
simulation experiments to investigate people’s moral responses.
Finally, people’s responses to autonomous robots will change
over time, as science, industry, and media alter the reality of
robots in society and influence collective perceptions of this
reality. Cognitive and behavioral research can track such
longitudinal change and identify at least some of its determinants.
A second set of tools to answer the question of what people
demand of robots in moral decision situations comes from the
discipline of design [8], [9]. When building future robots, many
subtle design decisions must be made that have significant impact
on robot functionality and, equally important, on human
perceptions of their functionality. Such perceptions involve not
only user comfort and acceptability but also the potential
activation of fundamental human responses when interacting with
the robot, such as ascriptions of agency, intentionality, mind, and moral
capacity. In this paper we bring together the tools of cognitive
research and design inquiry to elucidate how, and under what
conditions, people judge artificial agents as morally blameworthy.
In particular, we examine whether robots are evaluated differently
from humans in moral situations and whether the robot’s
mechanical or humanoid appearance matters.
II. BACKGROUND
A. Judging Robots in Moral Dilemmas
Because decisions about life and death seem to be among the
primary concerns people have about robots today, recent research
has begun to investigate human perceptions of robots in moral
dilemmas [10], which can easily be designed to involve
conflictual life-and-death decisions [11]. Such dilemmas typically
involve a conflict between obeying a prosocial obligation (e.g.,
saving people who are in danger) and obeying a prohibition
against harm (e.g., killing a person in the attempt to save those
in danger). These studies have demonstrated that most people
show no reluctance in making moral judgments about a robot’s
decision in such a dilemma and that generally people’s judgments
of robots (and justifications for those judgments) are highly
similar to their judgments of humans [10]. To date, this is the
strongest evidence for the claim that people apply the same
psychological mechanisms for thinking about and evaluating
robot actions as they do for thinking about and evaluating human
actions (see also [12]–[14]). At the same time, an asymmetry has
emerged in how people perceive humans’ and robots’ decisions in
moral dilemmas: People consider a human agent’s intervention
(i.e., sacrificing one life while saving four lives) more
blameworthy than a nonintervention, but they consider a robot
agent’s nonintervention more blameworthy than an intervention
[10] (henceforth we call this the moral HR asymmetry).