Measurement of Moral Concern for Robots
Tatsuya Nomura (1,4)
1: Ryukoku University, Otsu, Shiga 520-2194, Japan
4: ATR Intelligent Robotics and Communication Laboratories, Keihanna, Kyoto 619-0288, Japan
nomura@rins.ryukoku.ac.jp

Takayuki Kanda (2,4)
2: Department of Social Informatics, Kyoto University, Yoshida-honmachi, Kyoto 606-8501, Japan
kanda@i.kyoto-u.ac.jp

Sachie Yamada (3,4)
3: Department of Psychological and Sociological Studies, Tokai University, Hiratsuka, Kanagawa 259-1292, Japan
s-yamada@tokai-u.jp
Abstract— We developed a self-report measure, the Moral Concern for Robots Scale (MCRS), which measures whether people believe that a robot has moral standing, deserves moral care, and merits protection. The results of an online survey (N = 200) confirmed the concurrent and predictive validity of the scale, in the sense that the scale scores successfully predict people's intentions for prosocial behaviors.
Keywords— moral concern, self-report scale
I. INTRODUCTION
Morality is an intrinsic human characteristic. People have an innate motivation to help others even when such actions or decisions decrease their own benefit. Although such moral cognition is usually applied to human beings, people sometimes expand it to include such non-human entities as animals and nature, e.g., extending basic human rights to the great apes [1]. Individual differences exist in moral expansiveness. A less morally expansive person restricts her moral concern to those entities she deems "close" (e.g., family), whereas a more morally expansive person extends her moral concern to more "distant" entities such as animals.
However, the opposite also occurs. Sometimes people refuse to expand their moral concern to include pets and robots, and mistreat them (e.g., [2]). Imagine a future scenario in which robots serve various roles in our daily lives. Robot abuse might become a serious societal problem. In a store, robot clerks might be abused and fail to maintain the store; robot workers might be cheated by their human co-workers and fail to receive appropriate cooperation from them; when a robot asks a human for help, it might receive scorn or abuse instead. For such future scenarios, we expect people to offer robots at least a minimal level of prosocial behavior, rather than harm, even if not a great level of morality.
We expect diverse moral relationships between individuals and robots, depending on such factors as personality, robot appearance and behavior, and interaction context. In some contexts, we want to elicit more moral concern to improve a robot's treatment. In other contexts, we might want to decrease moral concern so that users can easily manipulate robots as tools without being bothered by their well-being.
Here the fundamental research question is how to measure moral concern for robots. Our research establishes a self-report measure of this concept, i.e., moral concern for robots. Empirical HRI studies commonly use scales (self-report questionnaires). This paper reports the development of a scale for moral concern for a robot, called the Moral Concern for Robots Scale (MCRS).
II. SCALE DEVELOPMENT
To build an item pool for the MCRS, we adopted nine items from the interview protocol of Kahn et al. [3], which asks about moral concern regarding the disposal/destruction and forced labor of robots; two items from the Feelings toward Nature Scale [4], which asks whether people feel negative emotions when nature is destroyed; and five items from the Thoughtfulness toward Friends Scale [5], which asks about prosocial behaviors toward friends. Moreover, we created four items that mention humans' moral treatment of and accountability for robots, based on the language in the instructions and definitions of the Moral Expansiveness Scale [6], and eight items based on scenes of possible robot abuse. In total, we prepared 28 candidate items for the prototype version of the MCRS.
Then we conducted a questionnaire-based survey with 121 Japanese university students (66 males, 55 females; mean age: 20.1, SD = 1.6). To provide a context for the answer targets, we first presented a scene in which a robot worked in a city. We then administered the prototype version of the MCRS, which consists of the above 28 questionnaire items. Each item was rated on a 7-point Likert scale (1: strongly disagree, to 7: strongly agree).
We analyzed the collected data with an exploratory factor analysis using principal component analysis and promax rotation. A two-factor structure was chosen based on the scree plot and item consistency. Two subscales (factors) consisting of 21 items (first factor: 12 items; second factor: 9 items) were extracted based on the factor loadings, the contents of the items, and the item analysis results for each subscale, which consisted of item-total (I-T) correlation coefficients and α-coefficients. The cumulative contribution ratio of these two factors was 47.3%, which we regarded as sufficient coverage. The Cronbach's α-coefficients for the subscales were .912 and .876, which indicate good internal consistency.
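The psychometric quantities used above (eigenvalue-based factor retention via a scree criterion, Cronbach's α, and item-total correlations) can be sketched as follows. This is an illustrative computation on synthetic Likert-type responses, not the authors' actual analysis code; the sample size, item count, and data-generation parameters are hypothetical placeholders chosen only to mirror the study's 7-point format.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical responses: 121 respondents x 12 items on a 7-point scale,
# generated from one common latent factor so the items cohere.
n_respondents, n_items = 121, 12
latent = rng.normal(size=(n_respondents, 1))
raw = latent + 0.8 * rng.normal(size=(n_respondents, n_items))
responses = np.clip(np.round(4 + 1.5 * raw), 1, 7)

# Scree criterion: eigenvalues of the inter-item correlation matrix in
# descending order; a sharp drop suggests how many factors to retain.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / var(total)).
item_vars = responses.var(axis=0, ddof=1)
total_var = responses.sum(axis=1).var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Item-total (I-T) correlation: each item against the sum of the
# remaining items (corrected item-total correlation).
totals = responses.sum(axis=1)
it_corrs = np.array([
    np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
    for j in range(n_items)
])

print("first eigenvalues:", eigenvalues[:3].round(2))
print(f"Cronbach's alpha: {alpha:.3f}")
print(f"min item-total r: {it_corrs.min():.3f}")
```

The promax-rotated factor extraction itself is typically done with a dedicated statistics package rather than by hand; the quantities computed here are the ones a reader would inspect to reproduce the retention and reliability decisions described in the text.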
The first subscale, called basic moral concern, consists of items that ask whether people have general moral concern for robots (e.g., when they are destroyed or suffer physical harm) and whether people would spend their resources to provide better welfare for them (e.g., helping them and
The research was supported by JST CREST, Japan.