Robotics and Autonomous Systems 58 (2010) 322–332
Affective social robots✩
Rachel Kirby a,∗, Jodi Forlizzi b, Reid Simmons a
a Carnegie Mellon University, Robotics Institute, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
b Carnegie Mellon University, Human–Computer Interaction Institute and Design Department, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
article info
Article history:
Received 25 July 2007
Received in revised form
4 September 2009
Accepted 21 September 2009
Available online 30 September 2009
Keywords:
Human–robot interaction
Social robots
Emotions
Moods
Affective modeling
abstract
For human–robot interaction to proceed in a smooth, natural manner, robots must adhere to human social
norms. One such human convention is the use of expressive moods and emotions as an integral part of
social interaction. Such expressions are used to convey messages such as ‘‘I’m happy to see you’’ or ‘‘I want
to be comforted,’’ and people’s long-term relationships depend heavily on shared emotional experiences.
Thus, we have developed an affective model for social robots. This generative model attempts to create
natural, human-like affect and includes distinctions between immediate emotional responses, the overall
mood of the robot, and long-term attitudes toward each visitor to the robot, with a focus on developing
long-term human–robot relationships. This paper presents the general affect model as well as particular
details of our implementation of the model on one robot, the Roboceptionist. In addition, we present
findings from two studies that demonstrate the model’s potential.
© 2009 Elsevier B.V. All rights reserved.
1. Introduction
Social robots, such as those that operate in healthcare
institutions and in museums, need to communicate with people
in ways that are natural and easily understood, even by non-
roboticists. We believe that one way to improve these interactions
is to have robots display changing moods and emotions, just as
humans do. This paper describes a generative model of affect that
attempts to strongly mimic how people emote in order to produce
as natural-seeming a system as possible. The model is designed
particularly for robots that interact with people over long periods
of time. As such, our focus is on modeling the long-term aspects
of, and interactions between, emotions, moods, and attitudes. We
have implemented our affective model on the Roboceptionist, a
robot that interacts with people on a daily basis [1]. In addition, we
have run several experiments to demonstrate the model’s use in
social situations, which show that people do recognize emotional
expressions on the robot’s face (Fig. 1) and that such expressions
can significantly influence how people interact with the robot.
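To make the three affective layers described above concrete, the following is a minimal sketch, not the authors' actual implementation: emotions are fast, event-driven responses, mood is a slowly drifting baseline, and attitudes persist per visitor across interactions. All names, valence scales, and decay rates here are illustrative assumptions.

```python
class AffectModel:
    """Hypothetical three-layer affect model: emotion (fast),
    mood (slow), and per-visitor attitudes (long-term)."""

    def __init__(self):
        self.emotion = 0.0    # immediate response, valence in [-1, 1]
        self.mood = 0.0       # slow-moving baseline valence
        self.attitudes = {}   # visitor id -> long-term valence

    def on_event(self, visitor, valence):
        # An interaction triggers an immediate emotional response,
        # biased by any existing attitude toward this visitor.
        attitude = self.attitudes.get(visitor, 0.0)
        self.emotion = max(-1.0, min(1.0, valence + 0.3 * attitude))
        # Repeated experiences gradually reshape the attitude itself.
        self.attitudes[visitor] = attitude + 0.1 * (valence - attitude)

    def step(self):
        # Each time step: mood drifts toward the current emotion,
        # while the emotion decays back toward the mood.
        self.mood += 0.05 * (self.emotion - self.mood)
        self.emotion += 0.5 * (self.mood - self.emotion)
```

In this sketch, a single positive encounter produces a strong immediate emotion that fades over subsequent steps, while its trace survives in the mood and in the visitor's attitude, which in turn colors future encounters, mirroring the long-term focus of the model described in this paper.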
✩ This work was funded in part by an NSF Graduate Research Fellowship to the first author and by NSF grants #IIS-0329014 and #IIS-0121426. Portions of this work have appeared previously in Gockley et al. (2006) [34,37].
∗ Corresponding author. Fax: +1 412 268 5571.
E-mail addresses: rachelg@cs.cmu.edu (R. Kirby), forlizzi@cs.cmu.edu (J. Forlizzi), simmons@cs.cmu.edu (R. Simmons).

1.1. Human interaction

Affect, such as mood and emotion, plays a major role in human interaction. Quite often, emotional reactions are caused by social interactions, influenced by societal and cultural norms, or used to communicate desires to other people [2]. Emotions
carry conversational content, allowing conversational partners to
form common ground and communicate more effectively [3]. For
instance, an expression of sadness—facial, vocal, or behavioral—
may indicate a desire to be comforted. Furthermore, what mood
a person is in has a strong impact on how that person interacts
with others [4]; for example, people who are interacting may
‘‘catch’’ each other’s moods and emotions, unconsciously matching
their own emotional states to their conversational partners’ [5].
Frijda argues that the primary reason for social interaction is,
in fact, to experience emotions, which serve to form a ‘‘sense
of coherence with others’’ [6]. Suppression of emotions can be
highly detrimental to relationship forming and is disruptive to
conversations [7].
A well-studied effect of human–computer interaction is that
people tend to treat computer agents in the same way that they
treat other people, forming social relationships with them [8].
We believe that this tendency to form social relationships with
computers will also apply to robots, perhaps even more so. If that is
the case, then people will respond to a robot’s emotions as though
the robot were human, and will expect the robot’s emotional
responses to be consistent across multiple interactions.
1.2. Human–robot social interaction
In recent years, the robotics community has seen a gradual
increase in social robots, that is, robots that exist primarily to
interact with people. Museum tour-guide robots [9] and robots
that interact with the elderly [10] demonstrate not only the
doi:10.1016/j.robot.2009.09.015