Critic, Compatriot, or Chump?: Responses to Robot Blame Attribution

Victoria Groom, Jimmy Chen, Theresa Johnson, F. Arda Kara, Clifford Nass
Department of Communication, Stanford University, Stanford, CA
vgroom@stanford.edu

Abstract—As their abilities improve, robots will be placed in roles of greater responsibility and specialization. In these contexts, robots may attribute blame to humans in order to identify problems and help humans make sense of complex information. In a between-participants experiment with a single factor (blame target) and three levels (human blame vs. team blame vs. self blame), participants interacted with a robot in a learning context, teaching it their personal preferences. The robot performed poorly, then attributed blame to the human, the team, or itself. Participants demonstrated a powerful and consistent negative response to the human-blaming robot, and preferred the self-blaming robot over both the human-blaming and team-blaming robots. Implications for theory and design are discussed.

Keywords – human-robot interaction; blame attribution; politeness; face-threatening acts

I. INTRODUCTION

As robots become more sophisticated and are deployed more widely, their ability to communicate effectively and politely will become increasingly important. Whether or not robots should be treated as teammates [1], they will assume specialized roles, sometimes acquiring and leveraging information unknown to their human partners. With this increased expertise, robots may be placed in roles more equal to those of humans and, in some cases, will be in superior positions to make judgments.

Many current technologies violate basic rules of politeness. Computers and interfaces alert users to inadequacies or failures with error messages that deflect responsibility, sometimes implicating the user as the source of the problem. Such deflections elicit frustration and anger from users, negatively affecting their attitudes toward the technology. As robots are increasingly expected to demonstrate high levels of expertise, the need for robots to present information that humans may not want to hear is growing. As current technologies demonstrate, failure to present this information politely can have significant negative consequences.

This study examines one type of difficult conversation that robots may soon engage in: attributing blame to humans. Attributing blame can be useful, as it aids understanding and helps identify problems. If robots are to succeed in blaming humans for failures, however, they must do so in a way that does not humiliate and aggravate their human partners. This study evaluates the effects of robots attributing blame to different targets, varying whether the robot attributes blame to a human partner, a human-robot team, or itself. The results provide guidelines for designing robots that can assign blame while maintaining a positive human-robot relationship.

II. RELATED WORK

A. Robot Social Actors

The Computers as Social Actors (CASA) paradigm [2][3] suggests that people respond to technologies as social actors, applying the same social rules used during human-human interaction. The original studies that established CASA featured simple desktop computers, but subsequent studies have revealed that people also apply social rules when interacting with voice-based interfaces and pictorial agents [4]. More recent field and experimental research has demonstrated that people treat robots as social actors, establishing social rapport with them [5].
For example, as with humans, the competitive or collaborative dynamic of a human-robot relationship affects attitudes toward the robot [6]. It is not surprising that robots elicit social responses. Robots' social cues may be even more powerful than those of computers and on-screen characters, due in large part to robots' embodied nature. Bodies facilitate nonverbal communication, with proxemics and gesturing affecting humans' responses [7][8][9].

1) Face-threatening acts: As robots gain knowledge and status, they will inevitably need to have difficult conversations with humans involving disagreement or the assignment of blame. These assertions are face-threatening acts, which are likely to leave humans feeling threatened, upset, or humiliated [10][11]. Because humans apply the same social rules to robots as they do to other humans, robots may be able to mitigate the damage these face-threatening acts inflict on the human-robot relationship by leveraging the strategies humans use to navigate difficult discussions. Though researchers are only beginning to explore this topic, recent findings indicate that, as with human-human conflict, when robots employ politeness strategies, humans' negative responses to disagreement are reduced [12] and perceptions of the robots are improved [13]. This study extends recent work on difficult human-robot conversations, this time exploring the influence of robot blame attribution on human attitudes.

B. Blame Attribution

As people perceive the world around them, they actively infer the causes of events [14]. The manner in which people make these causal inferences, or "attributions," affects people's