He Said, She Said, It Said: Effects of robot group membership and human authority on people’s willingness to follow their instructions

Catherine E. Sembroski, Marlena R. Fraune, Member, IEEE, and Selma Šabanović, Member, IEEE

*Research supported by the Computing Research Association – Women (CRA-W) through the Collaborative Research Experience for Undergraduates (CREU) program.
C. Sembroski is an undergraduate in the Informatics and Computer Science Department, Indiana University, IN 47408 USA (phone: 1-317-366-5630; email: csembros@indiana.edu).
M. R. Fraune is a PhD candidate in the Cognitive Science Program, Indiana University, IN 47408 USA (email: mfraune@indiana.edu).
S. Šabanović is an Associate Professor in the Informatics and Computer Science Department, Indiana University, IN 47408 USA (email: selmas@indiana.edu).

Abstract— Research in HRI indicates that people follow a robot’s instructions even when those instructions are incorrect. However, when a robot’s instructions or requests contradict those of a human (e.g., an authoritative experimenter), people obey the human instead. This might be due to the experimenter’s perceived ingroup status, or to their higher presumed authority compared to the robot. This study manipulated experimenter authority (high, low) and robot group membership (ingroup, neutral) to test how they affected responses to conflicting orders from the two agents, depending on the importance of the request (big, small). While there was no main effect of group membership or authority on most participant behavior, when experimenter authority was low and the robot was an ingroup member, participants defied the experimenter’s instruction to turn off the robot at the end of the experiment, following the robot’s plea instead. Further, request importance affected participant behavior: participants typically followed the robot’s low-importance requests (e.g., moving from one chair to another), but not its high-importance requests (e.g., how to perform a simulated task of diagnosing and talking to patients).

I. INTRODUCTION

With the application of intelligent and interactive technologies in increasingly diverse spheres of everyday life, autonomous robots will more often make suggestions that directly affect people’s lives (e.g., assisting with medical diagnosis, as in IBM’s Watson Health project). Inevitably, situations will arise in which robots’ and humans’ suggestions conflict, putting people in the position of deciding which to follow. A prior lab study showed that when a researcher’s and a robot’s requests conflicted, participants followed the researcher’s [1]. However, studies have not yet addressed how the social circumstances of human-robot interaction (HRI) affect whether people follow a robot’s instructions over a human’s. In this paper, we focus on the experimenter’s perceived authority and the robot’s group membership as social factors that might affect whose suggestions participants choose to follow.

Robots’ perceived group membership affects participant attitudes and behavior toward them. As in human-human interaction, people react more positively to ingroup than to outgroup robots, such as by evaluating ingroup members (e.g., robots that shared nationality with participants) as more anthropomorphic (i.e., humanlike, warm) than outgroup members [2]. In human interactions, people more readily follow advice from ingroup than from outgroup members [3, 4]. Therefore, people may more readily take advice from ingroup than from neutral or outgroup robots. While prior HRI studies compare participant reactions to ingroup robots and to outgroup robots, none compare reactions to ingroup and neutral robots.
This is problematic because studies often introduce robots neutrally, allowing participants to perceive them as ingroup, outgroup, or neutral parties on their own.

This study examines the extent to which participants follow a robot’s suggestions or requests when they conflict with those of a human with high authority: the researcher. These conflicts begin with small requests (e.g., which chair to sit in) and end with more important instructions (e.g., the researcher tells participants to turn off the robot, while the robot pleads to be kept on). Specifically, we examine how a researcher’s level of authority and a robot’s group membership affect participant obedience toward, and perceptions of, the robot versus the researcher.

II. BACKGROUND

A. Authority and Robots

The literature indicates that participants often follow experimenter instructions even when those instructions are distressing or morally questionable. For example, in the Milgram shock experiment, two-thirds of participants followed an experimenter’s instructions to apply increasingly intense shocks to another person, despite that person’s complaints and eventual lapse into non-responsiveness [5]. However, participant obedience dropped significantly when the experimenter had less legitimate authority: when an experimenter wearing a white lab coat was replaced by an “ordinary member of the public” in everyday clothes, participant obedience dropped from 65% to 20% [6].

HRI research suggests that people similarly over-trust a robot they perceive as knowledgeable, and may follow it into danger. In one study, all participants who interacted with an “Emergency Guide Robot” chose to follow the robot as it led them toward an exit in an emergency setting, even when the robot brought them to a room with no exit, and even when they had recently witnessed the robot’s poor performance in a navigation guidance task [7].

Previous research also shows that when an experimenter’s and a robot’s instructions conflict, participants typically follow the experimenter’s instructions [8]. The robot’s characteristics can, however, influence behavior. In Bartneck’s study, participants who were asked to turn off a