Impact of Robot Failures and Feedback on Real-Time Trust

Munjal Desai 1, Poornima Kaniarasu 2, Mikhail Medvedev 1, Aaron Steinfeld 2, and Holly Yanco 1
1 University of Massachusetts Lowell
2 Carnegie Mellon University
email: {mdesai, mmedvede, holly}@cs.uml.edu
email: {kpoornima, steinfeld}@cmu.edu

Abstract—Prior work on human trust of autonomous robots suggests that the timing of reliability drops impacts trust and control allocation strategies. However, trust is traditionally measured post-run, thereby masking real-time changes in trust, reducing sensitivity to factors like inertia, and subjecting the measure to biases like the primacy-recency effect. Likewise, little is known about how feedback of robot confidence interacts in real time with trust and control allocation strategies. An experiment examining these issues showed that trust loss due to early reliability drops is masked by traditional post-run measures, that trust demonstrates inertia, and that feedback alters allocation strategies independently of trust. The implications of specific findings for the development of trust models and for robot design are also discussed.

I. INTRODUCTION

During an operator's interaction with an autonomous system, a key issue is how the operator uses the available autonomy levels, often referred to as the control allocation strategy [1]. Inappropriate control allocation strategies can result in over-reliance or under-reliance on the automated system [2]. One of the known contributing factors to improper reliance on automation is trust (e.g., [3], [4]). While it is difficult to conclusively state the root cause, over-reliance or under-reliance on automated systems due to miscalibrated trust can often be inferred from incident reports in the aviation industry. For example, while using the flight management system (FMS) to navigate to Cali, Colombia, the crew of American Airlines Flight 965 entered the first few characters for their destination. Accustomed to selecting the first option, the crew selected a destination that happened to be a few miles behind them rather than the intended destination, which was not the top option in this case. The FMS turned the plane around; the plane crashed into a mountain shortly afterwards [5].

For decades, researchers in the human-automation interaction field have investigated the control allocation strategies of operators under different circumstances (e.g., [6], [7], [8]) and observed how people use, misuse, or disuse automation (e.g., [2], [9], [10]). Specifically, the influence of several factors, including reliability, on control allocation has been studied by many researchers; a detailed survey was conducted by Wickens and Xu [11]. Researchers have also investigated factors that influence trust and, ultimately, reliance on automated systems (e.g., [3], [4], [7]) in order to prevent accidents and improve the performance of automated systems. Factors such as self-confidence (e.g., [12], [13], [1]), reliability (e.g., [14], [15], [16]), and risk (e.g., [17], [18]) are known to impact an operator's trust of the system. Additional factors such as task complexity, workload, and system accuracy have also been hypothesized as contributing factors [7]. Similar attempts to understand how robot operators trust and utilize automated behaviors of robots have been made in the field of human-robot interaction (HRI) (for a survey and analysis of recent research, see [19]).
In our prior work, we examined the impact of changing reliability on an operator's trust and control allocation strategy [16]. A key contribution of that research was identifying how the timing of failures of the autonomous behaviors impacts operator trust and control allocation. However, the experimental methodology used in that research made it impossible to examine how trust evolved during a participant's interaction with a remote robot system and how trust was affected at the moment a robot failure occurred. To investigate the evolution of trust and the impact of varying reliability on real-time trust, we modified the experimental methodology and conducted the research studies described in this paper.

While it is important to understand trust and control allocation strategies, it is equally important to find means to influence them, should the need arise. Prior research has provided participants with information about the results of past decisions [20]; however, to our knowledge, no research has investigated the impact of providing information about an automated system's confidence in its own sensors. Therefore, as part of this research, we also investigated the impact of providing feedback conveying this confidence information on trust and control allocation.

Our long-term goal is to understand how different factors impact trust and control allocation and, based on this information, to build a model that can predict an operator's current level of trust so that the system can adjust itself in ways that increase trust and prevent inappropriate use of the autonomy levels. Towards this end, we created a set of research questions that we needed to address:

• Q1: How does the timing of periods of low reliability impact real-time trust? Our prior experiments suggest that trust in the robot system is influenced by whether a period of low reliability occurs at the beginning or the end of a run (trust in a robot, as measured using a trust scale after the run is complete, drops if the robot is unreliable near the end of a run [16]). We designed this study to investigate how real-time trust is influenced by the timing of reliability drops.