Toward Task-Based Mental Models of Human-Robot Teaming: A Bayesian Approach

Michael A. Goodrich and Daqing Yi
Brigham Young University, Provo, UT, 84602, USA
mike@cs.byu.edu, daqing.yi@byu.edu

Abstract. We consider a set of team-based information tasks, meaning that the team's goals are to choose behaviors that provide or enhance information available to the team. These information tasks occur across a region of space and must be performed for a period of time. We present a Bayesian model for (a) how information flows in the world and (b) how information is altered in the world by the location and perceptions of both humans and robots. Building from this model, we specify the requirements for a robot's computational mental model of the task and the human teammate, including the need to understand where and how the human processes information in the world. The robot can use this mental model to select its behaviors to support the team objective, subject to a set of mission constraints.

1 Introduction

In complex, rapidly evolving team settings in which a robot fulfills a role, the robot needs sufficient autonomy to allow its human teammates to be free to direct their attention to a wider range of mission-relevant tasks that may or may not involve the robot. In contrast to many prior applications in which the robot was either teleoperated or managed under strictly supervisory control [1], recent advances in robot technologies and autonomy algorithms are making it feasible to create teams in which a robot acts as a teammate rather than a tool [2]. In this team-centered approach, both humans and robots can take on roles that match their strengths. Properly designed, this can improve the performance of the entire team. This idea has already been applied to reform human-robot interaction in many areas, such as object identification and collaborative task performance [3].
In this paper, we adopt the notion of collaboration, operationally defined as the process of utilizing shared resources (communication, space, time) in the presence of asymmetric goals, asymmetric information, and asymmetric abilities, as illustrated in Fig. 1. The word collaboration suggests that there are both overlaps and differences between the goals, information, and abilities of the agents involved. Colloquially, collaboration can happen when everyone has something unique to offer and something unique to gain, but there is some benefit to each individual if activity is correlated. In a human-robot team, the asymmetries in abilities and information mostly arise from natural differences in the agents' sensors and actuators. Additionally, an agent may exhibit ability and information asymmetry in different states of interacting with

R. Shumaker (Ed.): VAMR/HCII 2013, Part I, LNCS 8021, pp. 267–276, 2013.
© Springer-Verlag Berlin Heidelberg 2013
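The three asymmetries in this operational definition can be made concrete with a small sketch. This is an illustration only, not a model from the paper: the agent names, goal sets, and attribute values below are hypothetical, chosen simply to show how shared items enable coordination while the set differences capture what each teammate uniquely offers.

```python
# Hypothetical illustration of collaboration asymmetries (attribute values
# are invented for this sketch): each agent holds its own goals, information,
# and abilities; intersections are overlaps, set differences are asymmetries.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    goals: set
    information: set
    abilities: set

human = Agent("human",
              goals={"find victim", "stay safe"},
              information={"terrain map"},
              abilities={"judgment", "speech"})
robot = Agent("robot",
              goals={"find victim", "conserve battery"},
              information={"camera feed"},
              abilities={"flight", "thermal sensing"})

def asymmetry(a, b):
    """For each attribute, report the shared items and the items
    unique to each agent (the overlaps and differences in the text)."""
    return {attr: {"shared": getattr(a, attr) & getattr(b, attr),
                   "only_" + a.name: getattr(a, attr) - getattr(b, attr),
                   "only_" + b.name: getattr(b, attr) - getattr(a, attr)}
            for attr in ("goals", "information", "abilities")}

report = asymmetry(human, robot)
print(report["goals"]["shared"])  # the overlapping team goal: {'find victim'}
```

In this sketch, collaboration is worthwhile precisely because `shared` is nonempty for goals (a common objective exists) while the `only_*` sets are nonempty for information and abilities (each agent contributes something the other lacks).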