Social Gaze Behavior for Face to Face Human-Robot Interaction

Hagen Lehmann 1 , Frank Broz 2 , Reza Ahmadzadeh 3 , Alessio Del Bue 1 , Lorenzo Natale 1 and Giorgio Metta 1

Abstract— This short paper discusses the importance of human-like gaze behaviors for humanoid robots with physical eyes. It gives a brief overview of the functions gaze fulfills in human-human interactions from a social evolutionary perspective. In the second part we describe how human-like gaze has been implemented in robots in the field of social robotics in recent years. The last part of the paper briefly introduces an architecture for a conversational gaze controller (CGC). The parameters of this gaze controller are based on the analysis of a large corpus of gaze tracking data collected during human-human conversations. We describe our experimental approach for obtaining this data and discuss the implications of endowing robots with human-like behaviors and enabling them to engage in face to face social interactions.

I. INTRODUCTION

The eyes are the most important nonverbal communication channel humans use in face to face interactions. The role of the human eye in communication has a long evolutionary history in human phylogenetic development and has most likely shaped its physiological structure. No other primate species has as visible a sclera as humans [9], [10]. It has been argued that the high visibility of the human sclera evolved as an additional source of information about the intentions of social interaction partners within a group [19]. On the one hand, communicating with glances instead of pointing gestures can be very advantageous in situations in which stealth is needed; on the other hand, the increased visibility of the sclera made involuntary and subconscious eye movements visible to other members of human groups.
The ability to recognize and correctly interpret these movements and the relevant emotional states [1] facilitated the human capacity to live in large and complex social groups and to understand each other as intentional agents. This enabled us to interact empathically with one another. Without the appropriate gaze information we feel uncomfortable during social interactions with others: being stared at makes us feel uncomfortable very quickly, and not being looked in the eyes while being spoken to makes us feel nervous. During conversations humans jointly regulate their eye contact. This is called mutual gaze and has a variety of social functions. It helps to regulate turn taking in conversations [8], transmits social dominance [6], and is an expression of the interaction of the different personality traits of the persons involved in a conversation [8]. The ability to interpret, follow and use the gaze of others can be found in humans at a very young age. It has been shown in many studies that human infants can follow eye gaze to discover a hidden reward from the age of 12-18 months (e.g. [5]).

Robots are increasingly moving into the public domain. This will require them to be able to communicate intuitively and naturalistically with their human users. One of the main intended functions of this new generation of service robots will be social interaction.

1 Hagen Lehmann, Alessio Del Bue, Lorenzo Natale and Giorgio Metta are with the iCub Facility, Istituto Italiano di Tecnologia, Via Morego 30, 16163 Genoa, Italia hagen.lehmann@iit.it, alessio.delbue@iit.it, lorenzo.natale@iit.it, giorgio.metta@iit.it
2 Frank Broz is with the Department of Mathematical and Computer Science, Heriot-Watt University, Edinburgh, EH14 4AS, UK f.broz@hw.ac.uk
3 Reza Ahmadzadeh is with the Institute for Robotics and Intelligent Machines, College of Computing, Georgia Institute of Technology reza.ahmadzadeh@gatech.edu
Since humans rely heavily on both head and eye gaze information from their social partners during social interactions, we propose that it is necessary for an intuitive and comfortable face to face interaction with a robot to take the social functions of the human eye into consideration. We will briefly discuss how eye gaze is currently used in human-robot interaction and describe our approach to implementing gaze and our conversational gaze controller (CGC) in more detail. In the final part of the paper we will discuss the implications of the use of a naturalistic CGC, as well as our future plans for integrating the CGC into a more holistic and reactive social interaction architecture.

II. HUMAN-LIKE GAZE IN SOCIAL ROBOTICS

The importance of incorporating gaze into robotic behavior was recognized early on in the research field of social robotics, and different attempts have been made to simulate human-like gaze (e.g. [3], [14]). There are several obstacles that need to be overcome in order to succeed in this endeavor. The existence of human-like features in a robot creates expectations in the user about the robot's movements that can, if not fulfilled, make the robot awkward and uncomfortable to interact with [12]. This is especially true for parts that are crucial for naturalistic human-robot communication, such as robot eyes. Several variables have to be taken into consideration in order to achieve natural-looking robotic gaze. The speed at which the eyes move is as important as the frequency with which the robot switches from one focal point to the next. In the case of face to face interaction these focal points need to be the different facial features of the interlocutor. A question that needs to be answered is whether it is necessary to simulate saccadic movements to achieve a naturalistic appearance of the robotic eye movements.
It also needs to be explored how important an exact simulation of vergence is for direct face to face interaction. The movement of the head during and in between gaze changes is another variable that needs to be taken into consideration.

9th International Workshop on Human-Friendly Robotics, 29 & 30 September 2016, Genoa, Italy. HFR2016.wordpress.com
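The interplay of these variables, the choice of focal point among the interlocutor's facial features, the frequency of switching, and the dwell time of each fixation, can be illustrated with a minimal sketch of a conversational gaze policy. Everything in this sketch is a hypothetical placeholder: the target names, the aversion probability, and the exponential dwell-time distributions are assumptions for illustration, not the corpus-derived parameters of the actual CGC.

```python
import random

# Hypothetical gaze targets on the interlocutor's face (illustrative only;
# a real controller would use detected facial-feature positions).
FACE_TARGETS = ["left_eye", "right_eye", "mouth"]

class GazeSketch:
    """Minimal sketch of a conversational gaze policy.

    Each call to next_fixation() either keeps the gaze on a facial
    feature (mutual gaze) or averts it, with dwell times drawn from
    exponential distributions. All parameter values are assumed
    defaults, not the corpus-estimated CGC parameters.
    """

    def __init__(self, p_avert=0.3, mean_mutual=2.0, mean_avert=1.0, seed=None):
        self.rng = random.Random(seed)
        self.p_avert = p_avert          # assumed probability of a gaze aversion
        self.mean_mutual = mean_mutual  # assumed mean fixation on the face (s)
        self.mean_avert = mean_avert    # assumed mean aversion duration (s)

    def next_fixation(self):
        """Return (target, dwell_seconds) for the next fixation."""
        if self.rng.random() < self.p_avert:
            # Gaze aversion: look away from the partner's face for a while.
            target = "away"
            dwell = self.rng.expovariate(1.0 / self.mean_avert)
        else:
            # Mutual gaze: switch the focal point between facial features.
            target = self.rng.choice(FACE_TARGETS)
            dwell = self.rng.expovariate(1.0 / self.mean_mutual)
        return target, dwell

if __name__ == "__main__":
    gaze = GazeSketch(seed=42)
    for _ in range(5):
        print(gaze.next_fixation())
```

In a complete system the fixations produced by such a policy would be handed to the robot's oculomotor layer, which contributes the remaining variables discussed above: saccade velocity profiles, vergence, and the coordination of head and eye movement during gaze shifts.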