Modelling Gaze Behavior for Conversational Agents

Catherine Pelachaud^1 and Massimo Bilvi^2

^1 IUT of Montreuil, University of Paris 8, LINC - Paragraphe
c.pelachaud@iut.univ-paris8.fr
http://www.iut.univ-paris8.fr/~pelachaud
^2 Department of Computer and Systems Science, University of Rome "La Sapienza"

Abstract. In this paper we propose an eye gaze model for an embodied conversational agent that embeds information on communicative functions as well as statistical information on gaze patterns. The latter has been derived from analytic studies of an annotated video corpus of conversation dyads. We aim at generating different gaze behaviors to simulate several personalized gaze habits of an embodied conversational agent.

1 Introduction

Toward the creation of more friendly user interfaces, embodied conversational agents (ECAs) are receiving a lot of attention. To be more believable, these agents should be endowed with communicative and expressive capacities similar to those exhibited by humans (speech, gestures, facial expressions, eye gaze, etc.). In the context of the EU project MagiCster^3, we aim at building a prototype of a conversational communication interface that makes use of non-verbal signals when delivering information, in order to achieve effective and natural communication with humans or artificial agents. To this aim, we create an ECA, Greta, that incorporates communicative conversational aspects. To determine speech-accompanying non-verbal behaviors, the system relies on a taxonomy of communicative functions proposed by [14]. A communicative function is defined as a pair (meaning, signal), where the meaning corresponds to the communicative value the agent wants to communicate and the signal to the behavior used to convey this meaning. To control the agent we use a representation language, called 'Affective Presentation Markup Language' (APML), whose tags are the communicative functions [13].
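The (meaning, signal) pairing described above can be sketched as a simple lookup. This is a minimal illustrative sketch only; the tag names and the meaning-to-signal table are assumptions for exposition, not the actual APML specification or the Greta system's implementation.

```python
from dataclasses import dataclass


@dataclass
class CommunicativeFunction:
    """A communicative function as a (meaning, signal) pair."""
    meaning: str  # the communicative value the agent wants to convey
    signal: str   # the nonverbal behavior used to convey that meaning


# Hypothetical meaning-to-signal table; the entries are illustrative,
# not taken from the APML specification.
MEANING_TO_SIGNAL = {
    "emphasis": "raised eyebrows",
    "affirmation": "head nod",
    "deictic": "gaze toward listener",
}


def instantiate(meaning: str) -> CommunicativeFunction:
    """Instantiate a communicative meaning into its paired signal."""
    return CommunicativeFunction(meaning, MEANING_TO_SIGNAL[meaning])


print(instantiate("emphasis").signal)
```

A full system would instantiate every tag in the APML-annotated input text this way, producing the stream of nonverbal signals that accompanies the speech.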
Our system takes as input the text (tagged with APML) the agent has to say. The system instantiates the communicative functions into the appropriate signals. The output of the system is

^3 IST project IST-1999-29078, partners: University of Edinburgh, Division of Informatics; DFKI, Intelligent User Interfaces Department; Swedish Institute of Computer Science; University of Bari, Dipartimento di Informatica; University of Rome, Dipartimento di Informatica e Sistemistica; AvartarME