Adding Speech to a Robotics Simulator

Graham Wilcock and Kristiina Jokinen

Abstract We present a demo showing different levels of emergent verbal behaviour that arise when speech is added to a robotics simulator. After showing examples of (silent) robot activities in the simulator, adding speech output enables the robot to give spoken explanations of its behaviour. Adding speech input allows the robot's movements to be guided by voice commands. In addition, the robot can modify its own verbal behaviour when asked to talk less or more. The robotics toolkit supports different behavioural paradigms, including finite state machines. The demo shows an example state-transition based spoken dialogue system implemented within the robotics framework. Other, more experimental combinations of speech and robot behaviours will also be shown.

1 Introduction

Human-robot interaction is an area on which much recent work has focussed. There are possibilities not only to demonstrate integrated technological platforms for various input and output modalities, but also to show the rich interaction capabilities that spoken dialogue offers as a means of interfacing between humans and computers.

In this paper we focus on human-robot interaction related to communication at the level of providing feedback on one's own actions. In particular, the robot needs to give explanations about where it is going and what it is doing. This kind of interaction is important in the context of "socially interactive robots" [2]. Robots of this type need to provide a natural interface for interacting with users. They need to adopt

Graham Wilcock, University of Helsinki, Finland, e-mail: graham.wilcock@helsinki.fi

Kristiina Jokinen, University of Helsinki, Finland, e-mail: kristiina.jokinen@helsinki.fi
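The state-transition based dialogue behaviour described in the abstract can be sketched roughly as follows. This is a minimal illustration under assumed state and command names, not the authors' implementation: each state maps voice commands to a next state plus a spoken explanation of the action, and the meta-commands "talk less" / "talk more" modify the robot's verbal behaviour without changing its movement.

```python
class DialogueFSM:
    """Sketch of a finite-state dialogue manager (hypothetical states/commands)."""

    # transitions[state][command] = (next_state, spoken_explanation)
    TRANSITIONS = {
        "idle": {
            "go forward": ("moving", "I am moving forward."),
            "explore": ("exploring", "I am starting to explore the room."),
        },
        "moving": {
            "stop": ("idle", "I have stopped."),
        },
        "exploring": {
            "stop": ("idle", "I have stopped exploring."),
        },
    }

    def __init__(self):
        self.state = "idle"
        self.verbose = True  # toggled by "talk less" / "talk more"

    def handle(self, command):
        # Meta-commands adjust verbal behaviour instead of movement.
        if command == "talk less":
            self.verbose = False
            return ""
        if command == "talk more":
            self.verbose = True
            return "OK, I will explain what I am doing."
        # Unknown commands keep the current state and produce a fallback reply.
        next_state, utterance = self.TRANSITIONS[self.state].get(
            command, (self.state, "Sorry, I cannot do that now."))
        self.state = next_state
        return utterance if self.verbose else ""


fsm = DialogueFSM()
print(fsm.handle("go forward"))  # robot explains its movement
fsm.handle("talk less")
print(fsm.handle("stop"))        # movement still happens, but silently
```

In a real robotics framework the transition actions would trigger motor commands and a text-to-speech call rather than returning a string; the table-driven structure is what the finite-state paradigm contributes.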