Generic natural language command interpretation in ontology-based dialogue systems

Laurent Mazuel and Nicolas Sabouret
Laboratoire d'informatique de Paris 6 - LIP6, France
{laurent.mazuel, nicolas.sabouret}@lip6.fr

Abstract

This paper presents a general architecture towards a more generic approach to conversational agents. Our architecture contains generic (in the sense of application-independent) natural language (NL) modules that rely on ontologies for command interpretation. We focus on the presentation of the event generator and dialogue manager modules, which use a bottom-up approach to match the user's command with the set of currently possible actions.

1 Introduction

Recent work on Embodied Conversational Agents (ECA) [7] and, more generally, conversational systems [2] has shown that natural language (NL) interaction is a crucial step towards more natural human-agent interaction. However, the approaches chosen in ECA mostly rely on ad-hoc pattern matching without semantic analysis [1]. The dialogue system community, on the other hand, proposes to use ontologies to improve genericity [8, 11]. The main idea behind the use of ontologies is to specify generic algorithms that depend only on the ontology formalism. Applications then depend only on the ontology and the application-specific problem solver. Systems like [8, 12] use the ontology to parameterize a generic parser. However, in such systems, the ontology formalism itself is ad-hoc: it strongly depends on the application type and does not allow generic knowledge representation. Moreover, these ontologies describe the application model as well as the application actions. Our claim is that it should be possible to extract the meaning of actions from the code itself. The ontology is then no longer an application descriptor: it only provides the complementary semantic information on relations between the application concepts (which is the initial role of ontologies).
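To make the role of the ontology as "glue" concrete, the following toy sketch matches a command against candidate actions through a concept hierarchy. The ontology, the action names, and the distance-based scoring are all illustrative assumptions, not the actual algorithm of this paper.

```python
# Hypothetical sketch: bottom-up matching of a user command against
# actions via an ontology. All names and the scoring scheme are
# illustrative assumptions.

# Toy ontology: child concept -> parent concept.
ONTOLOGY = {
    "lamp": "light",
    "bulb": "light",
    "light": "device",
    "heater": "device",
}

def generalize(concept):
    """Yield the concept and then each of its ancestors in the ontology."""
    while concept is not None:
        yield concept
        concept = ONTOLOGY.get(concept)

def semantic_match(word, concept):
    """1.0 for an exact match, decreasing with ontological distance."""
    for depth, ancestor in enumerate(generalize(word)):
        if ancestor == concept:
            return 1.0 / (1 + depth)
    return 0.0

def score_action(command_words, action_concepts):
    """Score an action by the best ontology match for each of its concepts."""
    return sum(
        max(semantic_match(w, c) for w in command_words)
        for c in action_concepts
    )

def interpret(command, actions):
    """Rank the currently possible actions against the user's command."""
    words = command.lower().split()
    return max(actions, key=lambda a: score_action(words, actions[a]))

# Actions described by the concepts they involve (an assumption: in the
# paper these are extracted from the agent's code).
actions = {
    "switch_light_on": ["light", "on"],
    "start_heater": ["heater", "on"],
}
print(interpret("please switch the lamp on", actions))  # → switch_light_on
```

Here "lamp" matches the concept "light" only through the ontology (one generalization step), which is exactly the kind of semantic relation a plain pattern matcher cannot exploit.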
Moreover, systems that use generic knowledge representation (e.g. [11]) rely on application-dependent parsers. Such a parser uses the structure of the ontology to understand over-specified or under-specified commands like "switch the light on" (the system will propose the different possible locations to light up).

This paper focuses on command interpretation for intelligent agents. We propose a generic NL system based on a domain ontology and agents capable of introspection. The system extracts the set of possible actions from the agent's code and matches these actions with the user's command, using the ontology as glue. In addition, a score-based dialogue manager (like [9]) deals with misunderstood or indefinite commands.

Our paper is organised as follows. The second section gives a general overview of our agent model. The third section presents the natural language processing algorithm we use: we first present the parser, then detail our algorithm for command interpretation, and finally present our dialogue manager, which deals with clarification. Section 4 concludes the paper.

2 Overview of our model

Our aim is to be able to program cognitive agents that can be controlled by natural language commands and that are capable of reasoning about their own actions, so as to answer questions about their behaviour and their activity. To this purpose, we use a specific language that gives access at runtime to the description of the agent's internal state and actions.

2.1 The VDL model

Our agents are programmed using the View Design Language (VDL)¹. The VDL model is based on XML tree rewriting: the agent's description is an XML tree whose nodes represent either data or actions. The agent rewrites the tree at every execution step according to these action elements. This model allows agents to access at runtime the description of their actions and to reason about it for

¹ http://www-poleia.lip6.fr/~sabouret/demos
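The XML-tree-rewriting idea can be sketched in a few lines. The element names, the precondition encoding, and the rewrite rule below are illustrative assumptions, not actual VDL syntax; the point is only that both the data and the actions live in one tree that the agent can inspect and rewrite.

```python
# Minimal sketch of VDL-style XML tree rewriting (illustrative: the
# element names and rewrite rule are assumptions, not actual VDL syntax).
import xml.etree.ElementTree as ET

AGENT = ET.fromstring("""
<agent>
  <data><light state="off"/></data>
  <action name="switch_light_on">
    <pre state="off"/>
    <post state="on"/>
  </action>
</agent>
""")

def possible_actions(agent):
    """Introspection: actions whose precondition matches the current data."""
    light = agent.find("data/light")
    return [a for a in agent.findall("action")
            if a.find("pre").get("state") == light.get("state")]

def step(agent):
    """One execution step: every applicable action rewrites the data nodes."""
    light = agent.find("data/light")
    for action in possible_actions(agent):
        light.set("state", action.find("post").get("state"))

print([a.get("name") for a in possible_actions(AGENT)])  # → ['switch_light_on']
step(AGENT)
print(AGENT.find("data/light").get("state"))  # → on
```

Because the actions are ordinary nodes of the same tree, the set of currently possible actions can be computed by inspection at runtime, which is the introspection capability the matching algorithm relies on.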