A Computational Model for Cognitive Human-Robot Interaction: An Approach Based on Theory of Delegation

Filippo Cantucci
Institute of Cognitive Science and Technology, National Research Council of Italy (ISTC-CNR), Rome, Italy
filippo.cantucci@istc.cnr.it

Rino Falcone
Institute of Cognitive Science and Technology, National Research Council of Italy (ISTC-CNR), Rome, Italy
rino.falcone@istc.cnr.it

Abstract—In this paper we present a cognitive model that supports reasoning and decision making for socially adaptive task delegation and adoption. The model allows a robot to dynamically modulate its own level of collaborative autonomy, by restricting or expanding a received task delegation, on the basis of several contextual factors such as the needs of the other users involved in the interaction. We exploit principles underlying the theory of delegation, theory of mind and BDI agent modelling in order to build a decision-making system for real-world teaming between autonomous agents. The model has been developed using the JaCaMo framework, which provides support for implementing multi-agent systems and integrates different multi-agent programming dimensions. We tested our model in a specific domain on the humanoid robot Nao, widely adopted in human-robot interaction applications. The study established that the model provides the robot with the ability to modify its social autonomy and to handle the collaborative conflicts that can arise from its initiative to help the user beyond her/his request.

I. INTRODUCTION

In everyday life, humans cooperate with other humans in order to gain knowledge and to achieve and share goals, while following social norms; these norms are sometimes encoded as laws, sometimes as expectations. A primary research topic in cognitive human-robot interaction is the design of autonomous systems that can interact and cooperate proficiently with humans.
Indeed, social robots are becoming part of daily life and are present in a variety of environments, including hospitals [1], offices [2], schools [3], tourist facilities [4] and so on. In these contexts, robots have to coexist and collaborate with a wide spectrum of users who are not necessarily able (or willing) to adapt their interaction level to the kind a machine requires: users need to deal with artificial systems whose behavior is understandable and effective. To be effective, the interaction between humans and robots should consider not only the abilities of the robots but also the preferences of the humans [5]. Robots have to maintain, as much as possible, a natural and intelligent interaction with humans: they should modulate their level of support by interpreting both the contextual situation and the needs of the other agents involved in the cooperation [6], just as humans typically do when they interact with each other. Integrating these kinds of social skills into autonomous robots would naturally lead to a deeper relationship of trust between robots and humans. Several cognitive architectures have been proposed [7], [8], [9], each with the goal of simulating human cognitive and behavioral features at different levels of cognition: perception, learning, reasoning, planning, memory and so on. Along with the ability to autonomously elaborate contextual information, react to changes in the environment, and make decisions about the tasks they are expected to carry out with some level of proactiveness, robots should integrate the conceptual instruments necessary to transform their autonomy into social autonomy [10].

A. Problem and contribution

As claimed in [11], cooperation implies the definition of two complementary mental attitudes, task delegation and task adoption, linking collaborating agents. Delegation and adoption are two basic cognitive ingredients of any collaboration and organization.
The notion of autonomy in artificial agents should integrate different levels of task adoption. Indeed, after receiving a task delegated from the outside, artificial agents should exploit their knowledge about the environment, including the other agents interacting with them, to adjust their own decision: for example, by going beyond the delegated task, by (partially or completely) changing it, or by adopting just a sub-part of it because the context does not allow a complete achievement of the task. The theory of delegation should guide the design of the decision-making process of every robot that has to collaborate with humans in daily life.

In summary, the contribution of this research includes the development of a declarative, knowledge-oriented, plan-based computational model that relies on the principles defined in the theory of delegation. The proposed approach provides a robot with an internal representation of itself and of the actors involved in the interaction, each with their own beliefs, goals and plans. In particular, the model is a decision-making system in which the interaction between the robot and the user is reproduced. Once a user delegates a task to the robot, it can take its decision

Workshop "From Objects to Agents" (WOA 2019)
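The levels of task adoption described above (literal adoption, going beyond the request, adopting only a sub-part, or modifying the task) can be sketched as a toy decision rule. This is a minimal illustration under assumed names: the enum values, the context flags and the function `decide_adoption` are hypothetical and do not reflect the paper's actual JaCaMo implementation.

```python
from enum import Enum, auto

class AdoptionMode(Enum):
    """Illustrative levels of task adoption from delegation theory.
    Names are assumptions for this sketch, not the paper's own API."""
    LITERAL = auto()    # adopt exactly the delegated task
    OVER_HELP = auto()  # go beyond the request (add helpful sub-goals)
    SUB_HELP = auto()   # adopt only a feasible sub-part of the task
    MODIFIED = auto()   # (partially or completely) change the task

def decide_adoption(task_feasible: bool, extra_need_detected: bool,
                    partial_feasible: bool) -> AdoptionMode:
    """Toy rule: map coarse context flags onto an adoption level."""
    if task_feasible and extra_need_detected:
        return AdoptionMode.OVER_HELP   # expand the delegation
    if task_feasible:
        return AdoptionMode.LITERAL     # adopt the task as delegated
    if partial_feasible:
        return AdoptionMode.SUB_HELP    # restrict to an achievable part
    return AdoptionMode.MODIFIED        # context forces a changed task

# Example: the task is feasible and the robot also detects a further
# user need, so it expands the delegation (over-help).
print(decide_adoption(task_feasible=True, extra_need_detected=True,
                      partial_feasible=False).name)  # OVER_HELP
```

In a real BDI/JaCaMo system these flags would instead be beliefs derived from the robot's model of the context and of the user, but the branching structure conveys how the robot's collaborative autonomy can be restricted or expanded per delegation.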