Identifying Task Engagement: Towards Personalised Interactions with Educational Robots

Lee J. Corrigan, School of Electronic, Electrical and Computer Engineering, University of Birmingham, United Kingdom, ljc228@bham.ac.uk
Christopher Peters, School of Computer Science and Communication, Royal Institute of Technology (KTH), Sweden, chpeters@kth.se
Ginevra Castellano, School of Electronic, Electrical and Computer Engineering, University of Birmingham, United Kingdom, g.castellano@bham.ac.uk

Abstract—The focus of this project is to design, develop and evaluate a new computational model for automatically detecting change in task engagement. This work will be applied to robotic tutors to enhance and support the learning experience, enabling timely pedagogical and empathic intervention. It is intended to advance the current state of the art by 1) exploring how to automatically detect engagement with a learning task, 2) designing and developing new approaches to machine learning for adaptive, platform-independent modelling and 3) evaluating its effectiveness for building and maintaining learner engagement across different tutor embodiments, for example a physical and a virtual embodiment.

Index Terms—engagement; human-robot interaction; robotic tutors; social robots

I. INTRODUCTION

The term engagement is often used in human-robot interaction (HRI) to describe the connection between the human and the robot during an interaction [1] [2] [3] [4]. In this work, however, the focus is to design, develop and evaluate a new computational model for automatically detecting and maintaining engagement with a learning task. Here, engagement with the learning task is considered to be characterised by elements of attention, concentration and enjoyment [5]. Imagine the typical classroom scenario where twenty-five or more children are being taught by a single teacher, so that each child benefits from only a fraction of the teacher's time and assistance.
Now imagine the same scenario, but this time each of the children is learning through an interactive touch-screen table (Figure 1), designed to be adaptive and supportive of the child throughout the learning experience. Attached to the table is a humanoid robotic tutor capable of both emotional and pedagogical intervention, applying teaching styles and strategies which are suitable for, and personalised to, each child. This could be the future of education: more one-to-one interactions in the classroom, and learning experiences which are tailored to the child, promoting their strengths and abilities whilst also working towards overcoming their weaknesses.

If only it were that simple. The problem is that, in addition to knowledge of the subject matter, human teachers have socio-emotional and empathic abilities which they can use to assess whether or not a child is engaged and whether they are showing adequate progress in the learning task. Replicating these traits within a robotic tutor requires extensive interdisciplinary research to recognise and understand the behavioural and contextual indicators of engagement.

This research will inform the development of a new computational model to automatically detect the learner's state of engagement and to distinguish how much is attributable to the task, as opposed to the social bond with the robot. This model will feed forward to parallel reasoning systems, providing an informative psychological view of the learner in terms of behavioural, emotional and cognitive state. This state, together with any pedagogical, socio-emotional and empathic interventions used to support and scaffold the interaction, will feed back into a personalised model of the learner, further enhancing its ability to predict, improve and maintain future states of engagement.
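The feed-forward/feed-back cycle described above can be illustrated with a minimal sketch. All class names, feature names and weights below are hypothetical assumptions for illustration only; they do not represent the project's actual model or feature set.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    """Personalised learner model, updated after each interaction cycle."""
    engagement_history: list = field(default_factory=list)

    def update(self, engagement: float, intervention: str) -> None:
        # Feed-back step: store the detected state and the intervention used.
        self.engagement_history.append((engagement, intervention))

def detect_engagement(features: dict) -> float:
    """Feed-forward step: map behavioural/contextual features to [0, 1].

    The features and weights here are illustrative placeholders.
    """
    weights = {"attention": 0.4, "task_progress": 0.4, "enjoyment": 0.2}
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

def choose_intervention(engagement: float) -> str:
    """Reasoning step: pick a supportive behaviour for the robot tutor."""
    return "empathic_support" if engagement < 0.5 else "continue_task"

# One cycle of the loop: detect, intervene, feed back into the learner model.
learner = LearnerModel()
features = {"attention": 0.3, "task_progress": 0.5, "enjoyment": 0.2}
e = detect_engagement(features)
learner.update(e, choose_intervention(e))
```

In practice the detection step would be a learned model over multi-modal sensor data rather than a fixed weighted sum, but the cycle of detection, intervention and personalised feedback is the same.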
The success of this project depends on three key aspects: 1) being able to automatically detect the learner's engagement with the task, 2) being able to adapt and balance, on-the-fly, the level of challenge, the perception of user control and the aesthetic/sensory appeal of the task [6] and 3) being able to trigger the correct behaviours and interventions in the robot to build and maintain engagement [7]. In this project, behavioural and contextual indicators such as the learner's affective state, progress within the task and touch-screen gestures [8] will be explored in fine detail.

II. RELATED WORK

Recent research in HRI has shown that social and task context play an important role in engagement [9], where engagement was successfully predicted from context logs. Additionally, recall performance was improved using an adaptive agent which monitored and restored student attention when engagement decreased [10], concluding that agents need to be able to measure and respond to user states if they are to be integrated into general HCI. Kapoor and Picard [11] used a multi-modal Gaussian process approach to classify interest in a learning scenario; results such as these suggest that multi-modal data fusion is a very promising direction.
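To make the fusion idea concrete, the sketch below shows simple decision-level fusion: each modality produces a posterior over the learner's interest state, and the posteriors are combined with a product rule and renormalised. The modality names and probability values are made-up examples, and this is not Kapoor and Picard's actual Gaussian-process formulation; it only illustrates the general principle of combining evidence from several channels.

```python
def fuse(posteriors: list) -> dict:
    """Combine per-modality class posteriors: p(c) is proportional to
    the product over modalities m of p_m(c), then renormalised."""
    classes = posteriors[0].keys()
    fused = {c: 1.0 for c in classes}
    for p in posteriors:
        for c in classes:
            fused[c] *= p[c]
    total = sum(fused.values())
    return {c: v / total for c, v in fused.items()}

# Two hypothetical modalities (e.g. facial features and task context),
# each giving a posterior over the learner's interest state:
face = {"interested": 0.7, "not_interested": 0.3}
context = {"interested": 0.6, "not_interested": 0.4}
print(fuse([face, context]))
```

The product rule assumes the modalities are conditionally independent given the class; richer schemes weight modalities by their reliability, which is closer in spirit to the Gaussian-process mixture used in [11].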