An Experience-Driven Robotic Assistant Acquiring Human Knowledge to Improve Haptic Cooperation

José Ramón Medina, Martin Lawitzky, Alexander Mörtl, Dongheui Lee, Sandra Hirche
Institute of Automatic Control Engineering, Technische Universität München, 80290 Munich, Germany
Email: {medina, ml, moertl, dhlee, hirche}@tum.de

Abstract— Physical cooperation with humans greatly enhances the capabilities of robotic systems when leaving standardized industrial settings. The novel cognition-enabled control framework presented in this paper enables a robotic assistant to enrich its own experience by acquiring human task knowledge during joint manipulation. Our robot incrementally learns semantic task structures during joint task execution using hierarchically clustered Hidden Markov Models. A semantic labeling of recognized task segments is acquired from the human partner through speech. After a small number of repetitions, the robot uses the anticipated task progress to generate a feed-forward set point for an admittance feedback control scheme. This paper describes the framework and its implementation on a mobile bi-manual platform. The evolution of the robot's task knowledge is presented and discussed. Finally, the cooperation quality is measured in terms of the robot's task contribution.

I. INTRODUCTION

As robots enter new domains and begin to provide close physical assistance to human workers, a strong need arises for the ability to learn semantic task knowledge from human co-workers. Pre-programming all possible interaction behaviors for all possible combinations of task goals is infeasible in the fairly unstructured settings of human manual work. Instead, in our opinion, a cognition-enabled robotic co-worker should implement a learning-by-doing strategy for physical interaction tasks. This implies that the robot starts as a rather passive pack mule guided by a human partner.
In order to exploit the naturally given cognitive capabilities of the human co-worker, a cognition-enabled robot observes the human task contribution in terms of physical signals and learns how to recreate the complementary patterns. Furthermore, the authors are convinced that a cognition-enabled robotic assistant should enrich its own experience by acquiring meaningful semantic labels through dialogue with human co-workers, as a basis for further improvement of linguo-haptic interaction. Neuroscientific findings support this view: in human-human settings, action reproduction is observed to close the loop around sensory-motor observation, imitation, and control through explicit communication [1]. As an example, in this work we address the problem of joint transportation of a bulky object, as illustrated in Fig. 1; the conceptual approach, however, is not limited to this application. This task is specifically challenging due to the tight physical coupling between human and machine, which serves as a channel for both energy and information exchange.

Fig. 1. Experimental scenario: Human and robot jointly carrying a bulky bumper to its mounting location during car restoration.

Caster-like partner behavior, in which the robot reactively compensates the object dynamics, is well suited for joint human-robot transport of bulky loads, as shown in [2] and [3]. However, such a follower strategy merely implements a trolley for heavier loads rather than an actual cooperation partner. While simple tasks can be fulfilled successfully this way, more complex tasks involving environmental constraints typically require an active contribution to the task by the robot [4]. Active robotic assistance also reduces the effort exerted by the human partner [5], [6], [7]. In order to plan the next robot action in assistance of the human, the next human action needs to be predicted.
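The reactive follower behavior discussed above is commonly realized with admittance control, where the measured human interaction force drives a virtual mass-damper system whose output serves as the robot's velocity set point. The following minimal one-dimensional sketch illustrates the idea; the virtual mass and damping values are illustrative assumptions, not parameters from this paper, and the feed-forward term `f_ff` merely hints at how a learned contribution could turn the passive follower into an active partner.

```python
# Minimal 1-D admittance control sketch: the measured human force f_ext
# drives a virtual mass-damper, yielding a velocity set point for the robot.
# M (virtual mass) and D (virtual damping) are hypothetical example values.

def admittance_step(v, f_ext, f_ff=0.0, M=10.0, D=20.0, dt=0.001):
    """One Euler integration step of M*dv/dt + D*v = f_ext + f_ff.

    With f_ff = 0 the robot is a purely passive follower (the 'trolley'
    behavior); a learned feed-forward force would let it contribute
    actively to the task.
    """
    a = (f_ext + f_ff - D * v) / M
    return v + a * dt

# Example: a constant 5 N push; the velocity converges toward f/D = 0.25 m/s.
v = 0.0
for _ in range(5000):
    v = admittance_step(v, f_ext=5.0)
print(round(v, 3))
```

The steady-state velocity is determined by the damping alone (v = f/D), while the virtual mass sets how quickly the robot yields to the applied force.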
For simple motor tasks, findings from human motor behavior are used for movement prediction, for example the well-known minimum-jerk velocity profile for point-to-point movements [5], [8]. However, for more complex tasks no such analytical models are currently available. In consequence, learning-from-observation approaches have become a favored method to address these challenges. We have investigated the theoretical background on task dynamics in joint manipulation and incremental learning for physical human-robot interaction [4], [9]. Remaining open questions include the implementation of an actively contributing robotic partner in joint human-robot