Teaching Nullspace Constraints in Physical Human-Robot Interaction using Reservoir Computing

Arne Nordmann, Christian Emmerich, Stefan Ruether, Andre Lemme, Sebastian Wrede and Jochen Steil

Abstract— A major goal of current robotics research is to enable robots to become co-workers that collaborate with humans efficiently and adapt to changing environments or workflows. We present an approach that combines the physical interaction capabilities of compliant robots with data-driven, model-free learning in a coherent system, making fast reconfiguration of redundant robots feasible. Users with no particular robotics knowledge can perform this task in physical interaction with the compliant robot, for example to reconfigure a work cell after changes in the environment. For fast and efficient learning of the respective null-space constraints, a reservoir neural network is employed. It is embedded in the motion controller of the system, hence allowing for execution of arbitrary motions in task space. We describe the training, exploration, and control architecture of the system, and present an evaluation on the KUKA Light-Weight Robot. Our results show that the learned model solves the redundancy resolution problem under the given constraints with sufficient accuracy and generalizes to generate valid joint-space trajectories even in untrained areas of the workspace.

I. INTRODUCTION

The configuration problem of advanced robotic systems is regarded as one of the major challenges of current robotics research [1]. The potential efficiency gains of automation technology are often reduced by the high costs of reconfiguring robot systems when manufacturing processes are adapted. Configuration changes incur high costs for manual re-programming and testing of robotic systems and their accompanying software.
Future applications will employ robotic systems for tasks such as multi-part assembly and will use several non-standard, e.g., redundant, manipulators or other specific tools in close HRI scenarios [2], resulting in even more complex adaptation processes. While redundant manipulators provide high flexibility for the realization of complex scenarios, e.g., in car manufacturing [3], the gained flexibility induces additional challenges such as solving the inverse kinematics with redundancy resolution in joint space for arbitrary movements.

Analytic approaches for solving the inverse kinematics under specific constraints [4], [5] usually require expert knowledge, the availability of rigorous kinematic models of the robot, and tedious manual programming. In order to minimize manual programming effort, we present a programming-by-demonstration approach for redundancy resolution based on physical human-robot interaction [6] (pHRI), neural learning, and a hybrid control scheme.

A. Nordmann, C. Emmerich, S. Ruether, A. Lemme, S. Wrede and J. Steil are with the Research Institute for Cognition and Robotics, Bielefeld University, P.O. Box 100131, Bielefeld, Germany [anordman,jsteil]@cor-lab.uni-bielefeld.de

Fig. 1. During execution of trajectories, the FlexIRob system respects null-space constraints taught in physical human-robot interaction.

Our approach allows a single human tutor to efficiently teach a compliant robot several null-space constraints in different areas of the workspace. Users with no particular robotics knowledge can perform this task in physical interaction with the robot system in a few minutes by recording a small set of training examples. During an exploration phase, training data is recorded, which serves as input to a purely data-driven learning algorithm. A single recurrent neural network encodes the inverse kinematics mapping with null-space constraints.
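For contrast with the learned approach, the classical analytic redundancy resolution mentioned above can be sketched with the standard null-space projection of the Jacobian pseudoinverse. The following is a minimal numpy sketch for a planar 3-link arm; the link lengths, preferred posture, and task velocity are illustrative assumptions, not the paper's setup:

```python
# Classical redundancy resolution (textbook formulation, not the paper's
# learned controller): q_dot = J+ x_dot + (I - J+ J) q0_dot, where the
# second term moves the arm in the null space of the task Jacobian
# without disturbing the end-effector motion.
import numpy as np

L = np.array([1.0, 0.8, 0.5])          # link lengths (assumed)

def jacobian(q):
    """Analytic 2x3 task Jacobian of the planar 3-link forward kinematics."""
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        # joint i affects links i..2
        J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
        J[1, i] =  np.sum(L[i:] * np.cos(angles[i:]))
    return J

q = np.array([0.3, 0.4, 0.2])          # current configuration
q_pref = np.array([0.0, 0.5, 0.5])     # preferred posture (the constraint)
x_dot = np.array([0.05, 0.0])          # desired task-space velocity

J = jacobian(q)
J_pinv = np.linalg.pinv(J)
N = np.eye(3) - J_pinv @ J             # null-space projector
q0_dot = 1.0 * (q_pref - q)            # gradient of secondary objective
q_dot = J_pinv @ x_dot + N @ q0_dot    # redundancy-resolved joint velocity

# The null-space term leaves the task untouched: J (N q0_dot) = 0
assert np.allclose(J @ N @ q0_dot, 0.0, atol=1e-10)
```

Note how the secondary objective (here, attraction to a preferred posture) must be hand-coded; the paper's approach instead learns such constraints from physical demonstration.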
After training, the learned inverse kinematics controller is embedded in a hybrid control architecture, allowing for execution of arbitrary motions in task space while respecting the learned null-space constraints. The approach is developed and evaluated in a robotics system concept termed FlexIRob (Flexible Interactive Robot) using a recent version of the KUKA Light-Weight Robot [7] (LWR IV).

The remainder of the paper is structured as follows: Section II presents an overview of the interaction work-flow facilitating fast reconfiguration. Subsequently, Section III introduces the applied learning algorithm, which allows the encoding of the inverse kinematics mapping with null-space constraints in a single recurrent neural network. Section IV describes the hybrid robot control and software architecture, while Section V presents results from task-oriented evaluation experiments. Section VI then reviews related work on redundancy resolution, physical interaction, and learning for constrained movement generation and adaptation, before Section VII concludes this contribution.

II. INTERACTION MODEL

In the following, we give an overview of the fundamental modus operandi, the integration of physical human-robot interaction, and the work-flow of the entire reconfiguration procedure. We point out that the entire reconfiguration procedure can be done within a few minutes (depending on the number of different areas in the workspace). It is managed