Mobile Teleoperation Interfaces for Domestic Service Robots

Max Schwarz, Jörg Stückler, and Sven Behnke
Autonomous Intelligent Systems, Computer Science Institute VI, University of Bonn, 53113 Bonn, Germany. max.schwarz@uni-bonn.de

Abstract— Domestic service robots are envisioned to provide assistance to persons in need of help with their activities of daily living. These tasks require a comprehensive set of perception, control, and planning skills, beyond the state of the art of autonomous robots. On the other hand, direct control of complex robots requires special equipment and the full attention of the operator. Hence, it is necessary to combine state-of-the-art autonomous capabilities with the intelligence of users in a complementary way. We report on handheld user interfaces for domestic service robots that allow for teleoperating the robot on three levels of autonomy: body, skill, and task control. On the higher levels, autonomous behavior of the robot relieves the user from significant workload. If autonomous execution fails, or autonomous functionality is not provided by the robot system, the user can select a lower level of autonomy to solve a task. The benefits of providing adjustable autonomy in teleoperation have been successfully demonstrated at RoboCup@Home competitions.

I. INTRODUCTION

Domestic service robots that shall assist persons in their activities of daily living (ADL) not only require versatile autonomous skills, but also intuitive user interfaces. While natural interaction modes such as speech, facial expressions, and gestures work well when the robot is in the direct vicinity of the user, many application scenarios require that robot and user are at different places in a home. For example, the mobile robot could fetch objects requested by a user with mobility restrictions. Hence, there is a need for controlling the robot from a distance. While complex teleoperation interfaces, such as exoskeletons, have been developed for the direct control of robots, they are quite impractical for persons in need of assistance, because they are stationary, bulky, and require the full attention of the operator. Lightweight mobile computers with touch screens are already used for many applications and have been shown to provide intuitive interfaces for the teleoperation of mobile robots.

Many high-level tasks that are difficult to achieve autonomously can already be tackled when the intelligence of the user is combined with robot skills. In this way, both sides contribute their strengths and complement each other. When the robot performs most tasks autonomously, it relieves the user from tedious and time-consuming low-level control. Only in difficult situations, in which no autonomous solution exists yet, should the user need to take over control of the robot on the most convenient, i.e., most autonomous, level.

We propose handheld user interfaces that allow persons to teleoperate a complex anthropomorphic service robot on three levels of autonomy. The user adjusts the autonomy level according to the complexity of the task and the available robot capabilities. The notion of adjustable autonomy was coined by Goodrich et al. [1]. Leeper et al. [2] evaluate body-level teleoperation GUIs for mobile manipulation.

Fig. 1. Skill and body-level GUI with a view selection column on the left, a main view in the center, a configuration column on the right, and a log message window at the bottom center. Motion is controlled directly with two joystick control UIs.
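To make the three autonomy levels concrete before they are detailed below, the following minimal sketch, entirely our own illustration and not code from the robot system described here, shows how a handheld GUI could tag commands with an autonomy level and fall back to a lower level when autonomous execution fails. All names (AutonomyLevel, TeleopCommand, dispatch) are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class AutonomyLevel(Enum):
    BODY = auto()   # direct control of body parts (drive, gaze, end-effectors)
    SKILL = auto()  # parameterized skills (navigate to goal, grasp object)
    TASK = auto()   # high-level behaviors that sequence skills autonomously

@dataclass
class TeleopCommand:
    level: AutonomyLevel
    name: str                        # e.g. "drive", "grasp", "fetch_object"
    params: dict = field(default_factory=dict)

def dispatch(cmd: TeleopCommand) -> None:
    """Route a GUI command to the matching robot subsystem (stub)."""
    if cmd.level is AutonomyLevel.BODY:
        print(f"body control: {cmd.name} {cmd.params}")
    elif cmd.level is AutonomyLevel.SKILL:
        print(f"skill execution: {cmd.name} {cmd.params}")
    else:
        print(f"task behavior: {cmd.name} {cmd.params}")

# If the task-level behavior fails, the user re-issues the relevant
# step as a skill-level command:
dispatch(TeleopCommand(AutonomyLevel.TASK, "fetch_object", {"object": "cup"}))
dispatch(TeleopCommand(AutonomyLevel.SKILL, "grasp", {"object": "cup"}))
```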
We identified body-, skill-, and task-level autonomy [3], [4] and developed teleoperation interfaces on these levels for our domestic service robots Dynamaid and Cosero [5]. Our robots have mobile manipulation and human-robot interaction capabilities that allow them to perform tasks autonomously in everyday environments, including grasping and placing objects and safe omnidirectional locomotion.

II. HANDHELD USER INTERFACES

Handheld computers such as smartphones, tablets, and slates provide a touch-sensitive display that is well suited for implementing teleoperation interfaces: it can visualize robot sensor data and the estimated state for situation awareness, and it allows for finger-based control. The GUI can be designed to mediate what the robot actually can do, which improves common ground. For instance, the user may only be given the choice between possible actions and the involved objects and locations. We investigate teleoperation with a handheld computer on three levels of autonomy. On the body level, the operator directly controls robot body parts such as the end-effectors, the gaze direction, or the omnidirectional drive. On the skill level, the operator controls robot skills, e.g., by setting navigation goals or commanding objects to be grasped. Finally, on the task level, the operator configures autonomous high-level behaviors that sequence skills.

Body-Level Teleoperation: The body-level controls allow the user to execute actions with the robot that are not covered by autonomous skills and therefore are not supported on the higher levels.
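As an illustration of the two-joystick motion control shown in Fig. 1, the sketch below maps virtual joystick deflections to velocities for an omnidirectional drive. This is a minimal sketch under our own assumptions; the velocity limits, deadzone, sign conventions, and all names are hypothetical and not taken from the authors' system.

```python
import math
from dataclasses import dataclass

@dataclass
class DriveCommand:
    vx: float      # forward velocity (m/s)
    vy: float      # lateral velocity (m/s), available on an omnidirectional base
    omega: float   # rotational velocity (rad/s)

# Assumed limits for a domestic robot; not taken from the paper.
MAX_LINEAR = 0.5   # m/s
MAX_ANGULAR = 0.8  # rad/s
DEADZONE = 0.1     # ignore tiny deflections near the joystick center

def apply_deadzone(value: float) -> float:
    """Suppress small deflections, then rescale so output spans [-1, 1]."""
    if abs(value) < DEADZONE:
        return 0.0
    return math.copysign((abs(value) - DEADZONE) / (1.0 - DEADZONE), value)

def joysticks_to_drive(left_x: float, left_y: float, right_x: float) -> DriveCommand:
    """Map two virtual joysticks (deflections in [-1, 1]) to drive velocities.

    Left stick: translation; right stick (horizontal axis): rotation.
    """
    return DriveCommand(
        vx=apply_deadzone(left_y) * MAX_LINEAR,
        vy=apply_deadzone(-left_x) * MAX_LINEAR,   # push left -> positive y
        omega=apply_deadzone(-right_x) * MAX_ANGULAR,  # push right -> clockwise
    )

# Example: push the left stick half forward and the right stick slightly right.
print(joysticks_to_drive(0.0, 0.5, 0.3))
```

In a deployed system such commands would typically be streamed to the base at a fixed rate, with the drive stopping when commands time out; this safety detail is our assumption, not a description of the authors' implementation.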