A Multimodal System using Augmented Reality, Gestures, and Tactile Feedback for Robot Trajectory Programming and Execution

Wesley P. Chan 1*, Camilo Perez Quintero 1*, Matthew K. X. J. Pan 1, Maram Sakr 1, H.F. Machiel Van der Loos 1, and Elizabeth Croft 2

Abstract— Currently available interfaces for programming industrial robots, e.g., teach pendants and computer consoles, are often unintuitive, resulting in a slow and tedious process for teaching robot tasks. Kinesthetic teaching, i.e., teaching robot motions by placing the robot in a gravity-compensated state and then moving the robot through the desired motions, provides an alternative for small robots for which safe interaction can be guaranteed. However, for many large industrial robots, physical interaction is not an option. Emerging augmented reality technology offers an alternative interface with the potential to make robot programming faster, safer, and more intuitive. The use of augmented reality admits the presentation of large amounts of rich, visual, in-situ information. However, it may also overload the user's visual information capacity, or may not provide sufficient feedback regarding the state of the robot. By adding gestural control and tactile feedback to augmented reality, we propose a system that allows users to program and execute robot tasks in an efficient and intuitive manner, providing relevant feedback through different channels to maximize clear communication of task commands and outcomes.

I. INTRODUCTION

For decades, teach pendants, augmented with computer consoles, have been the de facto interface for programming industrial robots. Over time, this programming modality has seen little change, mainly due to the infrequency of robot programming for fully automated tasks once an assembly operation is set up. However, the recent introduction of less-expensive and increasingly interactive robots has allowed for some flexibility in the manufacturing process.
Thus, the infrequency of programming and reprogramming that was traditionally expected for industrial service robots may not apply. For example, kinesthetic teaching of robots such as Baxter, Sawyer, and the KUKA iiwa allows for easy and frequent reprogramming, permitting these robots to produce customized products in small lot sizes and thus generate a highly variable product mix. As a result, a paradigm shift has been observed where the industry is moving from large-scale full automation, using only robots, to smaller-scale reconfigurable robots and human-robot hybrid collaborative teams [1], [2], [3].

Along with this shift come new requirements for flexible and intuitive methods to reprogram and interact with such industrial robots, in order to ensure safety and efficiency. Along with kinesthetic teaching methods, emerging augmented reality (AR) technology provides a promising alternative to traditional teach pendants for addressing such requirements. With the increasing complexity of industrial robotic systems, which may not be safe for physical interaction, there is a growing demand for alternative user interfaces for robot programming. This alternative should provide sufficient capacity for communicating all the necessary information to the user, without adding a layer of complexity that distracts the user from the task [4], [5], [6]. Traditional programming methods lack such capacity and often result in a cumbersome interaction.

1 Collaborative Advanced Robotics and Intelligent Systems (CARIS) Laboratory, Department of Mechanical Engineering, University of British Columbia
2 Monash University
* These authors contributed equally.

Fig. 1. A user showcases our multimodal system by following a sine force pattern (pink line). The user controls the normal force exerted on the surface (blue arrow) and the end-effector linear velocity by moving his forearm and changing his muscle activation level.
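The force-following interaction shown in Fig. 1 can be illustrated with a minimal sketch. The paper's actual control law is not specified in this excerpt, so the function names, parameter values, and the linear activation-to-velocity mapping below are all hypothetical, chosen only to make the idea concrete: a sine-shaped normal-force reference defined along the surface path, and a normalized muscle activation level scaling the end-effector speed.

```python
import math

# Hypothetical parameters (not from the paper) for illustration only.
F_BASE = 5.0      # baseline normal force on the surface (N)
F_AMP = 2.0       # amplitude of the sine force pattern (N)
WAVELENGTH = 0.2  # spatial period of the pattern along the path (m)
V_MAX = 0.05      # maximum end-effector linear speed (m/s)


def reference_force(distance_along_path: float) -> float:
    """Target normal force (N) at a given distance (m) along the surface path."""
    return F_BASE + F_AMP * math.sin(2 * math.pi * distance_along_path / WAVELENGTH)


def velocity_from_activation(activation: float) -> float:
    """Map a normalized muscle activation level in [0, 1] to linear speed (m/s)."""
    activation = min(max(activation, 0.0), 1.0)  # clamp noisy EMG estimates
    return V_MAX * activation


# At the start of the pattern the reference equals the baseline force,
# and a mid-range activation commands half the maximum speed.
print(reference_force(0.0))           # 5.0 N
print(velocity_from_activation(0.5))  # 0.025 m/s
```

In a real system, the commanded force would feed an impedance or hybrid force/position controller, and the activation level would come from a filtered EMG signal rather than a raw scalar.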
Augmented reality enables us to create a rich set of user interfaces that are co-located with the robot, allowing the user to have better situational awareness [7]. Furthermore, it permits visualization of, and interaction with, hidden process variables (e.g., force, velocity, acceleration) that traditional programming methods do not expose to the operator during execution. By improving the quality of shared information between human and robot, we can achieve more effective human-robot interaction [8].

With augmented reality devices and development tools such as the Microsoft HoloLens [9], Epson Moverio [10], and Magic Leap [11] increasingly available, researchers have explored the use of augmented reality for various tasks including assembly [12], maintenance [13], repair [14], and training [15], and have found positive results. While augmented