Augmented Reality Controlled Smart Wheelchair Using Dynamic Signifiers for Affordance Representation

Rodrigo Chacón-Quesada and Yiannis Demiris¹

Abstract— The design of augmented reality interfaces for people with mobility impairments is a novel area with great potential, as well as multiple outstanding research challenges. In this paper we present an augmented reality user interface for controlling a smart wheelchair with a head-mounted display, providing assistance for people with restricted mobility. Our motivation is to reduce the cognitive requirements of controlling a smart wheelchair. A key element of our platform is the ability to control the smart wheelchair using the concepts of affordances and signifiers. In addition to the technical details of our platform, we present a baseline study evaluating the platform through user trials with able-bodied individuals and two different affordances: 1) Door - Go Through and 2) People - Approach. To present these affordances to the user, we compared fixed symbol-based signifiers against our novel dynamic signifiers in terms of how easy it is to understand the suggested actions and their relation to the objects. Our results show a clear preference for dynamic signifiers. In addition, we show that the task load reported by participants is lower when controlling the smart wheelchair with our augmented reality user interface than when using the joystick, which is consistent with their qualitative answers.

I. INTRODUCTION

Controlling a powered wheelchair can be a cognitively challenging task for some users [1]. To enable easier control of powered wheelchairs, a variety of control interfaces have been proposed; the most traditional method is the joystick, but others include fingertip control, head tilt, and electromyography and/or electroencephalogram signals [2], [3], [4], [5].
AR User Interfaces (UIs) have the potential to introduce more natural interactions between the user and the robot that are easier to understand and learn. Even though mobile Augmented Reality (AR) interfaces for wheelchairs have been proposed before [6], [7], [8], to the best of our knowledge these interfaces have never been used as a way to enable control, but instead mainly as a way of providing visual aids. In this paper, we propose a head-mounted display (HMD) AR UI as a new method for controlling a smart wheelchair. To this end, we integrated a smart wheelchair with an AR UI developed for the Microsoft HoloLens. Our platform jointly locates the objects and people in the smart wheelchair's surroundings and informs the users, through the use of signifiers, about their affordances.

¹ The authors are with the Personal Robotics Laboratory, Dept. of EEE, Imperial College London, SW7 2AZ, UK. {r.chacon-quesada17, y.demiris}@imperial.ac.uk. Rodrigo Chacón-Quesada is supported by a PhD studentship jointly funded by the University of Costa Rica and the Ministry of Science, Technology and Telecommunications of Costa Rica. Yiannis Demiris is supported by a Royal Academy of Engineering Chair in Emerging Technologies. Lab info and videos: www.imperial.ac.uk/PersonalRobotics

Fig. 1: Composite image of the visualisations rendered on the user's view through the AR headset. (1) Smart Wheelchair. (2) We use marker symbols to represent locations. (3) People - Approach affordances are placed when persons are detected. (4) The cursor and timing bar give feedback to the user during the aiming and selection stages, respectively. (5) The different aiming methods available can be selected by the user using an on-board menu. (6) Door - Go Through affordances are shown when open doors are detected. Selected signifiers are shown in green and non-selected signifiers in purple.
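Such an affordance check, deciding whether a detected object supports a suggested action before a signifier is shown, can be sketched minimally. The snippet below is an illustrative sketch only, not the paper's implementation: the class, function names, and all numeric thresholds are our own assumptions, mirroring the idea that a door affords "Go Through" when its width is compatible with the wheelchair's size.

```python
from dataclasses import dataclass

# Assumed parameters (hypothetical values, not from the paper)
WHEELCHAIR_WIDTH_M = 0.70   # wheelchair width in metres
CLEARANCE_M = 0.10          # safety margin required on each side

@dataclass
class DetectedDoor:
    width_m: float          # opening width estimated by perception
    is_open: bool           # whether the door was detected as open

def door_go_through_affordance(door: DetectedDoor) -> bool:
    """A door affords 'Go Through' only if it is open and wide
    enough for the wheelchair plus clearance on both sides."""
    required = WHEELCHAIR_WIDTH_M + 2 * CLEARANCE_M
    return door.is_open and door.width_m >= required

print(door_go_through_affordance(DetectedDoor(0.95, True)))   # True
print(door_go_through_affordance(DetectedDoor(0.75, True)))   # False
```

Under these assumed numbers, only openings of at least 0.90 m are signified as passable; a real system would take both values from the wheelchair's specification and its perception pipeline.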
Affordances are defined as the relationship between the properties of an object and the capabilities of an agent (whether human, animal or robot) that determines how the detected objects can be used to accomplish higher-level tasks [9], [10]. For example, a door is signified as "passable" if its width is compatible with the size of the wheelchair. A composite image showing some of the visualisations rendered on the user's view through the HMD is shown in Fig. 1. Our platform has the potential to serve users with a variety of mobility impairments, such as quadriplegia, amputations and cerebral palsy, given the multi-modality of input methods provided within our HMD UI, while at the same time aiming to reduce the cognitive requirements of controlling the smart wheelchair by assisting the user in the perception and planning tasks naturally involved in this kind of activity.

II. RELATED WORK

A. Virtual and Augmented Reality for Smart Wheelchairs

To enable easier control of wheelchairs, a variety of control interfaces have been proposed. Leaman et al. [5] give

Preprint version; final version available at https://ieeexplore.ieee.org/abstract/document/8968290
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2019), pp. 4812-4818
Published by: IEEE
DOI: 10.1109/IROS40897.2019.8968290