Abstract—Powered wheelchairs provide the only means of mobility for many people with severe motor disabilities. For those with impairments of both the lower and upper limbs, the available interfaces may be impossible or very difficult to use, as well as inefficient. In this paper we propose an egocentric interface based on inertial sensors placed on the user's head. The interface uses head movements to provide continuous direction and speed commands for steering the wheelchair, and allows the initial null position of the head to be set according to the user's natural posture. The development of an inertial interface for driving a wheelchair, however, presents two main challenges, namely, 1) the simultaneous movements of the head and the wheelchair, each with its own coordinate system, and 2) the free, unrestricted movement of the head. Therefore, the two coordinate systems need to be combined, and several safety features are required to ensure that only admissible commands are issued. In this paper we describe the overall implementation and preliminary experiments that show the effectiveness of the proposed solution.

I. INTRODUCTION

People who suffer from chronic or acute motor impairments may become unable to use both their lower and upper limbs. For them, the commercially available options to steer a wheelchair are either impossible to use (e.g., a hand joystick) or impractical and difficult to use (e.g., a chin joystick, a blowing tube, head switches) [1]. In more severe cases, where the motor impairment also affects head movements, several alternative interfaces have been researched, namely those based on voice commands [2], ocular movements [3], tongue movements [4], facial expressions [5], electromyography [6], and electroencephalography [7]. Most of these interfaces are not commercially available and are still limited to lab experiments. They may provide only time-sparse commands, require high levels of attention, lead to fatigue or high mental workload, and are prone to errors.
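The continuous direction and speed commands described in the abstract can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function names, angle ranges, deadzone, and speed limits are our own assumptions. Head pitch and roll, measured relative to the user-chosen null position, are mapped to bounded linear and angular velocity commands, with a deadzone and clamping standing in for the safety limits on admissible commands.

```python
# Hypothetical sketch (names and thresholds are illustrative, not the
# paper's implementation): map head pitch/roll, measured relative to a
# user-chosen null position, to continuous speed and direction commands.

V_MAX = 0.8        # m/s, assumed linear speed limit
W_MAX = 1.0        # rad/s, assumed turn-rate limit
DEADZONE = 5.0     # degrees around the null position producing no motion
RANGE = 25.0       # degrees of head tilt mapped to a full command

def axis_command(angle_deg, null_deg):
    """Map one head-tilt axis to a command in [-1, 1] with a deadzone."""
    delta = angle_deg - null_deg
    if abs(delta) < DEADZONE:
        return 0.0
    # Rescale the usable range so commands grow smoothly past the deadzone,
    # then clamp at 1.0 so large head tilts cannot exceed the limits.
    magnitude = min((abs(delta) - DEADZONE) / (RANGE - DEADZONE), 1.0)
    return magnitude if delta > 0 else -magnitude

def head_to_wheelchair(pitch_deg, roll_deg, null_pitch, null_roll):
    """Return (linear speed, angular rate) from head pitch/roll angles."""
    v = V_MAX * axis_command(pitch_deg, null_pitch)   # forward/backward
    w = W_MAX * axis_command(roll_deg, null_roll)     # left/right turn
    return v, w
```

At the null position the wheelchair stays still, and both outputs saturate at the configured limits, so every generated command remains admissible.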
Thus, to safely drive a wheelchair in real-world environments, these interfaces need to be supported by an intelligent navigation system that performs or adjusts the trajectory of the wheelchair [7]. When a person with a motor disability is still able to move his/her head, the most natural solution is to use head movements as the controller. Unfortunately, commercially available head-based solutions are not universally applicable, as they rely on proximity or switch sensors that require the user to move the head within a fixed space [8], [9]. Additionally, depending on the severity of the disability, users may be unable to use such systems because their neutral (natural) head position is very specific (e.g., in cerebral palsy).

In the context of this paper, we focus on wheelchair interfaces based on head motion. Several prototypes have been developed using inclination sensors [10], infrared cameras [11], Kinect [12], stereoscopic cameras [13], and inertial measurement units (IMUs) such as gyroscopes [14] or accelerometers [15]–[19]. These interfaces are mostly based on gesture commands, generate only discrete commands, and often have difficulty keeping a straight-line movement.

*This work has been financially supported by the Project B-RELIABLE: PTDC/EEI-AUT/30935/2017 and the Project VITASENIOR-MT: SAICT-POL/23659/2016, with FEDER/FNR/OE funding through the programs CENTRO2020 and FCT.
Daniel Gomes is with the Polytechnic Institute of Tomar, 2300-313 Tomar, Portugal (e-mail: dpv.gomes@sapo.pt).
Filipe Fernandes is with the Polytechnic Institute of Tomar, 2300-313 Tomar, Portugal (e-mail: filipef33@gmail.com).
Eduardo Castro is with the Polytechnic Institute of Tomar, 2300-313 Tomar, Portugal (e-mail: eddycastro@live.com.pt).
Gabriel Pires is with the Polytechnic Institute of Tomar, 2300-313 Tomar, and also with the Institute of Systems and Robotics, University of Coimbra, 3030-290 Coimbra, Portugal (corresponding author; e-mail: gppires@ipt.pt).
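The adaptation to a user-specific neutral head posture discussed above can be sketched as a simple calibration step. This is our own illustration, with hypothetical class and method names: the current head orientation is captured as the null position, and all subsequent readings are interpreted relative to it.

```python
# Illustrative sketch (class and method names are ours, not the paper's):
# capture the user's natural head posture as the null position, so later
# readings are interpreted relative to it.

class NullPositionCalibrator:
    def __init__(self):
        self.null_pitch = 0.0
        self.null_roll = 0.0
        self.calibrated = False

    def calibrate(self, pitch_deg, roll_deg):
        """Store the current head posture as the neutral (null) position."""
        self.null_pitch = pitch_deg
        self.null_roll = roll_deg
        self.calibrated = True

    def relative(self, pitch_deg, roll_deg):
        """Head angles relative to the stored null; zero until calibrated."""
        if not self.calibrated:
            return 0.0, 0.0
        return pitch_deg - self.null_pitch, roll_deg - self.null_roll
```

Returning zero until calibration completes is one way to guarantee that no motion command is issued before a valid null position exists.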
In particular, IMU-based approaches still suffer from drift issues. In [20], a head interface based on inertial sensors was used to control a 6-DOF manipulator. The manipulator's coordinate system is static, which greatly reduces the complexity and challenges of that interface compared with steering a wheelchair. Even so, the proposed solution illustrates well the high potential of inertial interfaces: six tetraplegic participants successfully performed pick-and-place tasks, which involved moving the manipulator, controlling a gripper, and switching between modes. Despite their high flexibility and interesting features, inertial interfaces for wheelchair control pose several challenges that need to be addressed to achieve an effective and reliable implementation.

In this paper, we propose an interface based on inertial sensors that allows the user to control a powered wheelchair with head movements. The user can select the starting (null/neutral) position of the head, making the system adjustable to a larger number of people with abnormal posture. The head command results from combining the head's coordinate system with the wheelchair's coordinate system, thereby obtaining an absolute orientation of the head regardless of the movements and inclination of the wheelchair. The system applies several levels of safety and movement limits, so that all generated head commands allow safe and reliable driving.

II. METHODS

A. Hardware

A picture of the wheelchair prototype and its hardware components is presented in Fig. 1. The current prototype is composed of three main modules: a Head Motion Unit (HMU), a wearable headset consisting of one IMU (BNO055) and one Wi-Fi module (ESP8266-Thing), both connected via an I2C bus;

Head-movement interface for wheelchair driving based on inertial sensors*
Daniel Gomes, Filipe Fernandes, Eduardo Castro, Gabriel Pires, Member, IEEE
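The combination of the head's and the wheelchair's coordinate systems described above can be sketched with unit quaternions. This is a hedged illustration under our own assumptions (w-x-y-z component order, our function names, not the paper's implementation): if q_wc is the wheelchair's world-frame orientation and q_wh the head IMU's world-frame orientation, the head orientation expressed in the wheelchair frame is conj(q_wc) * q_wh, which remains valid while both the head and the wheelchair move.

```python
# Hedged sketch of the frame-combination idea: with the wheelchair's
# world-frame orientation q_wc and the head IMU's world-frame orientation
# q_wh (both unit quaternions, w-x-y-z order), the head orientation in the
# wheelchair frame is conj(q_wc) * q_wh. Names are ours, not the paper's.

def quat_conj(q):
    """Conjugate of a unit quaternion, i.e., its inverse rotation."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    """Hamilton product of two quaternions in w-x-y-z order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def head_in_chair_frame(q_wc, q_wh):
    """Relative orientation of the head with respect to the wheelchair."""
    return quat_mul(quat_conj(q_wc), q_wh)
```

When the head rotates rigidly with the wheelchair, the relative orientation stays at the identity and no command is generated, which is exactly the decoupling from wheelchair motion and inclination that the text describes.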