Obstacle Avoidance using Event-based Visual Sensor and Time-To-Contact Processing

Fabien Colonnier, Luca Della Vedova, Rodney Swee Huat Teo, Garrick Orchard
Temasek Laboratories, National University of Singapore, Singapore
fabien.colonnier@nus.edu.sg

Abstract

Optic Flow is known to be useful in detecting obstacles and measuring Time-To-Contact, while event-based vision sensors have recently emerged as an efficient low-latency alternative to traditional frame-based vision sensors. This paper combines these two areas to present a visual collision avoidance system based on Optic Flow and Time-To-Contact computation using an event-based sensor. For demonstration, a quadrotor was fitted with the system and collision avoidance was tested. The quadrotor is shown successfully evading obstacles while flying at speeds up to 2.5 m.s⁻¹. The quadrotor performs an evasive manoeuvre, which can either be a turn away from the obstacle or a complete stop (if no safe forward path is detected). An example of an obstacle detection shows that the maximal Time-To-Contact error is below 1.2 s. A video of the different experiments is provided as supplementary data.

1 Introduction

Recently, Unmanned Aerial Vehicles (UAVs) have been finding increasingly widespread applications, including photography, film [Cheng, 2015], visual inspection [Nikolic et al., 2013], art [Schoellig et al., 2014], documenting athletes [Dasgupta et al., 2018], home delivery [Hoareau et al., 2017], and rescue operations [Michael et al., 2014]. The use of UAVs requires robust perception of the environment to avoid obstacles in these different scenarios, especially at high speeds.

For large rotorcraft [Scherer et al., 2008] and winged aerial vehicles [Bry et al., 2012], LIDAR can be used with great efficacy, although its cost still proves prohibitive in many applications. On the other hand, visual sensing provides a cheaper, passive, lower-power alternative, but requires more complex algorithms to achieve similar robustness. As speed increases, the quality of the visual data typically decreases due to motion blur, thus degrading system performance. Nevertheless, frame-based vision sensors have been shown to perform well in generating collision-free trajectories [Shen et al., 2012]. However, limitations on computational load and mapping accuracy mean that LIDAR is still favoured for fast flight [Mohta et al., ]. More recently, deep learning techniques have been used for navigation [Gandhi et al., 2017] and for collision avoidance [Loquercio et al., 2018].

Optic Flow (OF) is known to be used by insects for navigation [Land and Nilsson, 2012], leading researchers to attempt to mimic insect behaviours in artificial systems. The centering reflex of bees navigating through a corridor, suggested by [Srinivasan et al., 1996], has been reproduced on a mobile robot using an OF-based controller [Santos-Victor and Sandini, 1997]. Others later improved corridor navigation with more elaborate algorithms. Conroy et al. used a Wide-Field Integration algorithm, previously presented for a hovering task [Humbert et al., 2007], to provide a navigation signal from local OF measurements [Conroy et al., 2009]. Zingg et al. used the KLT algorithm [Shi and Tomasi, 1994] to compute OF at 20 Hz, together with a speed estimate, to navigate safely [Zingg et al., 2010]. Roubieu et al.
managed to perform navigation in different corridor configurations, accounting for new biological findings [Serres et al., 2008], using minimalistic bio-inspired OF sensors [Roubieu et al., 2014]. Some applications adopted a downward-facing camera to compute OF, either with elaborate sensor fusion [Bristeau et al., 2011] or in combination with sonar measurements to provide a speed estimate [Honegger et al., 2013]. The first flying robot to perform obstacle avoidance using OF ran the I2A algorithm [Srinivasan, 1994] onboard a lightweight fixed-wing vehicle [Zufferey and Floreano, 2006].

Computing the Time-To-Contact from OF measurements has already been demonstrated [Camus, 1995] and used for obstacle avoidance on ground vehicles [Coombs et al., 1998; Song and Huang, 2001]. Others based their obstacle detection on the Locust LGMD
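As background, the classical relation linking OF to Time-To-Contact can be stated as follows; this is a textbook sketch of the kind of formulation used, e.g., in [Camus, 1995], not necessarily the exact computation implemented in this paper. For a camera translating along its optical axis toward a surface at depth \(Z(t)\), a feature at image position \(x\) relative to the focus of expansion satisfies \(x = fX/Z\) (with \(f\) the focal length and \(X\) the fixed lateral coordinate of the point), hence

\[
\dot{x} \;=\; -\frac{\dot{Z}}{Z}\,x
\quad\Longrightarrow\quad
\tau \;=\; \frac{Z}{-\dot{Z}} \;=\; \frac{x}{\dot{x}},
\]

so the Time-To-Contact \(\tau\) is obtained directly as the ratio of a feature's image position to its OF, without knowledge of metric depth or speed. Equivalently, since the flow field under this motion is \(\mathbf{v}(\mathbf{x}) = \mathbf{x}/\tau\), its divergence satisfies \(\operatorname{div}\mathbf{v} = 2/\tau\), which is why \(\tau\) can be read out from purely visual measurements.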