A Visual SLAM System on a Mobile Robot Supporting Localization Services to Visually Impaired People

Quoc-Hung Nguyen 1, Hai Vu 1, Thanh-Hai Tran 1, David Van Hamme 2, Peter Veelaert 2, Wilfried Philips 2, Quang-Hoan Nguyen 3

1 International Research Institute MICA, Hanoi University of Science and Technology
2 Ghent University/iMinds (IPI)
3 Hung Yen University of Technology and Education

{quoc-hung.nguyen,hai.vu,thanh-hai.tran}@mica.edu.vn
dvhamme@telin.ugent.be, peter.veelaert@hogent.be
wilfried.philips@ugent.be, quanghoanptit@yahoo.com.vn

Abstract. This paper describes a Visual SLAM system developed on a mobile robot in order to provide localization services to visually impaired people. The proposed system aims to provide these services in small- or mid-scale environments, such as the inside of a building or a school campus, where conventional positioning signals such as GPS or WiFi are often unavailable. Toward this end, we adapt and improve existing vision-based techniques in order to handle the challenges of indoor environments. We first design an image acquisition system to collect visual data. On one hand, a robust visual odometry method is adjusted to precisely reconstruct the routes traveled in the environment. On the other hand, we utilize the Fast Appearance-Based Mapping (FAB-MAP) algorithm, which is arguably the most successful approach for matching places in large-scale scenarios. In order to better estimate the robot's location, we utilize a Kalman filter that combines the matching result of the current observation with a prediction of the robot's state based on its kinematic model. The experimental results confirm that the proposed system is feasible for navigating visually impaired people in indoor environments.

Keywords: Visual Odometry, Place Recognition, FAB-MAP Algorithm, Kalman Filter

1 Introduction

Autonomous localization and navigation are extremely desirable services for visually impaired people.
Most commercial solutions are based on the Global Positioning System (GPS), WiFi, LIDAR, or a fusion of them. The iNavBelt uses ultrasonic sensors to produce a 120-degree wide view ahead of the user [19]. The GuideCane has an ultrasonic sensor head mounted on a long handle [3]. The EyeRing, developed by MIT's Media Lab, is a finger-worn device that translates images into aural signals. Although such devices are useful to blind and visually impaired people in some environments, their major drawbacks are that they provide only limited information and require well-focused user control.