PathFinder: An Autonomous Mobile Robot Guided by Computer Vision

Andre R. de Geus 1,2, Marcelo H. Stoppa 1, Sergio F. da Silva 1,2
1 Modeling and Optimization Program, Federal University of Goias, Catalao, Goias, Brazil
2 Biotechnology Institute, Federal University of Goias, Catalao, Goias, Brazil
Email: geus.andre@ufg.br, mhstoppa@gmail.com, sergio@ufg.br

Abstract— Localization is a key research topic in mobile robotics, responsible for enabling sensor-equipped robotic systems to navigate with a degree of autonomy. Unfortunately, sensors frequently produce reading errors that disturb localization. In this paper, we describe the development of a computer vision system for the autonomous navigation of a robot in a simulated environment. The system uses an external camera to detect the robot, concentrating the localization problem on feature extraction from the images. It also uses Artificial Intelligence algorithms to determine the path that yields the best solution. Our results show that the robot was able to follow the planned path and reach the goal, validating the proposed method.

Keywords: Autonomous Navigation, Path Planning, Artificial Intelligence, Computer Vision

1. Introduction

According to [10], mobile robots are automatic transport devices: mechanical platforms equipped with a locomotion system, able to navigate through a given environment with a certain level of autonomy. Autonomy is not just a matter of energy sufficiency, but also of the processing capability to plan and execute tasks. Navigation in unknown environments is one of the areas of greatest interest for mobile systems, offering a wide range of applications, from systems that assist in house cleaning [5] to dangerous search-and-rescue operations [6]. Navigation is an intrinsic feature of robots, allowing them to move freely through their environment until they reach their goal.
According to [7], dead-reckoning is a classic navigation method whose accuracy depends directly on the quality of the sensors used. Since each location estimate is based on previous ones, the accumulation of position errors is inevitable. Compensating for position errors demands the integration of multiple sensors: combining their data significantly improves the estimate of the robot's location in its environment. The technique proposed by [3] uses sensors that capture direction, acceleration, and engine revolutions. The authors demonstrate various improvements, considering that heterogeneous sensors have different perceptions and can cooperate with each other.

To increase accuracy, adding information extracted from images provided by a webcam proves to be a promising alternative for assisting path planning and execution. This is the motivation for the present work, which uses a webcam with a panoramic view to guide a robot to a goal, without human intervention and without on-board sensors.

2. Problem Description

The evaluated system consists of a robot, built on the Mindstorms NXT® platform, with coloured parts that assist in determining its position. Large boxes are used to simulate obstacles that block the robot's way. The goal is represented by a small sheet on the floor. A low-resolution webcam is positioned on the environment's ceiling. The processing is performed by a laptop connected to the webcam, which sends the movement commands to the NXT®. Fig. 1 illustrates the environment layout and its components, where the target is a green rectangle and the obstacle is a white rectangle.

Fig. 1: Side view of PathFinder's environment layout

The environment layout has restrictions on the objects' colours, due to the filters used to localize them. The robot has a blue marker on its head and a red one on its tail (Fig. 2). The obstacles are white boxes from the NXT® kit and the goal is a small green sheet.

Int'l Conf. Artificial Intelligence | ICAI'15 | 55
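The paper does not specify its colour filters, but the blue-head/red-tail convention is enough to recover the robot's full pose from a single overhead frame. As a minimal sketch (not the authors' implementation), the code below locates both markers by simple RGB thresholding and derives position and heading; the `is_blue` and `is_red` thresholds are assumed values for illustration, not the paper's.

```python
import math

def centroid(image, match):
    """Centroid (x, y) of all pixels for which match(r, g, b) is True.
    The image is a list of rows; each pixel is an (r, g, b) tuple."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if match(r, g, b):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Assumed thresholds standing in for the paper's (unspecified) colour filters.
is_blue = lambda r, g, b: b > 200 and r < 100 and g < 100
is_red = lambda r, g, b: r > 200 and g < 100 and b < 100

def robot_pose(image):
    """Position = midpoint of head and tail markers;
    heading = angle of the tail-to-head vector."""
    head = centroid(image, is_blue)  # blue marker on the robot's head
    tail = centroid(image, is_red)   # red marker on the robot's tail
    if head is None or tail is None:
        return None  # robot not visible in this frame
    x = (head[0] + tail[0]) / 2
    y = (head[1] + tail[1]) / 2
    heading = math.atan2(head[1] - tail[1], head[0] - tail[0])
    return x, y, heading
```

A real system would work in HSV space and clean the masks morphologically before taking centroids, since RGB thresholds are fragile under lighting changes; the two-marker geometry, however, carries over unchanged.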
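The error accumulation that motivates this external-camera approach is easy to reproduce. The hypothetical sketch below integrates (distance, heading-change) odometry readings in the usual dead-reckoning fashion; a small constant heading bias, of the kind a cheap sensor introduces, makes the position estimate drift without bound, which an absolute observation such as a camera fix can correct.

```python
import math

def dead_reckon(pose, steps):
    """Integrate a sequence of (distance, heading_change) odometry
    readings starting from pose = (x, y, theta). Each step first turns,
    then advances, so an error in any reading taints every later pose."""
    x, y, theta = pose
    for dist, dtheta in steps:
        theta += dtheta
        x += dist * math.cos(theta)
        y += dist * math.sin(theta)
    return x, y, theta
```

For example, four unit steps straight ahead give (4.0, 0.0, 0.0), while the same steps with a 0.02 rad heading bias per reading veer steadily off the intended line.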