Mobile Robot Localization Based on an Omnidirectional Stereoscopic Vision Perception System

Cyril DROCOURT, Laurent DELAHOCHE, Claude PEGARD, Arnaud CLERENTIN
GRACSY – 7 rue du Moulin Neuf, 80000 Amiens – FRANCE
Cyril.Drocourt@iut.u-picardie.fr, Laurent.Delahoche@u-picardie.fr

Abstract

This paper presents an absolute localization system based on stereoscopic omnidirectional vision. To this end we use an original perception system which allows our omnidirectional vision sensor SYCLOP to move along a rail. The first part of our study deals with building the sensorial model from the two stereoscopic omnidirectional images. To solve this problem we propose an approach based on the fusion of several criteria, carried out according to the Dempster-Shafer rules. The second part is devoted to exploiting this sensorial model to localize the robot by matching the sensorial primitives with a map of the environment. We analyze the performance of our global absolute localization system on several elementary moves of the robot, in different environments.

I. Introduction

The analysis of the navigation problem stated by Leonard and Durrant-Whyte, "Where am I?", "Where am I going?", and "How should I get there?" [5], shows that localizing the robot in its environment constitutes a prerequisite to any decision making.

A first category of localization methods that satisfies the constraints imposed by navigation is based on the detection of artificial beacons. Such localization systems are generally employed in industrial applications, for example the navigation system developed by Durrant-Whyte [7] for an autonomous guided vehicle that transports containers. These methods are fast and reliable, but they lack flexibility and modularity because the robot's environment must be fitted out with beacons beforehand.

Another category of methods consists in referencing the robot directly on characteristic elements of its environment. These solutions offer great modularity and allow the robot to localize itself with respect to natural landmarks. This kind of localization is generally founded on a matching stage between a sensorial model and a theoretical map of the environment. The perception systems used in that case are often vision systems and range-finding ones. Thus, Gonzalez [8] determines the absolute position of his robot by using line segments as sensorial primitives, obtained with a rotating laser rangefinder. In [5], Leonard develops a dynamic localization method based on the location of "geometric beacons", detected by a belt of ultrasonic sensors; these geometric beacons are determined from regions of constant depth (RCD). Krotkov [1] and Atiya [2] use a CCD camera to detect vertical lines of the environment as natural beacons. Similarly, Yagi uses an omnidirectional vision system to develop navigation and environment map building methods [3][4].

We can notice that the robustness of this kind of localization method is mainly linked to the matching stage: the more precise and rich the information given by the sensorial model, the more robust the matching stage will be. That is why we have worked on an original method of sensorial model building based on the use of a stereoscopic omnidirectional perception system.

The first part of this paper presents the principle of our stereoscopic omnidirectional perception system. The second part deals with our sensorial model building method using multicriteria fusion, carried out according to the Dempster-Shafer rules. Our absolute localization method is presented in the last part. In the conclusion we analyze the experimental results obtained with our mobile robot SARAH.
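Since the multicriteria fusion follows the Dempster-Shafer rules, a minimal sketch of Dempster's rule of combination may help fix ideas. Only the combination rule itself is standard; the frame of discernment, the criteria names and the mass values below are illustrative assumptions, not the ones actually used in the paper.

    from itertools import product

    def dempster_combine(m1, m2):
        # Combine two mass functions with Dempster's rule. Each mass
        # function is a dict mapping a frozenset of hypotheses (a focal
        # element) to its mass; the masses of each function sum to 1.
        combined = {}
        conflict = 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass falling on the empty set
        norm = 1.0 - conflict                # Dempster normalization factor
        if norm <= 0.0:
            raise ValueError("total conflict: the sources fully contradict")
        return {focal: mass / norm for focal, mass in combined.items()}

    # Illustrative frame: two radial lines either match ("M") or not ("N").
    theta = frozenset({"M", "N"})
    m_crit1 = {frozenset({"M"}): 0.6, theta: 0.4}
    m_crit2 = {frozenset({"M"}): 0.5, frozenset({"N"}): 0.2, theta: 0.3}
    print(dempster_combine(m_crit1, m_crit2))
    # -> mass ~0.77 on "M", ~0.09 on "N", ~0.14 left on the whole frame

In this scheme, the mass left on the whole frame encodes each criterion's ignorance, which Dempster's rule propagates instead of forcing a premature decision on each criterion separately.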
II. Description of the stereoscopic system

The stereoscopic omnidirectional sensor mounted on our mobile robot SARAH is based on the rigid translation of the omnidirectional vision system SYCLOP used in our laboratory [6]. The SYCLOP system is similar to the COPIS one used by Yagi [4] and is composed of a conic mirror and a CCD camera (Figure 1). It allows us to detect all the vertical landmarks of the environment, which project as radial lines in the omnidirectional image. The rigid translation is performed on two horizontal rails which allow a precise straight move in the horizontal plane of the SYCLOP sensor. The system thus ensures the acquisition of two omnidirectional images of the environment taken 40 centimeters apart. The distance between these two shots has been determined experimentally by making a compromise between
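The 40 cm baseline directly governs how a vertical landmark seen in both shots can be triangulated from its two azimuths. The following sketch is a minimal illustration of that geometry, not the system's actual implementation: it assumes azimuths measured counter-clockwise from the rail axis, a second shot taken 0.40 m along that axis, and a hypothetical function name and degeneracy tolerance of our own.

    import math

    def triangulate_vertical_landmark(theta1, theta2, baseline=0.40):
        # Intersect the two bearing rays: the first shot is taken at the
        # origin, the second at (baseline, 0) on the rail axis; azimuths
        # theta1 and theta2 are in radians, measured from that axis.
        denom = math.sin(theta2 - theta1)
        if abs(denom) < 1e-6:
            raise ValueError("rays nearly parallel: intersection ill-conditioned")
        r1 = baseline * math.sin(theta2) / denom   # range from the first shot
        return r1 * math.cos(theta1), r1 * math.sin(theta1)

    # A landmark bearing 80 deg in the first shot and 100 deg in the second
    # lies about 1.13 m in front of the middle of the 0.40 m baseline.
    x, y = triangulate_vertical_landmark(math.radians(80), math.radians(100))
    print(round(x, 3), round(y, 3))   # -> 0.2 1.134

Bearings nearly aligned with the rail axis give an ill-conditioned intersection, and a short baseline degrades range accuracy for distant landmarks, which illustrates why the distance between the two shots has to be chosen as a compromise.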