RFID-based topological and metrical self-localization in a structured environment

Y. Raoui 1, M. Göller 2, M. Devy 1, T. Kerscher 2, J.M. Zöllner 2, R. Dillmann 2, A. Coustou 1

1 CNRS; LAAS; 7 avenue du Colonel Roche, 31077, Toulouse, France
Université de Toulouse; UPS, INSA, INPT, ISAE; LAAS; Toulouse, France
2 Forschungszentrum Informatik an der Universität Karlsruhe (FZI), Interactive Diagnosis and Service Systems (IDS), Haid-und-Neu-Str. 10-14, 76131 Karlsruhe, Germany

Abstract— This paper describes several methods proposed for RFID-based self-localization of a Trolley Robot that executes motions and interacts with a User in a store: such a robot must know its position precisely so that it can guide the User to the shelves where Products are put on display. The robot position is expressed as a vector (area, x, y, θ), so that the localization is both topological (to determine when the robot moves from one area to another) and metrical (to know where the robot is with respect to an area reference frame). Two different strategies based on RFID tags are proposed for the topological and metrical localization, either with tags embedded in the ground or with tags set on the shelves. Experiments and simulations are presented, and a final discussion stresses the pros and cons of every solution.

Index Terms— self-localization, topological map, RFID tags, landmarks

I. INTRODUCTION

New challenges for roboticists and new markets for robot makers come from advanced services proposed by robots to humans in public areas. Many on-going projects study Guide Robots for museums, Person Movers in pedestrian streets, Assistant Robots for elderly and disabled people at home or in hospitals [12], [10]. This work aims at developing Advanced Behaviours for a Trolley robot that must assist a User when doing shopping in a commercial center: our current demonstrator is presented in figure 1 on the left, while a guide robot developed at LAAS is shown on the right.
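The hybrid pose vector (area, x, y, θ) introduced above combines a discrete topological label with a continuous metrical pose. As a minimal illustration only (the class and field names below are hypothetical, not from the paper), such a pose could be represented as:

```python
import math
from dataclasses import dataclass

@dataclass
class TrolleyPose:
    """Hybrid robot pose: a topological area label plus a metrical
    position (x, y, theta) expressed in that area's reference frame.
    Illustrative sketch only; names are not from the paper."""
    area: str      # topological component: current area of the store
    x: float       # metrical component, in the area frame (meters)
    y: float       # metrical component, in the area frame (meters)
    theta: float   # heading (radians)

    def distance_to(self, gx: float, gy: float) -> float:
        """Euclidean distance to a goal given in the same area frame."""
        return math.hypot(gx - self.x, gy - self.y)

# Example: check whether a goal is reached within a 25 cm tolerance,
# the accuracy requirement stated later in the paper.
pose = TrolleyPose(area="shelf_zone_3", x=1.0, y=2.0, theta=0.0)
print(pose.distance_to(1.1, 2.2) <= 0.25)  # → True (~0.22 m away)
```

Topological localization then amounts to updating `area` when the robot crosses an area boundary, while metrical localization refines (x, y, θ) inside the current area.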
The Trolley is endowed with several sensors in order to detect, track and identify its User (vision, Radio Frequency Identification i.e. RFID, audio), to interact with him (haptic, stereo, audio), to navigate safely in the store while detecting and avoiding obstacles (Laser Range Finder, belt of micro-cameras) and finally, to locate itself (RFID, vision). Several User-Trolley interaction modes have been defined in [7]:

• In the Steering and Following Modes, the User knows the store and does not need to be guided: in the Steering Mode, the Trolley can be used as a manual one, but with active control by the User thanks to a haptic handle; in the Following Mode, the User asks the Trolley to follow him, using visual servoing to control robot motions without contact.

• On the contrary, in the Guiding and Autonomous Modes, the Trolley has to plan and execute trajectories in the store. In the Guiding Mode, the User enters a list of Products to be purchased; the Trolley guides the User along an optimal trajectory in the store, managing the distance to the User. In the Autonomous Mode, the User can order the Trolley to go autonomously to a meeting point.

This work is supported by the EU STREP Project Commrob funded by the European Commission Division FP6 under Contract FP6-045441. Corresponding author: michel@laas.fr

Fig. 1. The Shopping Trolley demonstrator from FZI (left); the RACKHAM demonstrator from LAAS (right).

This paper focuses only on the self-localization function, required mainly in the Guiding or Autonomous Modes when the robot navigates towards a given objective (in fact the exact robot position is tracked at all times, even when the User is pushing the Trolley, because the mode can suddenly be changed). The Trolley must know its position and orientation precisely, so that it can reach the objective with good accuracy, i.e. at most 25 cm from the goal, typically the shelf where the next product to be purchased is