Exploration Transform: A stable exploring algorithm for robots in rescue environments

Stephan Wirth
University of Koblenz and Landau
Universitätsstr. 1, 56070 Koblenz, Germany
stwirth@uni-koblenz.de

Johannes Pellenz
University of Koblenz and Landau
Universitätsstr. 1, 56070 Koblenz, Germany
pellenz@uni-koblenz.de

Abstract — Autonomous robots in rescue environments have to fulfill several tasks at the same time: they have to localize themselves, build maps, detect victims, and decide where to go next for further exploration. In this contribution we present an approach that provides a robust solution for the exploration task: based on the knowledge of the environment that the robot has already acquired, the algorithm calculates a path to the next interesting "frontier". Our comprehensive approach takes into account both the distance to the next frontier and the difficulty of the path for the robot. Such difficulties can result from narrow passages, but also from wide, open spaces where the sensors cannot detect any landmark. For the native exploration task, the algorithm is fed with occupancy grids. For the search task, it can also process maps that encode additional information, e.g. places that have not yet been searched by other sensors. The Exploration Transform was successfully implemented on our mobile system Robbie and was used during the RoboCup German Open 2007 and the RoboCup World Championship 2007. Using this approach, our team "resko" achieved the "Best in Class Autonomy Award" in the Rescue Robot League in both competitions.

Keywords: Robotics, RoboCup Rescue, autonomous navigation, path planning, exploration, USAR

I. INTRODUCTION

Robots need to know about the structure of their environment to be able to fulfill complex tasks [7].
However, in rescue situations resulting from earthquakes, this knowledge is not available a priori: even if floor plans of a collapsed building are accessible, they are useless because it is likely that the furniture and even walls have been moved due to the impact. Therefore, the robots have to build their own maps while localizing themselves. This problem is widely known as the SLAM problem (Simultaneous Localization and Mapping) [3], [5]. Beyond that, autonomous robots must also decide where to go next to find out more about the environment (see [2]). This problem can be split into two questions:

1) Select a target: Where should the robot go next to extend the map or to search for victims?
2) Choose a path: Which way should the robot take to reach this target?

The approach that we present in this paper uses an occupancy grid [1] that our mobile system Robbie builds automatically while driving through an unknown environment. The grid consists of cells which store the probability that the cell is occupied. To generate this map, the algorithm uses the robot's odometry data and the sensor readings of a Hokuyo URG-04LX laser range finder. The map building process uses a particle filter to match the current scan onto the occupancy grid. This approach was successfully tested during the RoboCup World Championship 2006, but relied on a remotely controlled robot. To make the system fully autonomous, the exploration behavior described in the following sections was implemented.

The paper is organized as follows: In Section II, the related work is presented and the different approaches for exploration and path planning are explained. In Section III, the Exploration Transform is developed as a combination of the solutions discussed in Section II. It is extended to encourage the robot to stay within sight of walls, so that the sensors can always detect some landmarks. This behavior is known as "coastal navigation" [6].
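The occupancy grid described above can be illustrated with a minimal sketch. All names, the cell resolution, and the Bayesian log-odds update rule are assumptions for this example; the paper does not specify its exact update formula.

```python
import numpy as np

UNKNOWN_PRIOR = 0.5  # prior probability: nothing known about the cell yet

class OccupancyGrid:
    """Grid of cells, each storing P(cell is occupied)."""

    def __init__(self, width, height):
        self.p = np.full((height, width), UNKNOWN_PRIOR)

    def update(self, row, col, p_occ_given_measurement):
        # Fuse a new sensor reading into the cell using the common
        # log-odds form of the Bayesian update (an assumption here).
        def log_odds(p):
            return np.log(p / (1.0 - p))
        l = log_odds(self.p[row, col]) + log_odds(p_occ_given_measurement)
        self.p[row, col] = 1.0 - 1.0 / (1.0 + np.exp(l))

grid = OccupancyGrid(10, 10)
grid.update(2, 3, 0.9)  # a laser beam ended here: likely occupied
grid.update(2, 2, 0.2)  # the beam passed through here: likely free
```

Repeated updates of the same cell accumulate evidence, so a cell crossed by many beams converges toward "free" even if single readings are noisy.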
The complete algorithm was implemented on our rescue robot Robbie. Section IV describes the results of the experiments we conducted in different test arenas, including the rescue arenas of the RoboCup German Open 2007 and the RoboCup World Championship 2007.

II. RELATED WORK

A. Exploration

In [8], Brian Yamauchi proposes a frontier-based approach to find the next exploration target. Occupancy grids are used as input for the algorithm. The key idea is as follows: in order to get new information, go to a frontier that separates known from unknown regions. Such a frontier is a cell in the occupancy grid that is marked as free but has a neighboring cell that is marked as unknown. A segment of adjacent frontier cells is considered a potential target if it is large enough for the robot to pass through. If more than one potential target is detected in the occupancy grid, the closest target is selected. An example of a floor plan and the extracted frontiers is given in Fig. 1.

González-Baños and Latombe propose another approach to select the next target point: in [2] they present an algorithm that finds the point in the map from which the sensors can be used optimally to extend the map (the "next-best-view" problem). To find this point, candidate target points around a frontier are generated. For each candidate point, the size of the area
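The frontier-cell definition from [8] translates directly into a grid scan. The following is a sketch under assumed cell coding (0 = free, 1 = occupied, -1 = unknown) and 4-connectivity; the paper and [8] do not prescribe these choices.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Return all free cells that have at least one unknown 4-neighbor."""
    rows, cols = grid.shape
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break  # one unknown neighbor is enough
    return frontiers

grid = np.array([
    [ 0,  0,  1],
    [ 0, -1, -1],
    [ 1, -1, -1],
])
print(frontier_cells(grid))  # -> [(0, 1), (1, 0)]
```

Grouping the returned cells into connected segments and discarding segments narrower than the robot would then yield the potential targets described above.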