SLAM for Robotic Assistance to Fire-Fighting Services

Sid Ahmed Berrabah *,**, Yvan Baudoin *, Hichem Sahli **
* Royal Military Academy of Belgium (RMA), Av. de la Renaissance 30, B-1000 Brussels, Belgium
Corresponding author E-mail: sidahmed.berrabah@rma.ac.be
** Vrije Universiteit Brussel (VUB), Pleinlaan 2, B-1050 Brussels, Belgium

Abstract - In the event of an emergency, due to a fire or other crisis, a necessary but time-consuming pre-requisite, which could delay the actual rescue operation, is to establish whether the ground can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots whose primary task is to gather data. The robots are equipped with sensors that detect the presence of chemicals; in parallel, image data are collected and forwarded to an advanced base station. One of the problems tackled in this project is robot navigation. The robot used for the outdoor scenario is equipped with a set of sensors: camera, GPS, inertial navigation system (INS), wheel encoders, and ultrasound sensors. The robot uses a Simultaneous Localization and Mapping (SLAM) approach to combine the data from the different sensors for accurate positioning. This paper gives an overview of the proposed algorithm.

Index Terms - VIEW-FINDER Project, Risky Intervention, Mobile Robots, Visual Simultaneous Localization and Mapping

I. INTRODUCTION

The objective of the VIEW-FINDER project is to develop and use advanced robotic systems, equipped with a wide array of sensors and cameras, to explore a crisis ground in order to understand and reconstruct the investigated scene and thus to improve decision making. Using robots in this type of scenario requires high-precision positioning. This contribution addresses the improvement of mobile robot positioning accuracy using a SLAM approach. The SLAM algorithm uses data from a single monocular camera together with data from other sensors (Global Positioning System (GPS), Inertial Navigation System (INS) and wheel encoders) for robot localization in large-scale environments.

The SLAM problem is formulated as a stochastic estimation problem and has been addressed with approaches based on Bayesian filtering [1-5]. The main drawback of these approaches is that their computational complexity grows with the size of the mapped space, which limits their applicability in large-scale areas. In the case of vision-based SLAM approaches, additional challenges have to be tackled, such as the high rate of the input data, the inherent 3D nature of visual data, the lack of direct depth measurement and the difficulty of extracting long-term features to map.

In this project we are concerned with robot navigation in large outdoor environments. We therefore propose to build several size-limited local maps and to combine them into a global map using a 'history memory', which accumulates sensory evidence over time in order to identify places with a stochastic model of the correlation between map features. In our implementation, the dynamic model of the camera takes into account that the camera is mounted on top of a mobile robot which moves on a ground plane. The SIFT algorithm [6] is used for feature detection. The GPS data, when available, are used to help localize the robot and features in satellite images, while the data from the inertial sensor and the wheel encoders are incorporated into the vehicle model.
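As a minimal illustration of the SIFT feature-detection step (a sketch, not the project's actual code), the following Python/OpenCV fragment extracts keypoints and descriptors from a single grey-scale frame; the file name "frame.png" and the keypoint limit are placeholder assumptions.

    import cv2

    # Placeholder input image; in the project this would be a frame
    # grabbed from the robot's monocular camera.
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # Detect SIFT keypoints and compute their 128-D descriptors,
    # keeping only the 500 strongest responses.
    sift = cv2.SIFT_create(nfeatures=500)
    keypoints, descriptors = sift.detectAndCompute(img, None)

    print(len(keypoints), "SIFT features detected")

Each keypoint provides an image location, scale and orientation that can seed a candidate landmark in the map, while its descriptor allows the same landmark to be re-identified in later frames.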
II. SYSTEM MODELING AND FEATURE EXTRACTION

In our application, a camera is fixed on top of a mobile car-like robot "ROBUDEM" (Fig. 1). The vehicle travels through the environment using its sensors to observe features around it. A world coordinate frame is defined such that its X and Z axes lie in the ground plane and its Y axis points vertically upwards.

Fig. 1: The robot used in the VIEW-FINDER project.

The system state vector of the camera is defined by the 3D position of the head center in world frame coordinates and the robot's orientations roll, pitch and yaw about the Z, X and Y axes, respectively:

X_v = (x, y, z, φ, θ, ψ)^T

The dynamic model, or motion model, is the relationship between the robot's previous state, X_v(k-1), and its current state, X_v(k), given a control input u(k):

X_v(k) = f(X_v(k-1), u(k)) + w(k)   (1)

where f is a function representing the mobility, kinematics and dynamics of the robot (transition function) and w(k) is a random vector describing the unmodelled aspects of the vehicle (process noise such as wheel slip or odometry error).
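As a hedged illustration of the prediction step in Eq. (1), the following Python sketch propagates the state with an odometry increment and additive Gaussian process noise. It is a simplifying assumption rather than the model actually used on the robot: the control u(k) is taken here as a planar increment (dx, dy, dyaw) in the robot frame derived from the wheel encoders/INS, the motion is assumed to remain in the ground plane, and the first two state components are treated as the planar coordinates with yaw as the heading about the vertical axis.

    import numpy as np

    # Illustrative prediction step for Eq. (1): X_v(k) = f(X_v(k-1), u(k)) + w(k).
    # Assumption (not the paper's exact model): u(k) = (dx, dy, dyaw) is a planar
    # odometry increment in the robot frame, from the wheel encoders / INS.
    def predict_state(state, u, noise_std, rng=np.random.default_rng()):
        x, y, z, roll, pitch, yaw = state
        dx, dy, dyaw = u

        # Rotate the planar increment from the robot frame into the world frame.
        c, s = np.cos(yaw), np.sin(yaw)
        predicted = np.array([x + c * dx - s * dy,
                              y + s * dx + c * dy,
                              z, roll, pitch, yaw + dyaw])

        # Additive process noise w(k) modelling wheel slip / odometry error.
        return predicted + rng.normal(0.0, noise_std, size=6)

    # Example: from the origin, move 0.5 m forward and turn by 0.1 rad.
    state = predict_state(np.zeros(6), u=(0.5, 0.0, 0.1),
                          noise_std=[0.02, 0.02, 0.0, 0.0, 0.0, 0.01])

The system dynamic model in our case, considering the control as identity, is given by: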