Local Navigation in Rough Terrain using Omnidirectional Height

Max Schwarz, max.schwarz@uni-bonn.de
Sven Behnke, behnke@cs.uni-bonn.de
Rheinische Friedrich-Wilhelms-Universität Bonn
Computer Science Institute VI, Autonomous Intelligent Systems
Friedrich-Ebert-Allee 144, 53113 Bonn

Abstract

Terrain perception is essential for navigation planning in rough terrain. In this paper, we propose to generate robot-centered 2D drivability maps from eight RGB-D sensors measuring the 3D geometry of the terrain 360° around the robot. From a 2.5D egocentric height map, we assess drivability based on local height differences on multiple scales. The maps are then used for local navigation planning and precise trajectory rollouts. We evaluated our approach during the DLR SpaceBot Cup competition, where our robot successfully navigated through a challenging arena, and in systematic lab experiments.

1 Introduction

Most mobile robots operate on flat surfaces, where obstacles can be perceived with horizontal laser-range finders. As soon as robots are required to operate in rough terrain, locomotion and navigation become considerably more difficult. In addition to absolute obstacles, which the robot must avoid at all costs, the ground is uneven and contains obstacles with gradual cost, which may be hard or risky to overcome, but can be traversed if necessary.

To find traversable and cost-efficient paths, perception of the terrain between the robot and its navigation goal is essential. As the direct path might be blocked, omnidirectional terrain perception is desirable. To this end, we equipped our robot, shown in Fig. 1, with eight RGB-D cameras for measuring 3D geometry and color in all directions around the robot simultaneously. The high data rate of the cameras constitutes a computational challenge, though.

In this paper, we propose efficient methods for assessing drivability based on the measured 3D terrain geometry. We aggregate omnidirectional depth measurements into robot-centric 2.5D height maps and compute navigation costs from height differences on multiple scales; a simplified sketch of this computation is given at the end of this section. The resulting 2D local drivability map is used to plan cost-optimal paths to waypoints, which are provided by an allocentric terrain mapping and path planning method that relies on the measurements of the 3D laser scanner of our robot [10].

We evaluated the proposed local navigation approach in the DLR SpaceBot Cup, a robot competition hosted by the German Aerospace Center (DLR). We also conducted systematic experiments in our lab in order to illustrate the properties of our approach.
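To make the multi-scale assessment concrete, the following Python sketch derives a drivability cost map from a robot-centric 2.5D height map by comparing the maximum local height difference in neighborhoods of increasing radius. It is a minimal illustration and not our actual implementation: the function name, the neighborhood radii, and the height-difference limits at which a cell saturates to obstacle cost are assumptions chosen for readability.

import numpy as np
from scipy import ndimage

def drivability_cost(height_map, radii_cells=(1, 3, 6),
                     max_diff=(0.05, 0.10, 0.20)):
    """Return a cost map in [0, 1] (1 = untraversable) from a 2.5D height map.

    height_map : 2D array of terrain heights in metres on a robot-centric grid
                 (assumed fully observed; unknown cells need separate handling).
    radii_cells: neighborhood radii (in cells) defining the scales.
    max_diff   : height difference (m) at which each scale saturates to cost 1.
    """
    cost = np.zeros_like(height_map)
    for r, limit in zip(radii_cells, max_diff):
        size = 2 * r + 1
        # Largest height difference within the (2r+1) x (2r+1) neighborhood.
        diff = (ndimage.maximum_filter(height_map, size=size)
                - ndimage.minimum_filter(height_map, size=size))
        # Gradual cost: small steps stay cheap, large steps become obstacles.
        cost = np.maximum(cost, np.clip(diff / limit, 0.0, 1.0))
    return cost

Combining the scales with a maximum keeps the resulting map conservative: a cell is only cheap to traverse if it looks drivable at every scale, while a given height difference spread over a large neighborhood (a gentle slope) remains cheaper than the same difference between adjacent cells (a step or wall).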
Figure 1: Explorer robot for mobile manipulation in rough terrain. The sensor head consists of a 3D laser scanner, eight RGB-D cameras, and three HD cameras.

2 Related Work

Judging the traversability of terrain and avoiding obstacles with robots, especially planetary rovers, has been investigated before. Chhaniyara et al. [1] provide a detailed survey of different sensor types and soil characterization methods. Most commonly employed are LIDAR sensors, e.g. [2, 3], which combine a wide depth range with high angular resolution. Chhaniyara et al. investigate LIDAR systems and conclude that they offer higher measurement density than stereo vision, but do not allow terrain classification based on color. Our RGB-D terrain sensor provides high-resolution combined depth and color measurements at high rates in all directions around the robot.

Further LIDAR-based approaches include Kelly et al. [6], who use a combination of LIDARs on the ground vehicle and an airborne LIDAR on a companion aerial vehicle. The acquired 3D point cloud data is aggregated into a 2.5D height map for navigation, and obstacle cell costs are computed proportionally to the estimated local slope, similar to our approach. The additional viewpoint from the aerial vehicle was found to be a great advantage.
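The point-cloud-to-height-map aggregation mentioned above, used both by Kelly et al. and in our pipeline, can be sketched compactly: 3D points expressed in the robot frame are binned into a horizontal grid, and each cell keeps a statistic of the heights falling into it. The sketch below is illustrative only; the function name, cell size, map extent, and the choice of the per-cell maximum as the height statistic are assumptions, not the parameters of either system.

import numpy as np

def aggregate_height_map(points, cell_size=0.05, extent=8.0):
    """Bin robot-centric 3D points (N x 3, metres) into a 2.5D height grid.

    Returns an (n, n) array holding the highest measured z per cell;
    cells without any measurement are NaN.
    """
    n = int(extent / cell_size)
    # Convert x/y coordinates to grid indices centred on the robot.
    ij = np.floor(points[:, :2] / cell_size).astype(int) + n // 2
    valid = np.all((ij >= 0) & (ij < n), axis=1)
    ij, z = ij[valid], points[valid, 2]

    # Keep the highest z per cell, which is conservative with respect to obstacles.
    flat = np.full(n * n, -np.inf)
    np.maximum.at(flat, ij[:, 0] * n + ij[:, 1], z)
    height_map = flat.reshape(n, n)
    height_map[np.isneginf(height_map)] = np.nan
    return height_map

Cells that received no measurement stay NaN and would have to be filled in or treated as obstacles before a cost function such as the drivability sketch in Section 1 can be applied to the grid.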