Liborg: a Lidar-based Robot for Efficient 3D Mapping

Michiel Vlaminck, Hiep Luong, and Wilfried Philips
Image Processing and Interpretation (IPI), Ghent University, imec, Ghent, Belgium

ABSTRACT

In this work we present Liborg, a spatial mapping and localization system that is able to acquire 3D models on the fly using data originating from lidar sensors. The novelty of this work lies in the highly efficient way we deal with the tremendous amount of data, guaranteeing fast execution times while preserving sufficiently high accuracy. The proposed solution relies on a multi-resolution technique built on octrees. The paper discusses and evaluates the main benefits of our approach, including its efficiency in building and updating the map and its compactness in storing it. In addition, the paper presents a working prototype consisting of a robot equipped with a Velodyne Lidar Puck (VLP-16) and controlled by a Raspberry Pi, serving as an independent acquisition platform.

Keywords: 3D mapping, ICP, lidar, multi-resolution, octree

1. INTRODUCTION

Many of today's applications require accurate and fast 3D reconstructions of large-scale environments such as industrial plants, critical infrastructure (bridges, roads, dams, tunnels), public buildings, etc. These 3D reconstructions allow for further analysis of the scene, e.g. to detect wear or damage on road surfaces or in tunnels. The 3D models can also be used to organize or monitor events in conference venues or other event halls. Finally, in the domain of intelligent vehicles, 3D maps of the environment can facilitate autonomous driving. Unfortunately, 3D mapping is currently still an expensive and time-consuming process, as it is often done using static laser scanning, which requires many different viewpoints as well as substantial manual intervention and tuning.
Oftentimes it is also difficult to map the entire area in detail; there are always parts that are too difficult to reach. Motivated by these shortcomings, we present our Liborg platform, a spatial mapping and localization system that is able to acquire 3D models on the fly using lidar data. As a prototype, we built our own four-wheel robot equipped with a Velodyne Lidar Puck (VLP-16) and controlled by a Raspberry Pi, serving as an independent acquisition platform. The robot is able to drive autonomously but can also be operated by remote control. Currently, the data is streamed to a server where the processing is done. In the future we plan to integrate an NVIDIA Jetson TX1 on the robot in order to do the processing on board. Figure 1 shows two images of our Liborg robot with the scanner mounted at different tilt angles.

In previous work,1 a mobile mapping system was presented that operates online and gives autonomous vehicles the ability to map their surroundings. In that work, a Velodyne HDL-32e lidar scanner was used to capture the environment. The focus was mainly on the lidar odometry and no attention was given to the map data structure itself. This work extends the former by keeping a compact global map of the environment in memory that is continuously updated by fusing newly acquired point clouds. This fusion improves the map by reducing noise and correcting small errors made during pose estimation. It also helps to estimate future poses more accurately.

In order to guarantee fast execution times, we propose to organize the map as a hierarchical octree that serves as a compact representation of the environment. This paper explains how the octree-based map can be exploited to speed up the estimation of the current pose of the robot without sacrificing accuracy. In addition, we discuss how our solution is generic in the sense that no specific sensor set-up is needed.
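To make the idea concrete, the following is a minimal sketch of an octree-based map in which each finest-resolution voxel fuses all points that fall into it by keeping a running centroid, which both compresses the map (one point per occupied voxel) and suppresses sensor noise. This is an illustrative example only, not the authors' implementation; all class and method names (OctreeMap, insert, query, leaf_half_size) are hypothetical.

```python
# Hypothetical sketch of a multi-resolution octree map. Each leaf voxel
# stores the running centroid of the points fused into it.

class OctreeNode:
    def __init__(self, center, half_size):
        self.center = center            # (x, y, z) centre of this cube
        self.half_size = half_size      # half the cube's edge length
        self.children = [None] * 8      # one slot per octant
        self.mean = (0.0, 0.0, 0.0)     # running centroid (leaves only)
        self.count = 0                  # number of points fused in

class OctreeMap:
    def __init__(self, center, half_size, leaf_half_size=0.05):
        self.root = OctreeNode(center, half_size)
        self.leaf_half_size = leaf_half_size  # finest resolution, e.g. 5 cm

    @staticmethod
    def _octant(node, p):
        # 3-bit index (x | y<<1 | z<<2) selecting the child cube holding p.
        return ((p[0] > node.center[0]) |
                ((p[1] > node.center[1]) << 1) |
                ((p[2] > node.center[2]) << 2))

    def _descend(self, p, create):
        # Walk from the root to the leaf voxel containing p, optionally
        # creating intermediate nodes along the way.
        node = self.root
        while node.half_size > self.leaf_half_size:
            i = self._octant(node, p)
            if node.children[i] is None:
                if not create:
                    return None
                h = node.half_size / 2.0
                child_center = tuple(
                    c + (h if p[d] > c else -h)
                    for d, c in enumerate(node.center))
                node.children[i] = OctreeNode(child_center, h)
            node = node.children[i]
        return node

    def insert(self, p):
        """Fuse point p into its voxel by updating the running centroid."""
        leaf = self._descend(p, create=True)
        leaf.count += 1
        k = 1.0 / leaf.count
        leaf.mean = tuple(m + k * (p[d] - m) for d, m in enumerate(leaf.mean))

    def query(self, p):
        """Return (centroid, count) of the voxel containing p, or None."""
        leaf = self._descend(p, create=False)
        return (leaf.mean, leaf.count) if leaf else None
```

Because points only ever descend log-depth paths and empty space allocates no nodes, updates stay fast and the stored map stays compact, in the spirit of the fusion and compression benefits discussed above.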
The sensor can thus be mounted in any orientation without any additional requirements. Furthermore, no assumptions are made on the type of environment. Finally, we conducted an experimental study using our Liborg robot and evaluated our system in terms of both processing time and accuracy.

Further author information: Michiel Vlaminck: E-mail: michiel.vlaminck@ugent.be