©The 2023 International Conference on Artificial Life and Robotics (ICAROB2023), Feb. 9 to 12, on line, Oita, Japan
Modern Methods of Map Construction Using Optical Sensors Fusion
Ramil Safin, Tatyana Tsoy, Roman Lavrenov, Ilya Afanasyev
Department of Intelligent Robotics, Kazan Federal University, Kazan, Russia
Evgeni Magid
Department of Intelligent Robotics, Kazan Federal University, Kazan, Russia
Higher School of Economics University, Moscow, Russia
E-mail: safin.ramil@it.kfu.ru
Abstract
Map construction, or mapping, plays an important role in robotic applications. Mapping relies on inherently noisy
sensor measurements to construct an accurate representation of the surrounding environment. Generally, individual
sensors suffer from performance degradation under certain environmental conditions. Sensor fusion
provides statistically more accurate perception and mitigates such degradation by
combining data from multiple sensors of different modalities. This article reviews modern sensor fusion methods
for map construction applications based on optical sensors, such as cameras and laser range finders. State-of-the-art
mapping solutions built upon different mathematical theories and concepts, such as machine learning, are
considered.
Keywords: Sensor Fusion, Mapping, SLAM, Machine Learning, Camera, LiDAR
1. Introduction
Mapping is the process of constructing a map of an
environment using robot perception. There exist
multiple map representations, such as sparse point
clouds, topological maps, and dense voxel grids.
Mapping can be difficult due to adverse conditions
(e.g., low lighting) and the presence of dynamic objects.
Each type of sensor has its own limitations. For example,
cameras underperform in low-light environments, while
laser scanners provide comparatively sparse, low-resolution data. To
obtain reliable maps, the strengths of each sensor must be combined
to compensate for their weaknesses. Sensor
fusion, or data fusion, is a technique for combining data
from multiple sensors to obtain
more reliable and accurate information about the system
being measured (Fig. 1). Data fusion is used in many
robotics and machine vision applications, such as
autonomous navigation and localization of mobile
robots[1].
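As a toy illustration of the idea in Fig. 1 (not a method from the reviewed literature), the sketch below fuses two noisy range measurements, e.g., a camera-based depth estimate and a laser range finder reading, by inverse-variance weighting, the static special case of a Kalman update. The sensor values and variances are hypothetical; the point is that the fused variance is smaller than that of either sensor alone.

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance fusion of two scalar measurements.

    Each measurement is weighted by the reciprocal of its variance,
    so the more certain sensor contributes more to the estimate.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    z_fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    var_fused = 1.0 / (w1 + w2)  # always smaller than min(var1, var2)
    return z_fused, var_fused

# Hypothetical readings: camera estimates 2.10 m (variance 0.04 m^2),
# laser range finder reads 2.00 m (variance 0.01 m^2).
z, var = fuse(2.10, 0.04, 2.00, 0.01)
```

Here the fused estimate lies closer to the more precise laser reading, and its variance (0.008 m²) is below that of either sensor, which is the uncertainty reduction depicted in Fig. 1.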
The remainder of this article reviews modern sensor fusion
methods for map construction based on optical sensors,
such as cameras and laser scanners, and surveys
state-of-the-art mapping solutions built upon different
mathematical theories and concepts, including machine
learning.
Fig. 1. An illustration of a laser range finder and camera
sensor fusion. Uncertainty of the state is reduced due to
multiple sources of information.