Intelligent Decision Technologies 15 (2021) 291–304
DOI 10.3233/IDT-200106, IOS Press

Lidar and radar fusion for real-time road-objects detection and tracking

Wael Farag a,b
a College of Engineering and Technology, American University of the Middle East, Kuwait
b Electrical Engineering Department, Cairo University, Egypt
E-mail: wael.farag@aum.edu.kw

Abstract. In this paper, a real-time road-Object Detection and Tracking (LR_ODT) method for autonomous driving, based on the fusion of lidar and radar measurement data, is proposed. The lidar and radar devices are installed on the ego car, and a customized Unscented Kalman Filter (UKF) is used to fuse their data. Lidars are accurate in determining objects' positions but significantly less accurate in measuring their velocities. Radars, by contrast, measure objects' velocities more accurately but localize them less precisely because of their lower spatial resolution. The proposed fusion approach therefore combines the merits of both sensors to provide precise pose and velocity data for objects moving on the road. The Grid-Based Density-Based Spatial Clustering of Applications with Noise (GB-DBSCAN) algorithm is used to detect objects and estimate their centroids from the raw lidar and radar data. The RANdom SAmple Consensus (RANSAC) algorithm then estimates each object's velocity and determines its corresponding geometric shape. The proposed technique is implemented in the high-performance language C++ and utilizes highly optimized math and optimization libraries for best real-time performance. The performance of the UKF fusion is compared to that of Extended Kalman Filter (EKF) fusion, showing its superiority. Simulation studies have been carried out to evaluate the performance of the LR_ODT for tracking bicycles, cars, and pedestrians.
Keywords: Sensor fusion, Kalman filter, object detection, object tracking, ADAS, autonomous driving

1. Introduction

Improving safety, lowering road accidents, boosting energy efficiency, enhancing comfort, and enriching the driving experience are the most important driving forces behind equipping present-day cars with Advanced Driving Assistance Systems (ADAS) [1,2]. Many ADAS functions represent incremental steps toward a hypothetical future of safe, fully autonomous cars [3–12].

A critical component of the various ADAS features, and one that is also highly required in autonomous cars, is the recognition and accurate assessment of the surroundings [58]. This component depends on data observed from sensors mounted on the ego car [9]. If there is an object close by, it is of interest to know where that object is, what its velocity is, and whether it can be described by a plain geometric shape [12]. Lidar and radar are among the most sought-after sensors for ADAS and autonomous-car features [13]. A lidar typically returns many concentrated detection points (a point cloud) describing each detected object [14,15]. Likewise, a radar often returns multiple detections per target, though not as densely as a lidar [16]. This means that detections originating from the same target must be grouped, i.e. clustered, to obtain information about the surroundings [16,17].

The ego car, equipped with a lidar and a radar, receives a collection of raw sensor measurements that contain information about detected road objects. The proposed LR_ODT method then employs a two-step approach to find and identify these road objects within the received data. The first step is to coarsely cluster the lidar and radar raw data separately, detecting the objects within by means of the Grid-Based Density-Based Spatial Clustering of Applications with Noise (GB-DBSCAN) algorithm.