MULTISENSOR DATA FUSION FOR AUTONOMOUS VEHICLE NAVIGATION IN RISKY ENVIRONMENTS

G.L. Foresti, Senior Member, IEEE, and Carlo S. Regazzoni#, Senior Member, IEEE

Department of Mathematics and Computer Science (DIMI), University of Udine, Via delle Scienze 208, 33100 Udine, Italy
# Department of Biophysical and Electronic Engineering (DIBE), University of Genoa, Via all'Opera Pia 11A, I-16145 Genova, Italy

Abstract

This paper describes a multisensor data-fusion system for driving an autonomous earthwork vehicle operating in a sanitary landfill 1. The system acquires data from a set of ultrasonic sensors, a laser range finder, and several CCD cameras, and produces as output alarms that indicate potentially dangerous situations, e.g., the presence of fixed or mobile obstacles in the vehicle's working area. The proposed system adds important functionalities to the vehicle, such as avoiding terrain holes or downward slopes, and discriminating between heaps of waste to be compacted and other man-made obstacles. Data fusion increases the system's reliability and compensates for the inaccuracies and limited operating ranges of the individual sensors. Experimental results show the system's operation both under normal conditions and in the presence of dangerous situations. Moreover, the system's performance in adverse environmental conditions (e.g., rain, low lighting) has been evaluated.

Index Terms

Autonomous vehicle driving, multisensor data fusion, feature extraction, outdoor environments, ultrasonic sensors, laser range finder.

I. INTRODUCTION

RISK detection is a basic task for several computer-based systems, such as autonomous vehicles [1-4] and surveillance systems [5,6]. The ability of a robot to interact with and operate in unstructured environments, without complete control by a human operator, depends strictly on its capability to detect risky situations (e.g., fixed or mobile obstacles, holes or terrain slopes, etc.).
A commonly accepted observation is that such complex tasks cannot be solved by using a single sensor, and that a synergetic use of multiple sensors is mandatory. The decreasing cost and increasing performance of the sensors available today have made the aforesaid tasks feasible. Moreover, real-time processing and fusion of multisensor information are becoming possible thanks to the considerable progress made in the areas of distributed architectures and processing systems. The major advantage that a robot or an autonomous system can obtain by integrating information from multiple sensors is that the fused information is more accurate and can include features that may be impossible to acquire by using any individual sensor alone.

1 This research was partially funded by the CEE-ESPRIT project 6068 ATHENA (Advanced Teleoperation for eartHwork Equipment Navigation).

The use of multiple sensors