IAES International Journal of Robotics and Automation (IJRA)
Vol. 12, No. 1, March 2023, pp. 84-97
ISSN: 2722-2586, DOI: 10.11591/ijra.v12i1.pp84-97

Model-based and machine learning-based high-level controller for autonomous vehicle navigation: lane centering and obstacle avoidance

Marcone Ferreira Santos 1, Alessandro Corrêa Victorino 2, Hugo Pousseur 2
1 Department of Mechanical Engineering, Federal University of Minas Gerais, Belo Horizonte, Brazil
2 CNRS, Heudiasyc (Heuristics and Diagnosis of Complex Systems), Université de Technologie de Compiègne, Compiègne, France

Article Info

Article history:
Received Sep 29, 2021
Revised Oct 3, 2022
Accepted Nov 18, 2022

Keywords:
Autonomous car
Autonomous navigation systems
Intelligent vehicles

ABSTRACT

Researchers have long been attempting to make cars drive autonomously. Environment perception, together with safe guidance and control, is one of the major challenges in developing this kind of system. Navigation methods for this problem fall into three types: geometric or physics-based models, machine learning-based models, and hybrids of the two. The hybrid approach combines the learning capability of machine learning models with the safety of geometric models in order to better perform the navigation task. This paper presents a hybrid autonomous navigation methodology that combines the learning capability of machine learning with the safety of the dynamic window approach, a geometric method. Using a single camera and a 2D lidar sensor, the method acts as a high-level controller: optimal vehicle velocities are found and then applied by a low-level controller. The final algorithm is validated in the CARLA simulator environment, where the system proved capable of guiding the vehicle through the following tasks: lane keeping and obstacle avoidance.
This is an open access article under the CC BY-SA license.

Corresponding Author:
Marcone Ferreira Santos
Department of Mechanical Engineering, Federal University of Minas Gerais
Belo Horizonte, Brazil
Email: marconefs@ufmg.br

1. INTRODUCTION

Navlab, the first car coupled with computer vision and a smart steering system, emerged in the 1980s at Carnegie Mellon University [1]. Since then, several attempts have been made to make fully autonomous vehicles safer, more efficient, and environmentally responsible. One of the most advanced smart embedded systems nowadays is found in [2], where a self-driving car has driven more than sixteen million kilometers autonomously.

The main tasks in developing an autonomous vehicle system are summarized as follows: environment perception, mapping and localization, motion planning, decision making, and control. Through images captured by one or more cameras, lidar, and other useful sensors, the perception task is designed to detect and understand the local environment in which the vehicle is driving. Some studies on the perception subject are found in [3], [5]. These tasks are carried out by modules in the embedded smart systems equipping an autonomous vehicle and are developed using scientific methodologies based on "model-based", "machine learning-based", or "hybrid-based" control methods.

Journal homepage: http://ijra.iaescore.com
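To make the high-level controller described in the abstract concrete, the following is a minimal sketch of a dynamic window approach (DWA) velocity search: candidate linear/angular velocity pairs reachable within one control period are forward-simulated, inadmissible ones near obstacles are discarded, and the lowest-cost pair is returned for a low-level controller to track. All parameter values, the cost weights, and the unicycle model below are illustrative assumptions, not the authors' implementation.

```python
import math

def dwa_select(v_cur, w_cur, obstacles, goal,
               v_max=10.0, a_max=2.0, w_max=0.5, aw_max=0.5, dt=0.5):
    """Pick the (v, w) pair in the dynamic window minimizing a simple cost.

    obstacles: list of (x, y) points in the vehicle frame (e.g., from 2D lidar)
    goal: (x, y) target point in the vehicle frame (e.g., a lane-center point)
    """
    best, best_cost = (0.0, 0.0), float("inf")
    # Dynamic window: velocities reachable within one control period dt.
    v_lo, v_hi = max(0.0, v_cur - a_max * dt), min(v_max, v_cur + a_max * dt)
    w_lo, w_hi = max(-w_max, w_cur - aw_max * dt), min(w_max, w_cur + aw_max * dt)
    for i in range(11):
        v = v_lo + (v_hi - v_lo) * i / 10
        for j in range(11):
            w = w_lo + (w_hi - w_lo) * j / 10
            # Forward-simulate one step of a unicycle model.
            x = v * dt * math.cos(w * dt)
            y = v * dt * math.sin(w * dt)
            # Clearance: distance from the predicted pose to the nearest obstacle.
            clear = min((math.hypot(x - ox, y - oy) for ox, oy in obstacles),
                        default=float("inf"))
            if clear < 0.5:
                continue  # inadmissible: too close to an obstacle
            # Cost: distance to goal, penalized by low clearance and low speed.
            cost = math.hypot(goal[0] - x, goal[1] - y) + 1.0 / clear - 0.1 * v
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best
```

For example, `dwa_select(5.0, 0.0, [(3.0, 0.0)], (20.0, 0.0))` steers around an obstacle directly ahead while still making progress toward the goal, whereas with no obstacles the search simply accelerates straight ahead.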