Adaptive Vehicle Detection for Real-time Autonomous Driving System

Maryam Hemmati, Dept. of Electrical & Computer Engineering, University of Auckland, Auckland, New Zealand, m.hemmati@auckland.ac.nz
Morteza Biglari-Abhari, Dept. of Electrical & Computer Engineering, University of Auckland, Auckland, New Zealand, m.abhari@auckland.ac.nz
Smail Niar, LAMIH/CNRS, Polytechnic University Hauts-de-France, Valenciennes, France, smail.niar@uphf.fr

Abstract—Modern cars are being equipped with powerful computational resources for autonomous driving systems (ADS), one of their major components, to provide safer travel on roads. The high accuracy and real-time requirements of ADS are addressed by a HW/SW co-design methodology, which offloads the computationally intensive tasks to hardware. However, limited hardware resources can become a bottleneck in complex systems. This paper presents a dynamically reconfigurable system for ADS that is capable of real-time vehicle and pedestrian detection. Our approach employs different vehicle detection methods under different lighting conditions to achieve better results. A novel deep learning method is presented for the detection of vehicles in dark conditions where road lighting is very limited or unavailable. We present a partial reconfiguration (PR) controller that accelerates the reconfiguration process on the Zynq SoC for seamless detection in real-time applications. By partially reconfiguring the vehicle detection block on the Zynq SoC, resource requirements are kept low enough to allow the other ADS functionalities to remain on the hardware and complete their tasks without interruption. The presented system is capable of detecting pedestrians and vehicles in different lighting conditions at a rate of 50 fps (frames per second) for HDTV (1080x1920) frames.
I. INTRODUCTION

Autonomous driving systems (ADS) are becoming more popular as they provide more accurate detection and, consequently, become more reliable. Robust and reliable detection requires many different circumstances to be taken into account within the detection algorithm, which results in more intensive computation. Considering the stringent real-time requirements of such systems, dedicated hardware accelerators with parallel and pipelined architectures seem to be an inevitable option. However, resource limitations in the implementation of parallel architectures can become an additional bottleneck during the system-level design of ADS. On the other hand, many of the features available on these sophisticated systems are not employed all the time or in all driving environments. For instance, animal detection could be a useful ADS feature since, on some countryside roads, animals might appear and cross the road; however, this feature may not be needed when the driving area is limited to urban roads. Moreover, even driving in the same area does not guarantee the effectiveness of a single algorithm, as the lighting condition changes dynamically and affects the quality of the images captured by the vision cameras. In these situations, a system with dynamic capabilities is an advantage: it maintains a rich set of features while overcoming the resource constraints of the system. We present a real-time adaptive detection system that partially reconfigures the vehicle detection module and employs the most suitable detection algorithm for various environmental conditions. Our system uses a machine learning approach and consists of a static part as well as a dynamically reconfigurable part.
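The adaptive selection described above can be sketched as a simple mode selector driven by a light-intensity signal. This is an illustrative sketch only: the mode names, thresholds, and function names (`select_detection_mode`, `on_light_change`) are our assumptions, not taken from the paper.

```python
# Hypothetical sketch of the adaptive algorithm selection described above.
# Mode names, threshold values, and function names are illustrative
# assumptions, not taken from the paper.
from enum import Enum

class DetectionMode(Enum):
    DAY = "day"        # full-scene appearance-based detector
    DUSK = "dusk"      # intermediate low-light detector
    NIGHT = "night"    # taillight-based detector for dark scenes

# Illustrative light-intensity thresholds (e.g. from an ambient light sensor).
DUSK_THRESHOLD = 120
NIGHT_THRESHOLD = 40

def select_detection_mode(light_intensity: int) -> DetectionMode:
    """Map an external light-intensity reading to a detection algorithm."""
    if light_intensity < NIGHT_THRESHOLD:
        return DetectionMode.NIGHT
    if light_intensity < DUSK_THRESHOLD:
        return DetectionMode.DUSK
    return DetectionMode.DAY

def on_light_change(current_mode, new_intensity, reconfigure):
    """Trigger partial reconfiguration only when the required mode changes,
    so the static part (capture, pedestrian detection) keeps running."""
    new_mode = select_detection_mode(new_intensity)
    if new_mode is not current_mode:
        reconfigure(new_mode)   # load the partial bitstream for new_mode
    return new_mode
```

Gating the reconfiguration on an actual mode change, rather than on every sensor reading, avoids needless reloads of the reconfigurable region while the lighting condition is stable.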
The static part, which includes data capture and pedestrian detection, continues its operation during the reconfiguration intervals and guarantees the real-time and safe behavior of the system, which is essential in safety-critical systems such as ADS. The reconfigurable part is designed to detect vehicles with a choice of three different algorithms, each suitable for a particular environmental lighting condition. An external signal indicating light-intensity changes triggers the reconfiguration of the hardware accelerator on the programmable logic (PL), which is carried out by the processing system (PS) on the Zynq SoC. Our contribution is twofold. First, we present a novel method for the detection of vehicles in very dark environments, which uses deep belief networks (DBN) to detect the presence of taillights in the thresholded image. A selection of detected taillights, based on their size features and their distance, is fed to a support vector machine (SVM) classifier to localize the vehicle. Second, we present a PR controller on the Zynq PL that transfers the partial bit frames from the PL DDR through the DMA engine to avoid the delay usually introduced by the PS interconnect during the reconfiguration process. This results in a speedup of more than 2.6x in reconfiguration throughput [1]. The rest of the paper is organized as follows. Section II reviews state-of-the-art driver assistance systems and vehicle detection algorithms. Section III explains our method for vehicle detection in different conditions, followed by the implementation of the hardware accelerators. The system-level implementation and reconfiguration process, as well as the static and reconfigurable partitions on the FPGA, are explained in Section IV. Concluding remarks are given in Section V.

978-3-9819263-2-3/DATE19/©2019 EDAA
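The dark-scene detection pipeline described in the first contribution above (threshold the image, extract bright taillight candidates, form size/distance features for candidate pairs, classify) can be sketched end to end. This is a runnable illustration of the data flow only: the paper uses a DBN for taillight detection and an SVM for localization, whereas here both are replaced by simple stand-ins, and all names and thresholds are our assumptions.

```python
# Illustrative sketch of the dark-scene pipeline: threshold -> bright-blob
# (taillight candidate) extraction -> size/distance features -> classification.
# The DBN and SVM of the paper are replaced by simple stand-ins; all names
# and thresholds are assumptions made for illustration.

def threshold_frame(frame, level):
    """Binarize a grayscale frame (list of rows); taillights stay bright."""
    return [[1 if px >= level else 0 for px in row] for row in frame]

def extract_blobs(binary):
    """4-connected component labeling; returns bounding boxes of bright blobs."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack, xs, ys = [(y, x)], [], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    xs.append(cx); ys.append(cy)
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append((min(xs), min(ys), max(xs), max(ys)))
    return blobs

def pair_features(a, b):
    """Size and distance features for a candidate taillight pair."""
    area = lambda box: (box[2] - box[0] + 1) * (box[3] - box[1] + 1)
    cx = lambda box: (box[0] + box[2]) / 2
    return (abs(area(a) - area(b)), abs(cx(a) - cx(b)))

def classify_pair(features, max_area_diff=4, min_dist=3):
    """Stand-in for the SVM: accept pairs of similar size, far enough apart."""
    area_diff, dist = features
    return area_diff <= max_area_diff and dist >= min_dist

def detect_vehicles(frame, level=200):
    """Return merged bounding boxes of taillight pairs classified as vehicles."""
    blobs = extract_blobs(threshold_frame(frame, level))
    vehicles = []
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            if classify_pair(pair_features(blobs[i], blobs[j])):
                a, b = blobs[i], blobs[j]
                vehicles.append((min(a[0], b[0]), min(a[1], b[1]),
                                 max(a[2], b[2]), max(a[3], b[3])))
    return vehicles
```

The sketch mirrors the pipeline's structure rather than its accuracy: in the actual system the candidate detector and the pair classifier are learned models (DBN and SVM) implemented as hardware accelerators in the reconfigurable partition.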