Integrated real-time vision-based preceding vehicle detection in urban roads

Yanwen Chong a,⁎, Wu Chen b, Zhilin Li b, William H.K. Lam c, Chunhou Zheng d, Qingquan Li a

a State Key Laboratory for Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
b Department of Land Surveying and Geoinformatics, Hong Kong Polytechnic University, Kowloon, Hong Kong, China
c Department of Civil and Structural Engineering, Hong Kong Polytechnic University, Kowloon, Hong Kong, China
d College of Information and Communication Technology, Qufu Normal University, Rizhao, Shandong, China

Article info: Available online 9 October 2012

Keywords: Feature extraction; Shadow boundary; Vehicle tracking; Vehicle detection

Abstract: This paper presents a solution algorithm for the real-time operation of vision-based preceding vehicle detection systems. The algorithm contains two main components: vehicle detection and vehicle tracking. Vehicle detection is achieved by using vehicle shadow features to define a region of interest (ROI). Methods such as histogram equalization, ROI entropy and the mean of the edge image are adopted to determine the exact vehicle rear box. In this way, vehicles can be detected in video images. In the vehicle tracking process, the predicted box is verified and updated, and important parameters such as the relative distance, velocity, and the number and type of the tracked vehicles are extracted. The proposed solution algorithm has been tested under different traffic conditions in Hong Kong urban areas. Test results demonstrate that the proposed algorithm achieves good detection accuracy and satisfactory computational performance.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

Vision-based preceding vehicle detection systems have many applications.
For example, they can be used to assist drivers in perceiving potentially dangerous situations, and thus in avoiding accidents, by sensing and understanding the environment around the vehicle [17]. Currently, monitoring traffic conditions using video images captured at fixed locations is common practice [8–11]. Analyzing video sequences of traffic flow in a dynamic setting (i.e., with the camera installed on a moving vehicle) offers considerable improvements over existing methods of traffic data collection and road traffic monitoring. By detecting vehicles in road networks, real-time traffic parameters, such as the presence and number of vehicles, speed distribution data, turning traffic flows at intersections, queue lengths, and space and time occupancy rates, can be acquired and analyzed. In autonomous vehicle guidance, knowledge of the road geometry allows a vehicle to follow its route, and the detection of road obstacles becomes a necessary and important task for avoiding collisions with other vehicles [12].

Most visual vehicle detection systems follow two basic steps: Hypothesis Generation (HG), which hypothesizes the locations of vehicles in images, and Hypothesis Verification (HV), which verifies these hypothesized locations [13]. Algorithms for vehicle detection based on computer vision can be classified into three groups: model-based, learning-based and feature-based methods. The model-based method matches vehicle candidates in images against various vehicle models stored in the computer. However, this method relies on detailed geometric object models, and it is unrealistic to build such models for every vehicle that could be found on the roadway [14–16]. The learning-based method trains the system with typical images, and the trained classifier is then used to identify test images [17–22]. It is usually employed to confirm detection.
That is, the trained classifier is used to confirm whether an extracted ROI is a vehicle or not. If the ROIs are not first extracted by a detection algorithm, the whole image has to be scanned, which is very slow [23]. Feature-based methods detect vehicles by identifying certain sub-features of the vehicles, such as distinguishable points or lines, symmetry, edges, and shadows [24–27]. The advantage of this approach is that some features of a moving object remain visible even in the presence of partial occlusion. Furthermore, the same algorithms can be used for detection in daylight, twilight or night-time conditions. The approach is self-regulating because it selects the most salient features under the given conditions. Its main drawback is that if the features are not sufficiently present in the image, the vehicle will not be detected.

Developing a reliable system for preceding vehicle detection using monocular vision is a difficult task, as view depth information is unavailable. Also, buildings and trees surrounding

⁎ Corresponding author.
E-mail addresses: apollobest@126.com (Y. Chong), lswuchen@inet.polyu.edu.hk (W. Chen), lslzli@inet.polyu.edu.hk (Z. Li), cehklam@polyu.edu.hk (W.H.K. Lam), zhengch99@126.com (C. Zheng), qqli@whu.edu.cn (Q. Li).

Neurocomputing 116 (2013) 144–149. http://dx.doi.org/10.1016/j.neucom.2011.11.036
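The abstract names two cues for confirming a candidate vehicle rear box: the entropy of the ROI and the mean of its edge image. The following is a minimal sketch of how such cues could be computed, not the authors' implementation; the function names and the NumPy-only gradient operator (standing in for whatever edge detector the paper uses) are my own assumptions.

```python
import numpy as np

def roi_entropy(roi):
    """Shannon entropy (bits) of an 8-bit grayscale ROI's intensity histogram.
    A flat, textureless patch scores near 0; a detailed patch scores higher."""
    hist, _ = np.histogram(roi, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

def mean_edge(roi):
    """Mean gradient magnitude of the ROI, a simple 'mean of edge image'
    measure; a vehicle rear with strong edges yields a larger value."""
    gy, gx = np.gradient(roi.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))

# A uniform patch carries little information; a textured patch carries more.
flat = np.full((32, 32), 128, dtype=np.uint8)
noisy = np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)
assert roi_entropy(flat) == 0.0
assert roi_entropy(noisy) > roi_entropy(flat)
assert mean_edge(noisy) > mean_edge(flat)
```

In a verification step, both scores would typically be compared against thresholds (values the paper would tune empirically) to reject textureless, edge-poor candidates such as road surface patches.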