Abstract—This paper addresses the problem of finding the
host vehicle’s lateral position on a multi-lane road, using
information obtained by processing video sequences. A very
important cue for lane identification is the class of the
boundaries of the current lane. This paper presents a reliable
solution for lane boundary type identification, based on
frequency analysis of the gray level profile of these boundaries,
assuming that the current lane is already detected. The lane
boundary information is combined with the obstacle
information, through a Bayesian Network which will output,
frame by frame, the probability of the vehicle to be positioned
on each lane of the road. The probability result will be
propagated throughout the sequence by a Particle Filter.
I. INTRODUCTION
Advanced Driving Assistance Systems can significantly
improve the driving experience, while also increasing
the overall traffic safety. An important prerequisite for any
ADAS action is the proper assessment of the situation of the
host vehicle and of the surrounding traffic. Part of this
situation assessment is the knowledge about the host vehicle’s
position on the road. There are several systems that can help
us to gain this knowledge: satellite navigation systems can
provide a rough position estimate, inertial systems can fill in
the gaps of satellite positioning (and filter the estimates), and
lane detection systems can tell us the position within a lane.
With the help of a map, we can infer an approximate position
on the road, or at least we can tell we are on the side of the
road corresponding to our direction of driving. Unfortunately,
when we have multiple lanes for a driving direction, the
problem is not that simple: the navigation systems are not
precise enough to tell us which lane we are on, the lane
detection systems may not detect all lanes, due to occlusions
from other vehicles, and the direction of our driving does not
help. In the literature, there exist several approaches for
accurately positioning the host vehicle on the road, and
estimating the lane on which the host vehicle is travelling.
Manuscript received January 27, 2012. This paper was supported by the
project "Doctoral studies in engineering sciences for developing the knowledge
based society-SIDOC" contract no. POSDRU/88/1.5/S/60078, project co-
funded from the European Social Fund through the Sectoral Operational
Program Human Resources 2007-2013, and by the POSDRU-EXCEL post-
doctoral program, financing contract POSDRU/89/1.5/S/62557.
Voichita Popescu, Radu Danescu and Sergiu Nedevschi are with the
Technical University of Cluj-Napoca, Computer Science Department (e-mail:
{firstname.lastname}@cs.utcluj.ro). Department address: Computer Science
Department, Str. Memorandumului nr. 28, Cluj-Napoca, Romania. Phone: +40
264 401484.
Generally, the solutions are based on GPS localization which
is then enhanced by additional vehicle and/or on-board
sensors such as inertial navigation systems, odometers, vision
sensors, inter-vehicle communication systems, and digital maps.
Different vision enhanced lane level positioning systems are
proposed in [1], [2], [3], [4]; these methods also use detailed
digital maps of the environment. A method for lane level
positioning based on inter-vehicle communication is
presented in [5].
This paper proposes an original solution for lane level
positioning on a multi-lane road, based on an on-board
stereovision processing system and an extended digital map
[6]. The contributions of this paper are a novel method for
lane boundary classification and an original method based
on a Bayesian Network (BN) for lane estimation. The
network is used for correlating the visual information with the
map information; additionally, the information about other
vehicles is used in the network for lane estimation. The frame
by frame results are tracked using a particle filter in order to
take into consideration the time evolving nature of the
problem. In this approach, roads with three to six lanes per
driving direction are considered. The solution is designed for
structured roads (roads with marked lane boundaries). Fig. 1
illustrates the overview of the proposed solution for on-road
position estimation.
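As a rough illustration of the idea only (not the paper's actual implementation), the following sketch maintains a belief over discrete lane hypotheses for a hypothetical three-lane road: each frame, noisy lane-boundary class observations weight the hypotheses, and a simple particle filter propagates the belief over time. All likelihood values, the lane-change probability, and the observation model are illustrative assumptions.

```python
import random

# Illustrative sketch: estimate the host vehicle's lane on a hypothetical
# 3-lane road by fusing noisy lane-boundary class observations and
# propagating the belief with a particle filter over discrete lane indices.
# All probabilities below are assumed values for illustration.
N_LANES = 3

def observation_likelihood(lane, obs):
    """P(observed boundary classes | host lane). obs = (left, right),
    each 'continuous' or 'dashed'."""
    # The leftmost lane should see a continuous line on its left and a
    # dashed line on its right; interior lanes see dashed lines on both sides.
    expect_left = 'continuous' if lane == 0 else 'dashed'
    expect_right = 'continuous' if lane == N_LANES - 1 else 'dashed'
    p = 0.8 if obs[0] == expect_left else 0.2   # classifier assumed noisy
    p *= 0.8 if obs[1] == expect_right else 0.2
    return p

def particle_filter_step(particles, obs, p_lane_change=0.1):
    """One predict / weight / resample cycle over discrete lane hypotheses."""
    # Predict: with small probability a particle drifts to an adjacent lane.
    moved = []
    for lane in particles:
        if random.random() < p_lane_change:
            lane = max(0, min(N_LANES - 1, lane + random.choice((-1, 1))))
        moved.append(lane)
    # Update: weight each hypothesis by the boundary-class evidence.
    weights = [observation_likelihood(l, obs) for l in moved]
    # Resample proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(particles))

random.seed(0)
particles = [random.randrange(N_LANES) for _ in range(1000)]
# Evidence consistent with the leftmost lane: continuous line on the left,
# dashed line on the right.
for _ in range(10):
    particles = particle_filter_step(particles, ('continuous', 'dashed'))
belief = [particles.count(l) / len(particles) for l in range(N_LANES)]
```

In the actual system the evidence comes from the Bayesian network's fusion of boundary classes, obstacle information, and the digital map; the sketch above only shows why temporal filtering concentrates the lane belief when frame-by-frame evidence is individually ambiguous.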
The system contains four functional blocks. The first block
delivers 3D information through stereo image processing. The
second block consists of the tracking and classification of the
obstacles and of the lane boundaries; this block provides the
evidence for the third block in the architecture. The third
block is the probabilistic reasoning block; it performs frame
by frame reasoning using a Bayesian network [7], [8]
approach. The fourth block performs a temporal filtering
(tracking) of the instantaneous beliefs provided by the static
On-Road Position Estimation by Probabilistic Integration of Visual Cues
Voichita Popescu, Radu Danescu, Sergiu Nedevschi

Fig. 1 Solution overview for on-road position estimation

2012 Intelligent Vehicles Symposium, Alcalá de Henares, Spain, June 3-7, 2012