UNMC-VIER AutoVision Database
King Hann LIM, Anh Cat LE NGO, Kah Phooi SENG, Li-Minn ANG
Faculty of Engineering
The University of Nottingham, Malaysia Campus
Jalan Broga, 43500 Semenyih, Selangor Darul Ehsan.
{keyx7khl, kezklma, kezkps}@nottingham.edu.my
Abstract — Designing driver assistance systems has become a trend in
automotive technology to improve the safety and efficiency of
driving. However, there is no standard on-road database to verify
the performance and effectiveness of such algorithms. In this paper,
an automotive vision database is created to help researchers analyze
their designed algorithms in a more convincing way. The UNMC-VIER
AutoVision database comprises a series of single-view videos
embodying information on traffic signs, vehicles and
single/multiple lanes. In addition, multi-view videos, captured
with three cameras, are included in the database, providing more
visual information in a panoramic view of the traffic scene for
analysis. The standard setup and calibration of the database are
discussed in the paper, along with some applications of the
database.
Keywords — automotive; database; traffic sign; lane detection;
vehicle detection.
I. INTRODUCTION
Driver Assistance Systems (DASs), with the aid of optical sensors,
have recently gained much attention in automotive technology as a
step towards autonomous driving. These systems, which involve
partial human interaction, are able to automatically sense nearby
conditions (e.g. vehicle overtaking or the presence of traffic
signs), understand the physical road environment and immediately
trigger a warning to alert the driver to possible collisions,
traffic rule violations, and other worst-case scenarios [1].
Although these systems have been under intensive research for the
last two decades, the performance of DASs still degrades sharply
when they encounter unexpected road circumstances. Unlike most
intelligent visual systems, DASs operate mainly in varied real-time
outdoor environments. Real-time scenes pose many challenges, such
as rapid changes of illumination, variability of weather, and the
presence of arbitrary objects on the road. Moreover, the
performance of an individual algorithm is not comparable to that of
other algorithms because of the disparity of automotive databases.
This makes it difficult to argue convincingly for a newly created
DAS algorithm.
The Vision and Autonomous System Center (VASC) Image Database,
created by Carnegie Mellon University (CMU) [2], contains road
image sequences taken from the Navlab series of vehicles. It covers
several variable factors such as day and night road scenes, sunny
and rainy weather, and shadowed lanes. The database was mainly
developed for lane detection and road-following algorithms. CMU
also provides some car samples [3] for evaluating car detection
algorithms. Additional real road pictures can be obtained from
AARoads [4]. Besides these, there is an existing traffic sign
database [5] containing 48 images of size 360×270 in PNG format. It
is designed to test three classes, i.e. pedestrian crossing,
compulsory for bikes and intersection signs, where each class
contains 16 images. More traffic signs, covering standard US
highway signs, are available in [6]. However, none of these
databases completely represents the road environment. A standard
road database is therefore important for evaluating a newly
designed algorithm against existing DAS algorithms.
To ease the process of comparing results, a real-time automotive
vision database has been created in this paper. The Visual
Information Engineering Research (VIER) group of the University of
Nottingham Malaysia Campus (UNMC) has created the AutoVision
database, comprising a series of traffic scene videos. The
single-view videos embody information on traffic signs, vehicles
and single/multiple lanes. At the same time, multi-view recordings
of the traffic scene, fusing three optical sensors, provide more
on-road visual information for researchers to further investigate
road conditions. The videos cover variable weather, lane conditions
and various road sceneries. This database will help in developing
DASs through algorithm evaluation and performance improvement. In
general, the UNMC-VIER AutoVision database has the following
advantages: it is low-cost, extendable and video-based, and it
contains comprehensive traffic scenes with vehicles, traffic signs
and lanes in single and multiple views.
This paper is organized as follows: Section II describes the
prerequisite equipment used to create an automotive vision
database. Section III presents the video camera calibration
techniques for multi-view database creation. Section IV describes
the content of the UNMC-VIER AutoVision database. Some examples of
the use of the database are then presented, followed by the
conclusion and future work.
II. PREREQUISITE EQUIPMENT
The importance of having a real-time road database has motivated
the creation of the database in this paper. To capture real-time
traffic scenes on the road, a vehicle is necessary for the database
acquisition. A five-door Perodua Kelisa hatchback with a 989 cubic
capacity (c.c.) engine was used throughout the database
acquisition, as shown in Fig. 1(a). The height of the car is 142 cm
from the ground to the rooftop, its width is 149 cm and its length
is 349 cm [13].
During database recording, three camcorders are used to generate a
panoramic view of the traffic scene: one Panasonic SDR-S7
camcorder and two Panasonic SDR-SW20 camcorders.
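As a rough illustration of how the three synchronized views can be combined, the sketch below composes three frames side by side into a single wide image in Python with NumPy. This is a minimal, uncalibrated composition written for illustration only: the function name and the dummy frame sizes are assumptions, not part of the database, and a true panoramic view requires the camera calibration techniques discussed later in the paper.

```python
import numpy as np

def naive_panorama(left, centre, right):
    """Compose three synchronized camera frames side by side.

    A minimal, uncalibrated sketch: it simply concatenates the
    frames horizontally and assumes they share the same height and
    channel count. Proper geometric calibration of the cameras is
    needed for a genuine panoramic view.
    """
    for frame in (centre, right):
        assert frame.shape[0] == left.shape[0], "frames must share a height"
    return np.hstack([left, centre, right])

# Dummy 480x640 RGB frames standing in for the camcorder output.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
pano = naive_panorama(*frames)
print(pano.shape)  # (480, 1920, 3)
```

The three frames need not share a width; only the heights must match for horizontal concatenation, which is why the sketch checks heights alone.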
2010 International Conference on Computer Applications and Industrial Electronics (ICCAIE 2010), December 5-7, 2010, Kuala Lumpur, Malaysia
978-1-4244-9055-4/10/$26.00 ©2010 IEEE