©2010 International Journal of Computer Applications (0975 - 8887)
Volume 1 – No. 26
Sensor Fusion of Laser & Stereo Vision Camera for
Depth Estimation and Obstacle Avoidance
Saurav Kumar, Computer Engg. Deptt., Delhi Technological University
Daya Gupta, HOD, Computer Engg. Deptt., Delhi Technological University
Sakshi Yadav, Student, Electrical & Electronics, Delhi Technological University
ABSTRACT
Laser Range Finders (LRF) have been widely used in the field
of robotics to generate very accurate 2-D maps of
the environment perceived by an autonomous mobile robot. Stereo
Vision devices, on the other hand, provide a 3-D view of the
surroundings with a range far greater than that of an LRF, but at
the cost of accuracy. This paper demonstrates a technique of
fusing the information obtained from the LRF and the
Stereovision camera system so as to combine the accuracy of the
former with the range of the latter. Pruning of the 3D point
cloud obtained by the Stereo Vision Camera is done to
achieve computational efficiency in a real-time environment,
after which the point cloud model is scaled down to a 2-D
vision map, to further reduce computational costs. The 2D
map of the camera is fused with the 2D cost map of the LRF
to generate a 2-D navigation map of the surroundings which
in turn is passed as an occupancy grid to VFH+ for obstacle
avoidance and path-planning. This technique has been
successfully tested on 'Lakshya', an IGV platform developed
at Delhi College of Engineering in outdoor environments.
Keywords
Sensor fusion, Stereovision, Laser range finder, Obstacle
avoidance, Navigation map, 3D point cloud and Robotics
1. INTRODUCTION
There are a large number of sensors available which can be
used to detect obstacles present in the immediate
surroundings; for example, sonar, laser, and stereo vision
cameras are widely used for obstacle detection. Each
sensor works in a different manner and has its own limitations
and advantages. Due to its inherent limitations, a single sensor
cannot give an accurate reconstruction of the surroundings
and hence cannot be used by mobile robots for obstacle
detection and accurate path planning. This gives rise to the
concept of sensor fusion, i.e. the integration of data from different
sensors for successful obstacle avoidance and path planning.
Distance sensors like laser range finders have been used
before for the reconstruction of the real-world surroundings of a robot
[1]. They give very accurate and reliable output, but in the case of
obstacles like a chair or table or obstacles not lying in the
plane of the laser, they fail to detect the whole obstacle. Also,
laser data is strongly affected by the pitch and roll of the
vehicle.
On the other hand, a stereo vision camera acquires images of
the dynamic environment. Though it can perceive up to infinity,
its field of view is narrower than that of an LRF. Also, if only
the camera system is used for obstacle detection, the data
obtained is inferior in quality and it increases the computational
burden on the system.
Sensor fusion with laser and camera has been
accomplished before in [2], but that method focuses on
generating 3-D maps from 2D laser maps and then fusing them
with the stereovision 3D map, which adds to the computational burden. [3]
deals with long-range obstacle detection on roads, for which a
laser range finder detects and tracks the obstacle and a
stereovision camera system reconfirms the laser data. Fusion of
sensors such as stereovision and lidar systems has
been used widely for autonomous vehicles [4] [5].
In this paper, we propose an algorithm which relies on the
fusion of the 2D cost maps generated by laser data with the
2D cost maps generated from the 3D real world map by stereo
vision camera systems, to create an Occupancy grid for
obstacle detection and trajectory planning. The fusion of the two
is a challenging task, but the output is reliable and efficient
enough to make a system move autonomously in a complex,
dynamic environment with safe path planning and obstacle
avoidance.
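As a rough illustration of this fusion step, the Python sketch below assumes the two 2D cost maps are already registered in a common robot-centred frame and simply keeps the higher cost from either sensor before thresholding into a binary occupancy grid; the grid size, cost scale and the element-wise maximum rule are illustrative assumptions, not the exact fusion rule used in the paper.

import numpy as np

def fuse_cost_maps(laser_map, stereo_map, occ_threshold=0.5):
    """Fuse two pre-aligned 2D cost maps (values in [0, 1]) by taking the
    element-wise maximum, then threshold into a binary occupancy grid so
    that an obstacle reported by either sensor is retained."""
    fused = np.maximum(laser_map, stereo_map)
    return (fused >= occ_threshold).astype(np.uint8)

# Example: two 200x200 cost maps covering the same area in front of the robot
laser_map = np.zeros((200, 200))
stereo_map = np.zeros((200, 200))
laser_map[50:60, 90:110] = 1.0    # obstacle seen only in the laser plane
stereo_map[120:140, 40:60] = 0.8  # obstacle seen only by the stereo camera
occupancy_grid = fuse_cost_maps(laser_map, stereo_map)  # grid of this kind is passed to VFH+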
Section II deals with the range sensors, the Hokuyo Laser Scanner
and the BumbleBee StereoVision Camera, and the generation of
their respective 2D cost maps. Section III deals with sensor data
fusion to generate an occupancy grid map and subsequent
path planning. Section IV describes Lakshya's mechanical
design and the results obtained from experiments performed
on 'Lakshya'. The paper is concluded in Section V by
discussing future work and applications in this field.
2. RANGE SENSORS
A BumbleBee StereoVision camera by Point Grey Research,
with two Sony 1/3” progressive scan CCDs and a resolution
of 640x480 at 48 FPS or 1024x768 at 18 FPS, was used in
conjunction with the Hokuyo laser on the Lakshya platform
for stereo imaging of surroundings.
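For reference, depth from a stereo camera follows the standard pinhole stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline and d the disparity. The Python sketch below illustrates this conversion; the focal length and baseline values are illustrative assumptions, not the calibrated BumbleBee parameters.

import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    # Standard pinhole stereo relation: Z = f * B / d.
    # Pixels with zero or negative disparity are treated as invalid (infinite depth).
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Illustrative parameters only: focal length ~500 px at 640x480, baseline ~0.12 m
depth_m = disparity_to_depth([40.0, 10.0, 0.0], focal_px=500.0, baseline_m=0.12)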
Fig. 1 Image Sensing
The LRF used in experimentation was Hokuyo's URG-04LX,
which has a range of 20 mm to 4 m. It has a 240° scanning area
with 0.36° angular resolution. Laser beams reflect off an object,
allowing its distance and direction to be determined. The scanning time is
around 100 ms/scan. Based on the position of objects around
the robot, a 2D map is generated.
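As a rough sketch of how a single scan might be projected into such a robot-centred 2D map, the code below converts a fan of range readings into an occupied-cell grid; the cell size, grid extent and indexing are assumptions for illustration, not the paper's implementation.

import math
import numpy as np

def laser_scan_to_grid(ranges_m, fov_deg=240.0, cell_m=0.05, max_range_m=4.0):
    """Project one URG-04LX scan (ranges over a 240-degree fan) into a 2D
    grid centred on the robot; cells hit by a valid return are marked occupied."""
    half = max_range_m / cell_m
    size = int(2 * half) + 1
    grid = np.zeros((size, size), dtype=np.uint8)
    n = len(ranges_m)
    start = math.radians(-fov_deg / 2.0)
    step = math.radians(fov_deg) / max(n - 1, 1)
    for i, r in enumerate(ranges_m):
        if 0.02 <= r <= max_range_m:          # URG-04LX valid range: 20 mm to 4 m
            a = start + i * step
            x = int(half + (r * math.cos(a)) / cell_m)
            y = int(half + (r * math.sin(a)) / cell_m)
            grid[y, x] = 1                    # mark the laser return as occupied
    return grid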