978-1-4244-2794-9/09/$25.00 ©2009 IEEE SMC 2009
Depth Map Estimation Using Exponentially Decaying
Focus Measure Based on Susan Operator
Pankajkumar Mendapara, Rashid Minhas, Q.M. Jonathan Wu
Department of Electrical Engineering
University of Windsor
Windsor, Canada
{mendapa, minhasr, jwu}@ uwindsor.ca
Abstract— This paper presents a novel technique for depth map
estimation using a sequence of images acquired at varying focus.
In depth map estimation, noise, illumination variations, and the
types of extracted features significantly affect the performance of
a focus measure. This paper proposes the use of the SUSAN
operator to extract features because of its structure-preserving
noise filtering, which plays a pivotal role in depth estimation of a
scene. We introduce a new focus measure based on an
exponentially decaying function that uses the neighborhood
information of an extracted feature point, assigning more weight
to closer pixels. Experiments validate the superior performance
of our proposed algorithm in comparison to other
well-documented methods.
Keywords—focus measure, 3D shape recovery, shape from
focus, exponentially decaying function, multi-focus imaging.
I. INTRODUCTION
The technique used to retrieve spatial information from a
sequence of images with a varying focus plane is termed shape
from focus (SFF). In SFF, a sequence of images (SI) is
acquired at varying relative distances between the camera lens
and a scene object. Such a sequence captures well-focused
partial information of the scene in different images. To
reconstruct a well-focused image, the SI acquired at varying
distances is processed to extract focused points from individual
image frames. Traditional SFF techniques assume convex-shaped
objects for accurate depth map estimation. SFF removes an
inherent limitation of traditional image acquisition: its
inability to capture details of a scene with considerably large
depth.
The objective of depth map estimation is to determine the
depth of every object point with respect to the camera. For
scenes with considerably large depth, object points lying on the
focus plane appear sharp in an acquired image, whereas the blur
of imaged points increases as they move away from the focus
plane.
The basic image formation geometry, when the camera
parameters are known, is shown in Fig. 1. The distance u of an
object from the camera lens is required for exact 3D
reconstruction of a scene. The depth of a scene, the distance of
an object from the lens, illumination conditions, camera
movement, lens aberrations, and movement in the scene can
severely affect depth map estimation. Computing the distance
of an object from the camera lens is simple if the blur circle
radius R equals zero. If the image detector (ID) is placed at the
exact distance v, a sharply focused image P' of an object point
P is formed. The relationship between object distance u, focal
length f of the lens, and ID distance v is given by the Gaussian
lens law:
1/u + 1/v = 1/f
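The Gaussian lens law, 1/u + 1/v = 1/f, can be inverted to recover the object distance u when the focal length f and the ID distance v are known. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def object_distance(f, v):
    """Recover object distance u from the Gaussian lens law
    1/u + 1/v = 1/f.

    f: focal length of the lens; v: lens-to-image-detector distance
    (same units). Valid when v > f, i.e. the image forms behind the
    focal point.
    """
    return 1.0 / (1.0 / f - 1.0 / v)

# Example: f = 50 mm lens, image detector at v = 55 mm
u = object_distance(50.0, 55.0)  # -> 550.0 mm from the lens
```

This also illustrates why SFF is sensitive to calibration: small errors in v (or f) are amplified in the recovered u when v is close to f.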
Figure 1. Image formation geometry of a 3D object
In the literature [1-6, 14, 15], commonly used operators in SFF
are the sum of modified Laplacian (FM_SML), Tenengrad focus
measure (FM_T), gray level variance focus measure (FM_GLV),
curvature focus measure (FM_C), M2 focus measure (FM_M2),
point focus measure (FM_P), and steerable filter based focus
measure (FM_SF). Approximation and learning based focus
measures have also been proposed [7-9] that utilize neural
networks, neuro-fuzzy systems, and dynamic programming
based approaches for accurate depth map estimation.
Approximation based techniques use any of the aforementioned
conventional focus measures for pre-processing, whereas their
comprehensive rule bases and the need for appropriate selection
of training data restrict their application to specific domains.
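As an illustration of a conventional operator (not the measure proposed in this paper), the sum of modified Laplacian FM_SML can be sketched as follows; the patch-based formulation is an assumption for brevity:

```python
import numpy as np

def modified_laplacian(img):
    """Per-pixel modified Laplacian on the interior of the image:
    ML(x, y) = |2*I - I_left - I_right| + |2*I - I_up - I_down|.
    The absolute values keep opposing second derivatives from
    cancelling, unlike the plain Laplacian."""
    i = img.astype(np.float64)
    mlx = np.abs(2 * i[1:-1, 1:-1] - i[1:-1, :-2] - i[1:-1, 2:])
    mly = np.abs(2 * i[1:-1, 1:-1] - i[:-2, 1:-1] - i[2:, 1:-1])
    return mlx + mly

def fm_sml(patch):
    """FM_SML of an image patch: sum of modified Laplacian responses.
    A sharply focused patch yields a larger value than a blurred one."""
    return modified_laplacian(patch).sum()
```

Comparing FM_SML of the same patch across the focal stack identifies the frame in which that patch is best focused; a defocused (smoothed) patch has weaker second derivatives and hence a smaller response.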
In this paper, a new scheme is proposed to estimate the depth
map by searching for the frame number of the best focused
object points. Most established focus measure operators for SFF
work well only for regions with dense texture; hence, degraded
performance is observed in the presence of noise, poor texture,
and singularities along curves.
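The frame-search step common to SFF methods can be sketched as follows, assuming a registered focal stack; a simple squared-gradient measure stands in here for the paper's SUSAN-based exponentially decaying measure:

```python
import numpy as np

def depth_map(stack):
    """stack: (n_frames, H, W) focal stack of registered grayscale frames.
    Returns, per pixel, the index of the frame where a per-pixel focus
    measure peaks; the frame index maps to depth via the known
    lens-to-object distances used during acquisition."""
    focus = np.empty(stack.shape, dtype=np.float64)
    for k, frame in enumerate(stack):
        f = frame.astype(np.float64)
        gx = np.diff(f, axis=1, prepend=f[:, :1])  # horizontal gradient
        gy = np.diff(f, axis=0, prepend=f[:1, :])  # vertical gradient
        focus[k] = gx**2 + gy**2                   # squared-gradient focus measure
    # Best focused frame per pixel; ties resolve to the earliest frame.
    return focus.argmax(axis=0)
```

Swapping the stand-in measure for a feature-aware one (as proposed here) changes which pixels carry reliable focus information, which is precisely where gradient-style measures degrade in textureless or noisy regions.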
Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics
San Antonio, TX, USA - October 2009