Stereoscopic Imaging through Turbid Media using
a Couple of Microlens Arrays
David Abookasis and Joseph Rosen
Ben-Gurion University of the Negev
Department of Electrical and Computer Engineering
P. O. Box 653, Beer-Sheva 84105, Israel
Abstract: A new method for 3D imaging of objects hidden in turbid media is experimentally tested.
Objects hidden between two biological tissues at different depths are recovered, and their 3D
locations are computed.
1. Introduction
Medical tomography techniques such as X-ray Computed Tomography (CT) [1] offer great advantages and
are still widely used despite several drawbacks, such as ionizing radiation, complex structure, and high
cost. The advantage of optical tomography over other medical tomography techniques is that it provides
quantitative information on the functional properties of tissues while being non-harmful (the radiation is
non-ionizing). Accordingly, in recent years researchers have invested considerable effort in developing
optical tomography systems that use near-infrared (NIR) light. In
the present study we suggest a simple optical tomography technique that is based on speckled images.
Analogous to the fly's two eyes, two microlens arrays (MLAs) are used to observe the hidden objects from
different perspectives. At the output of each lens array we reconstruct the objects from several sets of
speckled images by a previously suggested technique that uses a reference point [2]. The differences
between the images reconstructed from the two arrays, with respect to the reference point, yield the
information regarding the relative depth between the various objects.
2. Fundamental concept
Figure 1 is a schematic diagram of the proposed 3D imaging system. The configuration consists of two
MLAs accompanied by imaging lenses, a pinhole (implemented by an adjustable iris) placed behind the
second scattering layer T₂, and conventional CCD cameras. Each path, left and right separately, is equivalent
to that given in Ref [2]. In the present setup the point-source is placed in front of the scattering medium, and
thus serves as a reference point instead of as a point-source of illumination. The idea behind this point
technique is to ascribe the location of an object to a location of some known point in space. The
computational process at each channel is as described extensively in Ref [2]. Briefly, in addition to the
speckled images of the object, we recorded speckled images of a pointlike object. After collecting all the
object’s speckled images by using the MLA, we used the point source to illuminate the setup, and speckled
patterns of this point source, through the same number of channels, were captured by a CCD. Each
subimage of the speckled object is placed side by side in the computer with a corresponding subimage of
the speckled pointlike source, and the two images are jointly Fourier transformed. The squared magnitudes
of the jointly transformed pictures are accumulated to compose a single average joint power spectrum.
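The accumulation described above can be sketched numerically as follows. This is a minimal illustration, not the authors' implementation: the function names, array shapes, and the simple side-by-side placement are assumptions, and the final reconstruction transform is included only for completeness.

```python
import numpy as np

def average_joint_power_spectrum(object_speckles, point_speckles):
    """Accumulate joint power spectra of paired subimages (sketch).

    object_speckles / point_speckles: equal-length lists of 2D arrays of
    the same shape -- speckled subimages of the hidden object and of the
    reference point source from corresponding MLA channels.
    """
    h, w = object_speckles[0].shape
    avg_jps = np.zeros((h, 2 * w))
    for obj, pt in zip(object_speckles, point_speckles):
        joint = np.hstack([obj, pt])           # subimages placed side by side
        avg_jps += np.abs(np.fft.fft2(joint)) ** 2  # squared magnitude of joint FT
    return avg_jps / len(object_speckles)

def reconstruct(avg_jps):
    # A further Fourier transform separates the zero-order term
    # (autocorrelations) from the two cross-correlation terms that
    # approximately carry the object reconstruction.
    return np.fft.fftshift(np.abs(np.fft.fft2(avg_jps)))
```

Averaging over many speckle realizations is what suppresses the random speckle phase, so the cross-correlation terms converge toward the object image.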
Object reconstruction is achieved by another Fourier transform (FT) of this average spectrum. This process
yields three spatially-separated terms at the output of each path. One term is the zero-order at the vicinity of
the output plane origin. This term is equal to the sum of the pinhole autocorrelation and the object
autocorrelation. The other two terms correspond to the cross-correlation between the object and the pinhole
and thus, assuming the average pinhole image is close to a point, these terms approximately yield the object
reconstruction. The image of the hidden object can therefore be retrieved by reading it from one of these
orders. Note that in this scheme the distance of the reconstructed object from the output plane origin is
related directly to the transverse gap between the object and the reference point. To extract depth
information about the object we use the principle of stereoscopic vision [3]. That is, the different
perspectives of the two viewing channels produce a slight relative displacement (disparity) of the object
between the two views, from which its depth is inferred.
© 2006 OSA/BOSD, AOIMP, TLA 2006