Optics Communications
Subpixel based defocused points removal in photon-limited volumetric dataset

Inbarasan Muniraj (a), Changliang Guo (a), Ra'ed Malallah (a,b), Harsha Vardhan R. Maraka (c), James P. Ryle (a), John T. Sheridan (a,⁎)

(a) School of Electrical and Electronic Engineering, IOE2 Lab, University College Dublin, Belfield, Dublin 4, Ireland
(b) Physics Department, Faculty of Science, University of Basrah, Garmat Ali, Basrah, Iraq
(c) School of Physics, University College Dublin, Belfield, Dublin 4, Ireland
ARTICLE INFO
Keywords:
Photon counting imaging
Three-dimensional integral imaging
Bayer image
Image segmentation
Color image processing
ABSTRACT
The asymptotic property of the maximum likelihood estimator (MLE) has been utilized to reconstruct three-dimensional (3D) sectional images in the photon counting imaging (PCI) regime. First, multiple 2D intensity images, known as elemental images (EIs), are captured. A geometric ray-tracing method is then employed to reconstruct the 3D sectional images at various depths. We note that a 3D sectional image consists of both focused and defocused regions, depending on the reconstructed depth position. The defocused portion is redundant and should be removed in order to facilitate image analysis, e.g., 3D object tracking, recognition, classification, and navigation. In this paper, we present a subpixel-level three-step technique (involving adaptive thresholding, boundary detection, and entropy-based segmentation) to discard the defocused sparse samples from the reconstructed photon-limited 3D sectional images. Simulation results are presented demonstrating the feasibility and efficiency of the proposed method.
1. Introduction
The invention of three-dimensional (3D) computational integral imaging (II), a technique based on Integral Photography (IP), has made auto-stereoscopic (i.e., glasses-free) 3D scene visualization possible [1–5]. Since its introduction, applications of II have been proposed in various research areas, e.g., 3D object sensing, biomedicine, underwater visualization, and automated target recognition [6–10]. In some special imaging cases (e.g., biomedical imaging), low-light-level illumination is encountered and processing the resulting data sequences becomes necessary. Recently, a method for reconstructing multispectral 3D objects under photon-starved (also known as photon-limited or photon-counted) illumination conditions has been proposed [11]. It has been shown that, in contrast to the conventional imaging process, i.e., dealing with the three color channels independently [12], the results from multispectral imaging systems can be processed using a single-channel or monochromatic system (i.e., as a greyscale image) by utilizing the Bayer patterned image sensor format [13,14]. In this way, a clear perception of the 3D scene can be achieved and it becomes much easier to interpret complex scenes and to recognize specific objects from clusters [11].
Furthermore, it has been reported that by recording high-spatial-frequency data from the 3D object, high-resolution scene reconstruction is possible [15]. Capturing as many of the emanating rays as possible requires sophisticated cameras capable of frame rates of more than several hundred frames per second, which is an expensive and time-consuming process. In computational II (CII), however, a lenslet array is used to capture the rays diffracted from the 3D objects (located at some arbitrary distance from the sensor). Images are recorded in the form of two-dimensional (2D) elemental images (EIs) that represent different perspectives of the captured object [6]. Back-propagation is then used to reconstruct the 3D images (also known as sectional or slice images), yielding depth information [11]. Only the objects located at the corresponding depth distance are reconstructed clearly (i.e., in focus); points at other depths appear blurred (i.e., defocused). We note that these defocused points do not provide any useful visual information and are redundant. Therefore, they should be removed so that better 3D visualization can take place. The resulting datasets will then aid high-level image analysis [16].
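The back-propagation step described above is often implemented as a shift-and-sum over the elemental images: each EI is shifted by a pixel disparity determined by the chosen depth plane and the shifted EIs are averaged, so objects at that depth add up coherently while objects at other depths are smeared into a blur. The sketch below is a minimal illustration of this idea; the function name, the (K, L, H, W) array layout, and the use of wrap-around shifts are simplifying assumptions rather than the authors' implementation, which would zero-pad and normalize by the per-pixel overlap count.

```python
import numpy as np

def reconstruct_plane(eis, shift):
    """Shift-and-sum (back-propagation) reconstruction at one depth plane.

    eis   : array of shape (K, L, H, W), a K-by-L grid of elemental images
    shift : integer pixel disparity between adjacent EIs corresponding to
            the chosen depth z (a larger shift selects a nearer plane)
    """
    K, L, H, W = eis.shape
    out = np.zeros((H, W))
    for k in range(K):
        for l in range(L):
            # np.roll wraps around the borders; a full implementation would
            # zero-pad the EIs and divide by the per-pixel overlap instead
            out += np.roll(eis[k, l], (k * shift, l * shift), axis=(0, 1))
    return out / (K * L)  # average over the contributing EIs

# usage: reconstruct one sectional image from a 3x3 grid of EIs
eis = np.ones((3, 3, 32, 32))
plane = reconstruct_plane(eis, shift=2)
```

Sweeping `shift` over a range of values produces the stack of sectional images; the focused/defocused structure of each slice is exactly what the proposed three-step segmentation is designed to separate.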
In the field of computer vision, recovering depth information from defocused points is an important problem. To achieve this, various approaches, such as stereo matching, depth from defocus (DFD), and entropy-based estimation, have been proposed [17–20]. Previously,
http://dx.doi.org/10.1016/j.optcom.2016.11.047
Received 16 September 2016; Received in revised form 9 November 2016; Accepted 19 November 2016
⁎ Corresponding author.
E-mail address: john.sheridan@ucd.ie (J.T. Sheridan).
Optics Communications 387 (2017) 196–201