I.J. Image, Graphics and Signal Processing, 2016, 7, 41-48
Published Online July 2016 in MECS (http://www.mecs-press.org/)
DOI: 10.5815/ijigsp.2016.07.05
Copyright © 2016 MECS
Motion Segmentation from Surveillance Video
using modified Hotelling's T-Square Statistics
Chandrajit M¹
Girisha R², Vasudev T¹
¹Maharaja Research Foundation, MIT, Mysore, India
²PET Research Foundation, PESCE, Mandya, India
E-mail: chandrajith.m@gmail.com, write2girisha@gmail.com, vasu@mitmysore.in
Abstract—Motion segmentation is an important task in video surveillance and in many high-level vision applications. This paper proposes two generic methods for motion segmentation from surveillance video sequences captured by different kinds of sensors, such as aerial, Pan Tilt and Zoom (PTZ), thermal and night vision cameras. Motion segmentation is achieved by applying Hotelling's T-Square test to the spatial-neighborhood RGB color intensity values of each pixel in two successive temporal frames. Further, a modified version of Hotelling's T-Square test is proposed for motion segmentation; compared with the standard Hotelling's T-Square test, the modified formula yields better results in terms of both computational time and output quality. Experiments, along with qualitative and quantitative comparisons with existing methods, have been carried out on the standard IEEE PETS (2006, 2009 and 2013) and IEEE Change Detection (2014) datasets to demonstrate the efficacy of the proposed methods in dynamic environments, and the results obtained are encouraging.
Index Terms—Motion segmentation, Video surveillance, Spatio-temporal, Hotelling's T-Square test.
I. INTRODUCTION
Video surveillance has become one of the most active research areas in computer vision. A video surveillance system generally involves tasks such as motion segmentation, object classification, object recognition, object tracking and motion analysis. Moving object segmentation extracts the non-stationary regions of a video frame. Object classification assigns objects to categories such as person, vehicle or animal. Object recognition identifies the object of interest. Motion tracking establishes frame-by-frame correspondence of the moving object across the video sequence. Finally, analyzing and interpreting the object's motion constitutes motion analysis.
Motion segmentation is a vital task in video surveillance, as the subsequent tasks of the surveillance pipeline depend on its accurate output. Surveillance video sequences are generally captured by different sensors, such as aerial, PTZ, thermal and night vision cameras. The captured sequences contain noise and illumination variations, which make motion segmentation from surveillance videos a challenging task [35, 36, 38]. Therefore, research focuses on developing efficient and reliable motion segmentation algorithms that overcome these limitations and extract foreground information from the image data for further analysis.
Several techniques have been proposed in the literature for motion segmentation; they can be categorized as conventional background subtraction [14], statistical background subtraction [2, 10, 21, 28, 30, 32, 34, 37], temporal differencing [1, 5, 6], optical flow [9] and hybrid approaches [3, 4, 7, 8, 15, 16, 17, 31, 33, 40]. Conventional background subtraction first builds a background model and then subtracts each new frame from it for motion segmentation. Statistical background subtraction builds the background model dynamically from individual pixels or groups of pixels, and each pixel of the current frame is labeled foreground or background by comparing it against the statistics of the current background model. In the temporal difference method, the absolute difference of successive frames is computed to segment motion pixels. The optical flow technique computes flow vectors for every pixel and then segments the moving object. Hybrid techniques combine the above techniques for segmenting moving objects in video sequences [11]. A brief review of the existing works is given in the subsequent section.
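To make the temporal-differencing category above concrete, the following is a minimal sketch of absolute frame differencing with a fixed threshold; the threshold value and function name are illustrative assumptions, not taken from any cited method.

```python
import numpy as np

def temporal_difference(frame_prev, frame_curr, threshold=30):
    """Segment motion pixels by absolute differencing of two successive
    grayscale frames (generic sketch; threshold=30 is an assumption)."""
    # Cast to a signed type so the subtraction does not wrap around
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    # Pixels whose intensity change exceeds the threshold are marked as motion
    return (diff > threshold).astype(np.uint8)
```

Per-pixel differencing like this is fast but sensitive to noise, which is one motivation for the statistical, neighborhood-based tests considered in this paper.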
This paper proposes two generic methods for segmenting moving objects from surveillance video in dynamic environments by fusing spatial neighborhood information from color video frames within a temporal statistical framework. The article is organized as follows. Section 2 reviews related work on segmentation methodologies. An overview of the proposed work is given in Section 3, and Section 4 elaborates on it. The experimental results and conclusions are reported in Sections 5 and 6, respectively.
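As background for the statistical framework used later, the following is a minimal sketch of the standard two-sample Hotelling's T-Square statistic applied to RGB vectors drawn from a pixel's spatial neighborhood in two successive frames. This is the textbook formulation only, not the authors' modified variant; the function name and the regularization term are assumptions for illustration.

```python
import numpy as np

def hotelling_t2(sample_a, sample_b):
    """Two-sample Hotelling's T-Square statistic.

    sample_a, sample_b: (n, 3) arrays of RGB vectors from the spatial
    neighborhood of a pixel in two successive frames.  A large value
    suggests the two neighborhood samples differ, i.e. possible motion."""
    n1, n2 = len(sample_a), len(sample_b)
    mean_diff = sample_a.mean(axis=0) - sample_b.mean(axis=0)
    # Pooled covariance of the two neighborhood samples
    s1 = np.cov(sample_a, rowvar=False)
    s2 = np.cov(sample_b, rowvar=False)
    pooled = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
    # Small ridge keeps the inverse stable on flat (zero-variance) patches
    pooled += 1e-6 * np.eye(pooled.shape[0])
    return (n1 * n2 / (n1 + n2)) * mean_diff @ np.linalg.solve(pooled, mean_diff)
```

In a segmentation setting, the statistic would be compared against a critical value (e.g. from the F-distribution) to decide whether a pixel belongs to the foreground.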