Interpolation of Lost Frames of a Video Stream
using Object based Motion Estimation and
Compensation
Amrit Kaur, Pradip Sircar, Adrish Banerjee
Department of Electrical Engineering
Indian Institute of Technology Kanpur
Kanpur 208016, India
Abstract - While transmitting a video stream, some frames may
be lost due to noise or congestion in the network. Various
techniques have been proposed for interpolating the lost frames
from the received frames, but these techniques perform well only
for slow-motion video; for fast-motion video they introduce
artifacts into the interpolated frames. We propose a new
technique for interpolating lost frames using object-based
motion estimation and compensation.
The proposed method is based on estimating the
displacements of the sides of the minimum bounding box (MBB) of
an object. From the received frames we first detect the type of
motion (translation, rotation, part rotation) that the object has
undergone. Then, having determined the motion and the
displacement of the object from one received frame to the next,
the object in the missing frame is linearly interpolated from
the object motion and the positions of the object in the two
received frames.
I. INTRODUCTION
Transmitting multimedia content over networks is
becoming practical and prevalent owing to increasing
transmission speeds and better compression. Multimedia
content includes video, audio, audio-video combinations,
and presentations. Because such content is packetized for
transmission, movies, and especially the bulky video
component as opposed to the audio, are subject to packet
loss. Several approaches can be applied to remedy this loss.
One approach is to compensate for all the lost frames at the
receiver by estimating them. In situations where this is too
costly in time and/or memory, it is best to compensate for as
many lost video frames as possible. Another, preventative
approach is applied at the sender rather than the receiver:
extra frames are added to the video stream before it is
transmitted. Since it was previously found that the loss of
five frames does not noticeably affect viewers' perceived
quality, we can add five frames to the video stream before it
is transmitted. The number of video frames added to the stream
still keeps the video and audio streams within a highly
acceptable tolerance level, and in this case we can afford to
lose up to ten more frames during transmission while still
maintaining a highly acceptable synchronization level. In this
paper, we investigate the first approach, the full estimation
of all the lost video frames, which brings the synchronization
level back to what it was before the streaming movie was
transmitted.
Motion tracking between two images is the process by
which portions of the first image are mapped to corresponding
portions of the second image. Quantitative measures then
indicate, with a certain degree of confidence, that a given
portion of the first image has moved to another location in
the second image.
The concept of motion tracking is used to estimate motion
between existing frames in a movie stream, and hence to
estimate lost frames in between.
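As an illustrative sketch only (not the paper's implementation), block motion between two received frames can be estimated by an exhaustive block-matching search with a sum-of-absolute-differences (SAD) cost; the block size and search range below are arbitrary assumptions:

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Map each `block`x`block` patch of `prev` to its best match in `curr`
    within a +/-`search` pixel window, using SAD as the matching cost.
    Returns {(y, x) block origin: (dy, dx) motion vector}."""
    h, w = prev.shape
    vectors = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = prev[y:y + block, x:x + block].astype(int)
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = curr[yy:yy + block, xx:xx + block].astype(int)
                        sad = np.abs(ref - cand).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            vectors[(y, x)] = best_mv
    return vectors
```

For example, a bright patch translated by (2, 3) pixels between two frames yields the motion vector (2, 3) for the block containing it. Exhaustive search is shown for clarity; practical coders use faster search patterns.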
Given a sequence of frames with several frames lost or
corrupted in the middle, we use the two frames surrounding
the lost sequence to estimate the motion of blocks
between frames [1]. The locations of the objects in the lost
frames are then obtained by linear interpolation of the block
motion, as shown in Fig. 1.
Fig. 1: Motion Tracking for Frame Estimation
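The linear interpolation of object positions across the lost frames can be sketched as follows; the function name and the (x, y) point representation are illustrative assumptions, not the paper's notation:

```python
def interpolate_positions(pos_a, pos_b, n_lost):
    """Linearly interpolate an object's (x, y) position over `n_lost`
    missing frames between received frames at pos_a and pos_b."""
    xa, ya = pos_a
    xb, yb = pos_b
    out = []
    for k in range(1, n_lost + 1):
        t = k / (n_lost + 1)  # fractional time of the k-th lost frame
        out.append((xa + t * (xb - xa), ya + t * (yb - ya)))
    return out
```

For instance, with the object at (0, 0) in the first received frame and (30, 60) in the second, two lost frames are assigned positions (10, 20) and (20, 40), evenly spaced along the motion path.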
978-1-4244-2746-8/08/$25.00 © 2008 IEEE