Space-time Parameterized Variety Manifolds: A Novel Approach for Arbitrary
Multi-perspective 3D View Generation
Mansi Sharma
Department of Electrical Engineering
Indian Institute of Technology, Delhi
Hauz Khas, New Delhi, India-110016
mansisharmaiitd@gmail.com
Santanu Chaudhury, Brejesh Lall
Department of Electrical Engineering
Indian Institute of Technology, Delhi
Hauz Khas, New Delhi, India-110016
{santanuc,brejesh}@ee.iitd.ac.in
Abstract—This paper presents a novel image variety-based
approach that elegantly models the space of a broad class
of perspective and non-perspective stereo varieties within a
single, unified framework. The basic concept of parameterized
variety presented earlier by Genc and Ponce [1] is extended
to represent the non-linear space of images. An efficient
algebraic framework is constructed to parameterize the variety
associated with full perspective cameras. The algorithm seeks
manifolds that constrain this six-dimensional variety to generate
compelling multi-perspective 3D effects from arbitrary virtual
viewpoints.
Combining the geometric space of multiple uncalibrated perspective
views with the appearance space in a globally optimized way leads
to numerous potential applications, especially in content creation
for multi-perspective 3DTV. The proposed approach works for
uncalibrated static/dynamic scenes containing parallax and
unstructured object motion. It even seamlessly handles images or
video sequences that do not share a common origin, thus providing
an effective tool for montaging, indexing and virtual navigation.
Keywords-parameterized variety; arbitrary view generation;
multi-perspective stereo; 3DTV; video synopsis; video tapestries
I. INTRODUCTION
Three-dimensional visual illusions created by currently available
3D displays are still quite far from being realistic. In a “real
world” scenario, our perspective changes as we move around the
scene. Existing auto-stereoscopic displays based on the light field
concept [9]–[11] provide viewing parallax for perspective images
only. Little effort has been made toward building non-perspective
displays, mainly due to the complexity of generating
multi-perspective stereo varieties. Presenting 3D views that vary
with changing perspectives has recently gained much attention in
the vision and 3D graphics communities [12]. As an alternative to
holographic display technology, the recently introduced
“tensor-display” provides an effective way of creating
multi-perspective 3D images by representing the scene in a
mathematical framework of tensor algebra [12]. However, the method
still suffers from the inherent limitations of light fields: it
requires a high sampling density, a large amount of image data, and
complex hardware to reproduce tiny pixels and avoid excessive
blurriness. Another major issue in rendering multi-perspective
images is how to align the images to reduce distortions introduced
by projection techniques. Common projection techniques are prone to
depth-related distortions, particularly when dealing with
non-planar scenes of complex geometry [13], [17]. It is also
difficult to handle arbitrary video streams when creating long
multi-perspective views [14], [15], [17]. The problem becomes
harder still for unstructured images or casually captured video
sequences, where misalignment, unordered object motion and 3D
parallax lead to severe distortion artifacts [16], [17]. In this
paper, a novel algebraic framework based on the concept of
parameterized variety is developed to represent the complex
non-linear space of images. The proposed unified representation
models the space of a broad class of perspective and
non-perspective stereo varieties, and gives a workable, effective
solution to these problems.
The parameterized image variety (or PIV) was proposed earlier by
Genc and Ponce [1] for image-based rendering. It was shown that the
set $V$ of all views of $n$ 3D points is a six-dimensional variety
of the vector space $\mathbb{R}^{2n}$ for weak-perspective,
paraperspective and full perspective cameras.
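The dimension count behind this statement can be sketched
informally as follows (the notation here is ours; the full
algebraic construction appears in [1]). A view of the $n$ fixed
points stacks their image coordinates into a single vector
\[
\mathbf{v} = (u_1, v_1, \ldots, u_n, v_n)^{\top} \in \mathbb{R}^{2n},
\]
and, for a weak-perspective camera, every pair $(u_i, v_i)$ is
determined by the camera rotation (3 dof), an image-plane
translation (2 dof) and a scale factor (1 dof); as these six
parameters vary, $\mathbf{v}$ sweeps out a six-dimensional subset
of $\mathbb{R}^{2n}$.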
Parameterizations of the variety for the weak-perspective and
paraperspective cases were proposed earlier [1]. One major
contribution of our work lies in the generalization of this
approach to full perspective cameras [7]. The extension makes it
possible to render photo-realistic novel views from arbitrary
viewpoints without using any calibration or explicit depth
information. The main result presented in [7] is the construction
of an explicit parameterization of 3D space to synthesize a
sequence of “physically-valid” perspective views from arbitrary
viewpoints with explicit occlusion modelling. An arbitrary virtual
multi-perspective view is generated by seeking a continuous
manifold through the space-time volume of rendered virtual
perspective views. The synthesized virtual views form a subspace of
the space of all perspective images of the scene, induced by the
variety. The ordered collection of these novel synthesized views
creates a video volume. There are various ways to cut through this
volume, and each cut induces a surface which may contain annoying
artifacts.
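Written per pixel (a simplified formulation in our notation, not
the exact construction used later in the paper), a cut corresponds
to a map $t(x, y)$ assigning a view index to every output pixel,
\[
I_{\mathrm{cut}}(x, y) = \mathcal{V}\bigl(t(x, y), x, y\bigr),
\]
where $\mathcal{V}(t, x, y)$ denotes the space-time volume of
rendered views. A constant $t$ recovers a single perspective frame,
whereas a spatially varying $t$ stitches regions from different
viewpoints into one multi-perspective image.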
The proposed algorithm automatically selects an optimal