Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Chapter 8

Projective Geometry for 3D Modeling of Objects

Rimon Elias
German University in Cairo, Egypt

DOI: 10.4018/978-1-4666-3994-2.ch008

ABSTRACT

This chapter surveys many fundamental aspects of projective geometry that have been used extensively in
the computer vision literature. In particular, it discusses the role of this branch of geometry in reconstructing
basic entities (e.g., 3D points, 3D lines, and planes) in 3D space from multiple images. The chapter presents
the notation of the different elements. It investigates the geometric relationships that arise when one or two
cameras observe the scene, creating single-view and two-view geometry. In other words, camera parameters
(locations and orientations, with respect to 3D space and with respect to other cameras) give rise to these
relationships. This chapter discusses these relationships and expresses them mathematically. Finally, different
approaches for dealing with noise, or inaccuracy in general, are presented.

INTRODUCTION

Various techniques have been developed to reconstruct objects or
surfaces in 3D space from groups of images taken by cameras. Variations
of the problem include 3D reconstruction from an
uncalibrated monocular image sequence (Azevedo, Tavares, & Vaz, 2009;
Fitzgibbon, Cross, & Zisserman, 1998; Pollefeys, Koch, Vergauwen,
& Gool, 1998); 3D reconstruction from a calibrated
monocular image sequence (Nguyen & Hanajik, 1995);
and 3D reconstruction from stereo images.
This latter case includes pairs of images taken at
the same time by two cameras, or at two different
instants by one camera, provided that the scene is
static. In many cases, the solution is divided into
two steps (Zhang, 1995). These steps are:
1. Extracting and matching features between
corresponding images; and
2. Determining structure from corresponding
features.
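The second step of the pipeline outlined in the introduction, determining structure from corresponding features, can be illustrated with a minimal sketch of linear (DLT) triangulation. This is a generic technique, not the chapter's own algorithm; it assumes that the two 3x4 projection matrices `P1` and `P2` are already known and that `x1` and `x2` are a matched point pair (all names here are illustrative).

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its
    projections x1, x2 in two views with known 3x4 projection
    matrices P1, P2 (all inputs are illustrative assumptions)."""
    # Each view contributes two rows of the homogeneous system A X = 0,
    # derived from the cross product of the image point with P X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Usage: two normalized cameras separated by a unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 10.0])
x1 = (X_true / X_true[2])[:2]            # projection into view 1
x2_h = P2 @ np.append(X_true, 1.0)
x2 = (x2_h / x2_h[2])[:2]                # projection into view 2
X_est = triangulate_point(P1, P2, x1, x2)
```

With noise-free correspondences, the recovered point matches the original exactly up to numerical precision; with noisy matches, this linear estimate is typically used as an initialization for the refinement approaches discussed later in the chapter.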