International Journal of Latest Research in Engineering and Technology (IJLRET)
ISSN: 2454-5031, www.ijlret.com, Volume 2, Issue 1, January 2016, PP 42-51

3D Reconstruction from Single 2D Image

Deepu R, Murali S
Department of Computer Science & Engineering, Maharaja Research Foundation, Maharaja Institute of Technology, Mysore, India

Abstract: The perception of a 3D scene through stereovision is a natural capability of human vision, but it remains a challenge for computer systems. The challenge is to obtain 3D geometric shape information from planar images, a process termed 3D reconstruction. It is usually done piece-wise, by identifying the various planes of the scene and constructing a representation of the whole from those planes. The image is captured with a calibrated camera and is therefore perspectively distorted. These distortions are removed through corner point estimation, and the selected points yield the true dimensions of the input image using view metrology. A 3D geometric model is then constructed in VRML according to the true dimensions obtained from metrology. During rendering, a texture map is applied to each corresponding surface of a polygon in the VRML model. VRML supports a walkthrough of the rendered model, through which different views are generated.

1. Introduction
As a popular research topic, image-based 3D scene modelling has attracted much attention in recent years. It has a wide range of applications in civil engineering, where engineers need to generate 3D views to convey their product to customers. It is also used in games and entertainment, to create virtual reality, and for robot navigation inside buildings. In some cases, buildings that have disappeared can be modelled from as little as a single image, for example an old photograph or a painting. One solution for generating 3D views is to use specialized devices that acquire 3D information directly; since these devices are expensive, they cannot always be used.
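The corner-based perspective correction described above can be sketched as a plane-to-plane homography: four corner points picked in the distorted photograph are mapped to a fronto-parallel rectangle whose metric size would come from view metrology. This is a minimal illustration, not the authors' implementation; the corner coordinates and the 4 m x 3 m facade size are invented for the example.

```python
import numpy as np

def homography_from_corners(src, dst):
    # Direct linear transform with h33 fixed to 1: each point pair
    # contributes two linear equations in the eight unknown entries of H.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    # Apply the homography in homogeneous coordinates and dehomogenize.
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Hypothetical corners of a perspectively distorted facade (pixels)...
src = [(120, 80), (610, 140), (590, 470), (100, 430)]
# ...rectified to a fronto-parallel rectangle; the 400 x 300 target
# stands in for a 4 m x 3 m plane measured by view metrology.
dst = [(0, 0), (400, 0), (400, 300), (0, 300)]

H = homography_from_corners(src, dst)
print(warp_point(H, (120, 80)))  # maps the first corner onto (0, 0)
```

Warping the whole image with this H (rather than a single point) is what removes the perspective distortion before texture mapping.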
The other solution is manual generation, which requires the user to have prior knowledge of the object as well as engineering skills; generating thousands of models this way is time consuming. Hence, there is a need for generating 3D views with less user interaction.

2. Problem Statement
The main objective of 3D modelling is to computationally understand and model the 3D scene from captured images, providing machines with a human-like visual system. The approach begins by capturing a 2D image with a calibrated camera. Captured images are usually perspectively distorted, and these distortions are eliminated to recover the true dimensions of the image. A 3D model is then constructed in the Virtual Reality Modelling Language (VRML) using the true dimensions; VRML is a tool that supports visual representation of the 3D model. The perspectively corrected images are mapped onto the corresponding planes of the VRML model to create a realistic effect. VRML supports walk-through simulation in 3D space, so the user can navigate the model as required.

3. Existing Methodology
Most existing methods focus on generating 3D views from stereo images, from sequences of monocular images, or from a combination of both. The approaches differ in cost, processing time, and the amount of user interaction needed to generate 3D models. One existing approach presented 3D surface reconstruction from a sequence of images: structure from motion (SFM) is used to perform automatic calibration, and a depth map is obtained by applying multi-view stereoscopic depth estimation to each calibrated image. For rendering, two texture-based techniques are used: view-dependent geometry and texture (VDGT) and multiple local methods (MLM). Another approach used the Potemkin model to support reconstruction of the 3D shapes of object instances, storing differently oriented 3D shape primitives at fixed 3D positions in a class model.
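The VRML construction step from the problem statement can be sketched as generating a minimal VRML97 scene: one box sized to the metric dimensions recovered by metrology, with a perspectively corrected photograph texture-mapped onto it. The helper name, the dimensions, and the texture filename are illustrative assumptions, not the paper's actual tooling.

```python
def vrml_textured_box(width, height, depth, texture_url):
    # Emit a minimal VRML97 scene graph: a Shape node whose Appearance
    # carries an ImageTexture (the rectified photo) and whose geometry
    # is a Box scaled to the true dimensions from view metrology.
    return (
        "#VRML V2.0 utf8\n"
        "Shape {\n"
        "  appearance Appearance {\n"
        f'    texture ImageTexture {{ url "{texture_url}" }}\n'
        "  }\n"
        f"  geometry Box {{ size {width} {height} {depth} }}\n"
        "}\n"
    )

# Hypothetical 4 m x 3 m x 0.2 m wall with its rectified photo texture.
scene = vrml_textured_box(4.0, 3.0, 0.2, "facade.jpg")
print(scene)
```

Saving this string as a `.wrl` file and opening it in a VRML viewer gives the textured model that the walk-through simulation then navigates.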
They label each image of a class for learning. A 2D view-specific recognition system returns the bounding box of the detected object in an image. A model-based segmentation method is then used to obtain the object contour, and from that outline the individual parts of the model are obtained. A shape context algorithm matches and deforms the boundaries of the stored part-labelled image to the detected instance, thus it generated 3D