Creating See-Around Scenes using Panorama Stitching

Saja Alferidah, King Faisal University, Alahsa, Saudi Arabia. Email: saja.alferidah@gmail.com
Nora A. Alkhaldi, King Faisal University, Alahsa, Saudi Arabia. Email: nalkhaldi@kfu.edu.sa

Abstract—Image stitching refers to the process of combining multiple images of the same scene to produce a single high-resolution image, known as panorama stitching. The aim of this paper is to produce a high-quality stitched panorama image with less computation time. This is achieved by proposing four combinations of algorithms. The first combination includes the FAST corner detector, Brute-Force K-Nearest Neighbor (KNN) matching, and Random Sample Consensus (RANSAC). The second combination includes FAST, Brute-Force KNN, and Progressive Sample Consensus (PROSAC). The third combination includes ORB, Brute-Force KNN, and RANSAC. The fourth combination includes ORB, Brute-Force KNN, and PROSAC. Each combination then involves the calculation of a transformation matrix. The results demonstrated that the fourth combination produced a panoramic image with the highest performance and better quality compared to the other combinations. The processing time is reduced by 67% for the third combination and by 68% for the fourth combination compared to the state-of-the-art.

I. INTRODUCTION

The study of panoramic imaging is one of the advanced research topics in the fields of computer vision, graphics, and image processing [1]. Panorama stitching is performed when two or more images of the same scene are taken by rotating a camera about its axis. As a result of this process, a wider panorama image is created by overlapping the common contents of each component image [2]. In 1997, Szeliski and Shum defined creating a larger panorama image as integrating and overlapping the common contents of two or more images of the same scene taken by rotating the camera about its axis. In 2017, Wand et al.
defined panorama stitching as taking multiple images with an overlapping area and stitching them together into a single wide image [3][4]. In 2015, Hee-kyeong Jeon et al. classified the panorama stitching process into three core steps: detecting features, matching them, and stitching [2]. Early panorama images were created by sliding a slit-shaped aperture across a photographic film. The digital approach of today extracts thin, vertical strips of pixels from the frames of a sequence captured by a translating video camera. The resulting image is considered multi-viewpoint (or multi-perspective), because different strips of the image are captured from different viewpoints [4]. Strip panoramas created from a translating camera have many variants, such as "pushbroom panoramas" [5], "adaptive manifolds" [6], and "x-slit" images [7]. In contrast to this hardware-based approach, many researchers have explored multi-perspective renderings of 3D models [8][9]. Yu and McMillan presented a model that describes a multi-perspective camera [10]. Panoramic image stitching is used in a variety of environments, including gaming, virtual reality, virtual museums, and map applications [11]. Microsoft Research, for example, invests in research projects featuring panorama stitching techniques, and many algorithms have been designed to efficiently facilitate the creation of panoramic images through stitching [12][13].

II. BACKGROUND

Most researchers classify panorama stitching as either a direct technique or a feature-based technique [11][14]. The direct technique compares images pixel by pixel, whereas the feature-based technique compares the features extracted from each image [14]. This paper applies the feature-based technique, as it is more advanced, faster, and more flexible than the direct technique. Producing a stitched panorama from two or more images of the same object is divided into three steps.
First, the process discovers the points of interest shared between several images (the keypoints) and extracts vector features around each of these points of interest (the descriptors). Second, it identifies the matches between the images using the extracted features, then keeps the correct matches and removes the incorrect ones. Third, it finds the transformation matrix that best maps the keypoints of one image onto those of the other, and uses this transformation to align the two images before merging. Panorama stitching is considered from two perspectives. The first is camera rotation, where images are acquired with the camera positioned at the same point while being rotated to provide multiple views of the same object. The second is camera translation, where the camera is not fixed at one position but is moved through a linear translation to capture the second image. This paper focuses on the second perspective, where two images of the same scene are taken with a slight linear displacement. Consider a car moving towards an intersection with a large building on the corner obstructing its view. If an image is taken from a point ahead of its current position and stitched with another image taken at its current position, such that the integrated image shows the two overlapped as a semi-transparent view, then this image can enable drivers to have a partial view of the scene behind the building. This work helps to create a vision

Proceedings of the Federated Conference on Computer Science and Information Systems, pp. 293–301
DOI: 10.15439/2019F282
ISSN 2300-5963, ACSIS, Vol. 18
IEEE Catalog Number: CFP1985N-ART
© 2019, PTI
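The third step of the pipeline described above, estimating a transformation matrix from noisy keypoint matches, can be sketched in pure NumPy. This is a minimal illustration, not the paper's implementation: the function names are the author's own, the detector (FAST/ORB) and Brute-Force KNN matcher are omitted, and correspondences are assumed to be already available as two arrays of 2-D points. The homography is fit with the standard Direct Linear Transform, and RANSAC separates correct matches (inliers) from incorrect ones (outliers), exactly the role it plays in the combinations compared in this paper.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography from >= 4 point pairs via the
    Direct Linear Transform: stack two equations per pair into A
    and take the singular vector of A with the smallest singular value."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def ransac_homography(src, dst, iters=500, thresh=3.0, seed=None):
    """RANSAC: repeatedly fit a homography on 4 random correspondences,
    keep the model with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    src_h = np.column_stack([src, np.ones(len(src))])  # homogeneous coords
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = src_h @ H.T
        proj = proj[:, :2] / proj[:, 2:3]  # back from homogeneous to 2-D
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

PROSAC, used in the second and fourth combinations, modifies only the sampling step: instead of drawing the 4 correspondences uniformly at random, it draws them preferentially from the matches ranked best by the matcher, which is why it tends to converge in fewer iterations than plain RANSAC.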