Vision-Based Shipwreck Mapping: on Evaluating Features Quality and Open Source State Estimation Packages

A. Quattrini Li, A. Coskun, S. M. Doherty, S. Ghasemlou, A. S. Jagtap, M. Modasshir, S. Rahman, A. Singh, M. Xanthidis, J. M. O'Kane and I. Rekleitis
Computer Science and Engineering Department, University of South Carolina
Email: [albertoq,yiannisr,jokane]@cse.sc.edu, [acoskun,dohertsm,sherving,ajagtap,modasshm,srahman,akanksha,mariosx]@email.sc.edu

Abstract—Historical shipwrecks are important for many reasons, including historical, touristic, and environmental ones. Currently, limited efforts to construct accurate models are performed by divers who need to take measurements manually using a grid and measuring tape, or using handheld sensors. A commercial product, Google Street View^1, contains underwater panoramas from select locations around the planet, including a few shipwrecks, such as the SS Antilla in Aruba and the Yongala at the Great Barrier Reef. However, these panoramas contain no geometric information, and thus no 3D representations of these wrecks are available. This paper provides, first, an evaluation of visual feature quality on datasets that span from indoor to underwater environments. Second, by testing several open-source vision-based state estimation packages on different shipwreck datasets, it presents insights on open challenges for shipwreck mapping. Some good practices for replicable results are also discussed.

I. INTRODUCTION

Historical shipwrecks tell an important part of history and at the same time hold a special allure for most humans, as exemplified by the plethora of movies and artworks about the Titanic. Shipwrecks are also among the top scuba diving attractions all over the world; see Fig. 1. Many historical shipwrecks are deteriorating due to warm salt water, human interference, and extreme weather (frequent tropical storms).
Constructing accurate models of these sites would be extremely valuable, not only for the historical study of the shipwrecks but also for monitoring subsequent deterioration. Currently, limited mapping efforts are performed by divers who need to take measurements manually using a grid and measuring tape, or using handheld sensors [1], a tedious, slow, and sometimes dangerous task. Automating such a task with underwater robots equipped with cameras, such as Aqua [2], would be extremely valuable. Some attempts have used underwater vehicles with expensive setups, e.g., Remotely Operated Vehicles (ROVs) [3], [4].

Autonomous mapping using visual data has received a lot of attention in the last decade, resulting in many published research papers and open source packages, supported by impressive demonstrations. However, applying any of these packages to a new dataset has proven extremely challenging because of two main factors: software engineering challenges, such as lack of documentation, compilation issues, and dependencies; and algorithmic limitations, e.g., special initialization motions for monocular cameras, and the number of and sensitivity to parameters [5]. Moreover, most of these packages are developed and tested with urban settings in mind.

This paper first analyzes different feature detectors and descriptors on several datasets taken from indoor, urban, and underwater domains. Second, several open source packages for visual SLAM are evaluated. The main contribution of this paper is to provide, based on this evaluation, insights on the open challenges in shipwreck mapping, so that they can be taken into consideration when designing new mapping algorithms.

The next section discusses research on shipwreck mapping. Section III presents an analysis of visual feature quality. Section IV shows qualitative results of some visual SLAM algorithms.

^1 https://www.google.com/maps/streetview/#oceans

Fig. 1. Aqua robot at the Pamir shipwreck, Barbados.
Finally, Section V concludes the paper by discussing some insights gained from this evaluation.

II. RELATED WORK

Different technologies have been used to survey shipwreck areas, including ROVs, AUVs, and diver-held sensors. Nornes et al. [6] acquired stereo images from an ROV off the coast of