Planetary Rover Visual Motion Estimation Improvement for Autonomous, Intelligent, and Robust Guidance, Navigation and Control

J. Nsasi Bakambu *, C. Langley *, G. Pushpanathan *, W. James MacLean *, R. Mukherji *, E. Dupuis **

* MDA Corporation, Brampton, Ontario, Canada
e-mail: {joseph.bakambu, chris.langley, giri.pushpanathan, raja.mukherji}@mdacorporation.com; james.maclean@utoronto.ca

** Space Exploration, Canadian Space Agency, Saint-Hubert, Quebec, Canada
e-mail: Erick.dupuis@asc-csa.gc.ca

Abstract

This paper presents the Mojave Desert field test results of an improved planetary rover visual motion estimation (VME) technique for the Autonomous, Intelligent, and Robust Guidance, Navigation, and Control for Planetary Rovers (AIR-GNC) project. The main improvements are the optimal use of different features from stereo-pair images as visual landmarks, and the use of VME-based feedback to close the path-tracking loop. In addition, a long-range, wide field-of-view (FOV) active 3D sensor was used to extract long-range fixed landmarks, ensuring the observability of the motion estimate and thus improving the accuracy of the VME. The field test, conducted on relevant Mars-like terrain under dramatically changing weather and lighting conditions, showed good localization accuracy on average. Moreover, the MDA-developed enhanced IMU-corrected odometry was reliable and accurate in all test locations, including loose sand dunes. These results are based on data collected over 7.3 km of traverses, under both fully autonomous and tele-operated control.

1 Introduction

One of the continuing challenges for future unmanned exploration rover missions on the Moon and Mars is the ability to accurately estimate the rover's position and orientation. In the absence of an equivalent to the Global Positioning System on Earth, designers of future systems must exploit a combination of vehicle odometry, inertial sensing, and visual information to obtain this localization information. Future missions such as ESA's ExoMars and NASA's Mars Science Laboratory rover require the rover to autonomously traverse hundreds of meters to one kilometer daily at speeds of up to 100 m/h [1]. Equally challenging is the need to localize the rover to an accuracy of between one and four percent of distance traveled. Accurate localization is arguably the most fundamental competence required for long-range autonomous navigation. In this context, MDA Space Missions and the Canadian Space Agency (CSA) have embarked on a project to further their capability for visual motion estimation of planetary rovers, which is described in this paper.

VME algorithms have recently attracted considerable interest from the planetary exploration rover community as a solution for accurate localization. On the Mars Exploration Rovers (MERs), VME was not considered part of the main localization system, but was shown to work relatively well. In [2], JPL reported that "Visual Odometry software has enabled precision drives over distances as long as 8 m on slopes greater than 20 degrees, and has made it possible to safely traverse the loose sandy plains of Meridiani". The algorithm works by tracking features (Harris corners) in a stereo image pair from one frame to the next (frame-to-frame). Thus, the problem is one of determining "the change in position and attitude for two pairs of stereo images by propagating uncertainty in a 3D to 3D pose estimation formulation using maximum likelihood estimation".
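To make the frame-to-frame formulation concrete, the sketch below illustrates the core 3D-to-3D step: given landmark positions triangulated from two successive stereo pairs and matched between frames, recover the rigid motion (R, t) of the rover. This is a minimal illustration only, not the MDA or JPL flight code; it uses the standard closed-form SVD (Horn/Kabsch) solution, and the function name and synthetic data are assumptions for the example. The maximum-likelihood weighting by triangulation covariance described in [2] is omitted for brevity.

```python
# Minimal sketch of frame-to-frame 3D-to-3D pose estimation for stereo
# visual odometry. Matched 3D points from the previous and current stereo
# pairs are aligned by the least-squares rigid transform (Horn/Kabsch).
# NOTE: illustrative only; the published JPL estimator additionally
# propagates per-point triangulation covariance in a maximum-likelihood
# formulation, which this sketch does not implement.
import numpy as np

def estimate_rigid_motion(prev_pts, curr_pts):
    """Least-squares rigid transform such that curr ~= R @ prev + t.

    prev_pts, curr_pts: (N, 3) arrays of matched 3D points
    (N >= 3, non-degenerate). Returns (R, t) with det(R) = +1.
    """
    # Centroids of the two point clouds.
    mu_p = prev_pts.mean(axis=0)
    mu_c = curr_pts.mean(axis=0)
    # Cross-covariance of the centered clouds.
    H = (prev_pts - mu_p).T @ (curr_pts - mu_c)
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so the result is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_c - R @ mu_p
    return R, t

# Usage: incremental motions like this one are chained frame to frame
# to accumulate the rover pose estimate.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.uniform(-5.0, 5.0, size=(50, 3))   # triangulated landmarks
    theta = np.deg2rad(3.0)                       # small yaw between frames
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([0.10, 0.02, 0.0])          # ~10 cm forward step
    curr = prev @ R_true.T + t_true + rng.normal(0.0, 0.005, prev.shape)
    R_est, t_est = estimate_rigid_motion(prev, curr)
    cos_err = np.clip((np.trace(R_est.T @ R_true) - 1.0) / 2.0, -1.0, 1.0)
    print("rotation error (deg):", np.degrees(np.arccos(cos_err)))
    print("translation error (m):", np.linalg.norm(t_est - t_true))
```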
The evaluation tests conducted at the JPL Marsyard and in Johnson Valley, California showed absolute position errors of less than 2.5% over the 24 m Marsyard course and less than 1.5% over the 29 m Johnson Valley course; the rotation error was less than 5.0 degrees in each case. The VME developed at LAAS/CNRS (Laboratoire d'Analyse et d'Architecture des Systèmes / Centre National de la Recherche Scientifique) is likewise based on frame-to-frame pixel tracking. Landmarks are extracted from the images by finding interest points identified through image intensity gradients. Tests with the Lama rover [3] showed an error of 4% on a 25 m traverse; after the algorithm was improved in [4], the overall error was held to 2% over a longer 70 m traverse. A survey of the literature [5][6][7] shows that the