Johns Hopkins APL Technical Digest, Volume 33, Number 4 (2017), www.jhuapl.edu/techdigest

Delivering Test and Evaluation Tools for Autonomous Unmanned Vehicles to the Fleet

Galen E. Mullins, Paul G. Stankiewicz, R. Chad Hawthorne, Jordan D. Appler, Michael H. Biggins, Kevin Chiou, Melissa A. Huntley, Johan D. Stewart, and Adam S. Watkins

ABSTRACT

The Johns Hopkins University Applied Physics Laboratory (APL) is working to develop the next generation of test and evaluation (T&E) tools for maritime, air, and ground autonomous systems. Advances in autonomy on unmanned vehicles are outpacing test ranges' capability for effective T&E of these systems. DoD test ranges face the challenge of validating increasingly complex systems with only a very limited number of live tests. APL is performing research and development to help solve the cost and reliability issues associated with on-range T&E of autonomous systems. Using advanced optimization techniques to intelligently explore the highly complex state space in which autonomous systems operate, the Range Adversarial Planning Tool (RAPT) team is developing tools that allow test ranges to identify the most relevant tests for the full scope of maritime, airborne, and ground-based autonomous systems. The principal challenge in testing and evaluating autonomy is addressing the complex, NP-hard (non-deterministic polynomial-time hard) interactions between the autonomous system and its environment. Decomposing the problem only exacerbates the situation by producing an intractable set of combinations of mission conditions and internal states of the autonomous system. Therefore, autonomy can be evaluated only with a precise understanding of the interactions between the autonomous vehicle and the environment, enabling delineation of which situations are effective from a T&E perspective and which are not.
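The abstract's core idea, searching a large scenario space to find the small set of tests most worth running on a live range, can be illustrated with a minimal sketch. This is not the RAPT algorithm itself; it assumes a hypothetical black-box performance function (`vehicle_performance`) and uses plain random sampling, keeping the scenarios whose scores fall nearest a pass/fail threshold, since scenarios on that boundary are the most informative ones to test.

```python
import random


def vehicle_performance(scenario):
    """Hypothetical stand-in for a black-box autonomy simulation.

    Returns a performance score in [0, 1]; in this toy model the
    simulated vehicle degrades as obstacle density and current
    speed increase.
    """
    obstacle_density, current_speed = scenario
    return max(0.0, 1.0 - 0.6 * obstacle_density - 0.5 * current_speed)


def find_boundary_scenarios(n_samples=2000, threshold=0.5, n_keep=10, seed=0):
    """Sample random scenarios and keep the n_keep scenarios whose
    performance is closest to the pass/fail threshold."""
    rng = random.Random(seed)
    scored = []
    for _ in range(n_samples):
        scenario = (rng.random(), rng.random())  # (obstacle density, current speed)
        score = vehicle_performance(scenario)
        # Distance from the threshold measures how informative the test is:
        # scenarios right on the boundary discriminate pass from fail.
        scored.append((abs(score - threshold), scenario))
    scored.sort(key=lambda pair: pair[0])
    return [scenario for _, scenario in scored[:n_keep]]


boundary_tests = find_boundary_scenarios()
```

In practice the search would replace random sampling with the optimization techniques the article describes, but the selection criterion, preferring scenarios near the performance boundary, is the same shape of idea.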
INTRODUCTION

Designing test scenarios for validation and verification of autonomous vehicles is currently an expensive and involved process. Testing requires the input of subject-matter experts who are thoroughly versed in the behaviors of both the platform and the autonomous decision engine under test. Performing live tests is also very time consuming, which severely limits the number of tests that can be performed. In this article, we introduce a new method for intelligently generating test scenarios that inform testers on the expected performance of the autonomous system.

Our approach to designing informative test cases differs from recent work in validating autonomous systems. We are not focusing on fault detection1 or model checking2 of the underlying decision engine. Instead of modeling the underlying behavior of a black-box