Advances in Cognitive Systems (2016) 1–16    Submitted 4/2016; published 6/2016

Goals, Utilities, and Mental Simulation in Continuous Planning

Pat Langley    PATRICK.W.LANGLEY@GMAIL.COM
Mike Barley    MBAR098@CS.AUCKLAND.AC.NZ
Ben Meadows    BMEA011@AUCKLANDUNI.AC.NZ
Department of Computer Science, University of Auckland, Private Bag 92019, Auckland 1142 NZ

Dongkyu Choi    DONGKYUC@KU.EDU
Department of Aerospace Engineering, University of Kansas, Lawrence, KS 66045 USA

Edward P. Katz    E.P.KATZ@IEEE.ORG
Silicon Valley Campus, Carnegie Mellon University, Moffett Field, CA 94035 USA

Abstract

Like humans, autonomous agents will need to operate in physical settings that involve competing objectives which may be incompatible and vary in importance over time. They will also need to reason about both qualitative relations and quantitative attributes to produce behavior that is appropriate to the situation. In this paper, we report PUG, a problem-solving architecture that combines conceptual inference with plan generation, encodes both relational and quantitative content, and integrates symbolic goals with numeric utilities. Mental simulation plays a key role in evaluating operators, guiding search, and determining when a plan is acceptable. We describe the system's representational assumptions and the mechanisms that operate over them, after which we demonstrate its behavior in settings that involve conflicting goals with utilities that vary over time. In closing, we discuss related work and outline our plans for future research.

1. Background and Motivation

There is general agreement that autonomous agents have the potential to aid society in many ways. We will say that a system is autonomous if it operates over time, responds adaptively to its situation, and, although it may interact with others, decides for itself which actions to take, which goals to pursue, and how to allocate its physical and cognitive resources.
Humans clearly exhibit substantial autonomy, and attempting to model this ability in computational terms, at least in its high-level features, offers a promising path toward replicating it in machines.

Consider a scenario in which a robotic agent with a number of goals pursues an extended planetary mission. Some goals involve achieving desired situations, such as depositing sensors at specific target sites, while others revolve around maintaining certain conditions, such as having enough fuel, or avoiding others, such as dangerous areas. A complex mission will involve many such goals, some even mutually exclusive, so that the agent must decide which ones to pursue. Moreover, these goals may only be active under some conditions, and they may have different values at different times. We desire a computational theory that supports all of these abilities.

© 2016 Cognitive Systems Foundation. All rights reserved.