Embodying a Cognitive Model in a Mobile Robot

D. Paul Benjamin a, Damian Lyons b, Deryle Lonsdale c

a Pace University Computer Science Department, 1 Pace Plaza, New York, New York 10038;
b Fordham University Department of Computer & Information Science, 340 JMH, 441 E. Fordham Rd., Bronx, New York 10458;
c Brigham Young University Department of Linguistics and English Language, Provo, Utah 84602

ABSTRACT

The ADAPT project is a collaboration of researchers in robotics, linguistics and artificial intelligence at three universities to create a cognitive architecture specifically designed to be embodied in a mobile robot. Existing cognitive architectures are inadequate for robot cognition in major respects; in particular, they lack support for true concurrency and for active perception. ADAPT addresses these deficiencies by modeling the world as a network of concurrent schemas and by modeling perception as problem solving. Schemas are represented in the RS (Robot Schemas) language and are activated by spreading activation. RS provides a powerful language for distributed control of concurrent processes, and its formal semantics provides the basis for the semantics of ADAPT's use of natural language. We have implemented the RS language in Soar, a mature cognitive architecture originally developed at CMU and used at a number of universities and companies. Soar's subgoaling and learning capabilities enable ADAPT to manage the complexity of its environment and to learn new schemas from experience. We describe the issues faced in developing an embodied cognitive architecture, and our implementation choices.

Keywords: Mobile robotics, cognitive architecture, predictive vision, learning, robot schemas, ADAPT

1. INTRODUCTION: EMBODIED COGNITION, REFORMULATION AND ROBOTICS

There is a growing body of scientific work based on the belief that the mind must be understood in terms of controlling a physical body acting in the real world.
This belief, often referred to as embodied cognition, strongly contrasts with the belief that the mind is an abstract computer. The abstract computational model of mind has attained some notable successes in very specific tasks such as chess, but has not done as well in robotics: robots have a great deal of difficulty understanding their environment. Cognitive science has constructed cognitive architectures that model reasoning and learning in individual tasks and that reason about spatial and temporal relationships in simulated domains. We believe that it is time to embed a cognitive model in the physical world and test these reasoning and learning mechanisms against the full complexity of the real world.

The ADAPT project (Adaptive Dynamics and Active Perception for Thought) is a collaboration of three university research groups at Pace University, Brigham Young University, and Fordham University to produce a robot cognitive architecture that integrates the structures designed by cognitive scientists with those developed by robotics researchers for real-time perception and control. Our goal is to create a new kind of robot architecture capable of robust behavior in unstructured environments, exhibiting problem-solving and planning skills, learning from experience, novel methods of perception, and comprehension and generation of natural language and speech.

The current generation of behavior-based robots is programmed directly for each task. The programs are written in a way that uses as few built-in cognitive assumptions as possible, and as much sensory information as possible. The lack of cognitive assumptions gives them a certain robustness and generality in dealing with unstructured environments. However, it is proving a challenge to extend the competence of such systems beyond navigation and some simple tasks [46].
Complex tasks that involve reasoning about spatial and temporal relationships require robots to possess more advanced mechanisms for planning, reasoning, learning and representation.
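To make the schema-network idea above concrete, the following is a minimal sketch of spreading activation over a small network of schemas. All names here (`Schema`, `spread`, the weights, and the door-passing example) are illustrative assumptions for exposition, not ADAPT's or RS's actual API:

```python
# Minimal spreading-activation sketch over a schema network.
# Hypothetical classes and parameters, chosen only to illustrate the mechanism.

class Schema:
    def __init__(self, name, activation=0.0):
        self.name = name
        self.activation = activation
        self.links = {}  # neighbor Schema -> link weight

    def link(self, other, weight):
        self.links[other] = weight

def spread(schemas, decay=0.5, threshold=0.2):
    """One round of spreading activation.

    Each schema passes activation to its neighbors in proportion to the
    link weight; its own activation decays. Returns the names of schemas
    whose activation reaches the threshold (i.e., the "active" schemas).
    """
    incoming = {s: 0.0 for s in schemas}
    for s in schemas:
        for neighbor, w in s.links.items():
            incoming[neighbor] += s.activation * w
    for s in schemas:
        s.activation = decay * s.activation + incoming[s]
    return [s.name for s in schemas if s.activation >= threshold]

# Example: perceiving a doorway strongly primes a "pass-through" motor
# schema and only weakly primes an obstacle-avoidance schema.
see_door = Schema("see-door", activation=1.0)
pass_through = Schema("pass-through")
avoid = Schema("avoid-obstacle")
see_door.link(pass_through, 0.8)
see_door.link(avoid, 0.1)

active = spread([see_door, pass_through, avoid])
```

In this toy run, activation flows from the perceptual schema to the strongly linked motor schema, which crosses the threshold, while the weakly linked schema does not; repeated rounds of `spread` would let activation propagate further through a larger network.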