Abstract— A neural modeling platform known as Cog ex Machina [1] (Cog), developed in the context of the DARPA SyNAPSE [2] program, offers a computational environment that promises, in the foreseeable future, the creation of adaptive whole-brain systems subserving complex behavioral functions in virtual and robotic agents. Cog is designed to operate on low-powered, extremely storage-dense memristive hardware [3] that would support massively parallel, scalable computations. We report an adaptive robotic agent, ViGuAR [4], that we developed as a neural model implemented on the Cog platform. The neuromorphic architecture of the ViGuAR brain is designed to support visually-guided navigation and learning, which, in combination with the path-planning, memory-driven navigation agent MoNETA [5], also developed at the Neuromorphics Lab at Boston University, should effectively account for a wide range of key features of rodents' navigational behavior.

I. INTRODUCTION

The ability to search the environment for features consistent with behavioral objectives, to approach a selected target, or to avoid an identified obstacle is critical for an animal's survival. The same abilities are equally important for robots, since visual search and visually-guided navigation are essential elements of most tasks robots perform. Animals are highly effective at performing navigational tasks in unknown environments; they are capable of orienting by landmarks, planning paths, and approaching objects they like or avoiding objects they have already visited, depending on their behavioral objectives [6].
All these tasks are highly relevant to robots, and therefore the principles guiding navigational behavior in animals, as well as the neural architecture of the circuits involved in navigation, are of great interest to neuromorphic engineering. Visually-guided search and place-recognition-triggered response are the first two types in a hierarchical typology of navigation behavior [7]. Both types of navigation are used when an object can be perceived, but the latter requires some form of cognitive map to represent an association between the object and a location, which the animal must learn. The purpose of this project was to model these types of navigation on the Cog [1] platform.

Manuscript received February 10, 2011. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense. This work was funded in part by the DARPA SyNAPSE program, contract HR0011-09-3-0001. Massimiliano Versace, Ennio Mingolla, Heather Ames, and Ben Chandler were also supported in part by CELEST, a National Science Foundation Science of Learning Center (NSF SBE-0354378 and NSF OMA-0835976). G. L. (corresponding author) is with the Department of Cognitive and Neural Systems, Boston University, Boston, MA 02215 (e-mail: glivitz@gmail.com). Z. V. is with Harvard Medical School. H. A., B. C., A. G., J. L., M. V., and E. M. are with the Department of Cognitive and Neural Systems, Boston University, Boston, MA 02215. G. S., R. A., D. C., H. A., and M. S. Q. are with Hewlett-Packard Laboratories.
Cog is designed to operate on low-powered, extremely storage-dense memristive hardware; however, its hardware abstraction layer makes development on the Cog platform independent of the underlying hardware, and thus suitable for developing robotic applications based on GPUs or any other computational platform currently available. Cog is a distributed, massively parallel software platform for running neuromorphic applications. Cog uses tensor-transformation concepts to describe the processing that takes place in its computational nodes, where each thread transforms multiple input tensors into a single output vector (fiber). Cog adopts a completely deterministic model of data processing that enforces globally synchronous computations. Cog has been successfully used to model learning and navigation behavior in virtual environments [5]. However, the ability to use sensory information, to learn, and to deal with the physical properties of the real world creates unique challenges both for modeling the algorithms involved in navigation and for modeling environments. Addressing these challenges is the objective of the ViGuAR project.

II. COMPONENTS OF THE VIGUAR BRAIN ARCHITECTURE

The architecture shown in Fig. 1 constitutes the robotic brain of an iRobot Create that learns to associate the color of an object with a reward while living in a world populated by red and green cylindrical objects of fixed size (Fig. 2). The robot receives its visual input from a netbook's webcam. The netbook is aligned with the robot's head direction and connected to the robot via a serial port. The serial-port interface is used to send motor commands and to receive sensory (touch) events from the robot's front bumpers. Figure 1 shows the data flow that transforms the sensory information (visual and touch) into motor behavior (Motor Output), which results in the robot approaching or avoiding an object. A brief description of the functional components of the ViGuAR brain architecture follows.
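To make the computational model concrete, the following Python sketch illustrates the deterministic, globally synchronous tensor-transformation scheme described above. All names (`Node`, `step`, the wiring dictionary) are illustrative assumptions for exposition, not the actual Cog API: each node combines several input tensors into a single output fiber, and all nodes commit their new outputs in lockstep, so a downstream node sees an upstream result only on the following tick.

```python
import numpy as np

# Illustrative sketch of a globally synchronous tensor-transformation
# network (names are hypothetical, not the real Cog API).

class Node:
    """A computational node with one weight matrix per input tensor."""
    def __init__(self, weights):
        self.weights = weights
        self.output = np.zeros(weights[0].shape[0])  # output fiber

    def compute(self, inputs):
        # Transform multiple input tensors into a single output vector.
        return sum(w @ x for w, x in zip(self.weights, inputs))

def step(nodes, wiring, external):
    """One global tick: read all tick-t outputs, then commit tick t+1."""
    new = [n.compute([src.output if isinstance(src, Node) else external[src]
                      for src in wiring[n]])
           for n in nodes]
    for n, out in zip(nodes, new):  # commit all outputs at once
        n.output = out

# Two nodes in a chain: b sees a's output only one tick later.
a = Node([np.eye(2)])
b = Node([2 * np.eye(2)])
wiring = {a: ["in"], b: [a]}
external = {"in": np.array([1.0, 2.0])}
step([a, b], wiring, external)  # a updates; b still reads a's old zeros
step([a, b], wiring, external)  # now b reflects a's previous output
```

Because every node reads the previous tick's outputs before any node commits, the result is independent of the order in which nodes are evaluated, which is what makes the computation fully deterministic.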
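The sensory-to-motor data flow described above can also be sketched at a high level. The code below is a minimal, hypothetical illustration (not the actual ViGuAR neural implementation): the agent classifies an object's color, consults a learned color-reward association to choose an approach or avoid response, and updates that association with a delta rule when a bumper (touch) event delivers a reward signal.

```python
# Hypothetical sketch of the ViGuAR sensory-motor loop; the class and
# learning rule are illustrative, not the paper's neural model.

APPROACH, AVOID = "approach", "avoid"

class ViGuARLoop:
    def __init__(self, lr=0.5):
        self.value = {"red": 0.0, "green": 0.0}  # learned reward estimates
        self.lr = lr
        self.last_color = None

    def see(self, color):
        """Visual input: pick a motor command from the learned value."""
        self.last_color = color
        return APPROACH if self.value[color] >= 0.0 else AVOID

    def bump(self, reward):
        """Touch event: update the color-reward association (delta rule)."""
        c = self.last_color
        self.value[c] += self.lr * (reward - self.value[c])

loop = ViGuARLoop()
loop.see("red")
loop.bump(-1.0)   # touching the red object was punished
loop.see("red")   # red is now avoided; green is still approached
```

In the actual architecture this mapping is realized by the neural data flow of Fig. 1 rather than by an explicit lookup table; the sketch only conveys the behavioral contract between the visual, touch, and motor components.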
Visually-Guided Adaptive Robot (ViGuAR)

Gennady Livitz, Heather Ames, Ben Chandler, Anatoli Gorchetchnikov, Jasmin Léveillé, Zlatko Vasilkoski, Massimiliano Versace, Ennio Mingolla, Greg Snider, Rick Amerson, Dick Carter, Hisham Abdalla, and Muhammad Shakeel Qureshi