A Framework for Integrating Symbolic and Sub-symbolic Representations

Keith Clark (Imperial College), Bernhard Hengst (University of NSW), Maurice Pagnucco (University of NSW), David Rajaratnam (University of NSW), Peter Robinson (University of Queensland), Claude Sammut (University of NSW), Michael Thielscher (University of NSW)

Abstract

This paper establishes a framework that hierarchically integrates symbolic and sub-symbolic representations in an architecture for cognitive robotics. It is formalised abstractly as nodes in a hierarchy, with each node a sub-task that maintains its own belief state and generates behaviour. An instantiation is developed for a real robot building towers of blocks, subject to human interference; this hierarchy uses a node with a concurrent multi-tasking teleo-reactive program, a node embedding a physics simulator to provide spatial knowledge, and nodes for sensor processing and robot control.

1 Introduction

A physical symbol system as the sole basis for artificial intelligence has been criticised by many researchers. Allen Newell and Herbert Simon introduced the physical symbol system hypothesis (PSSH) [Newell and Simon, 1976], implying that human thinking is a kind of symbol manipulation process and that we can build machines to mimic human intelligence. Detractors include Rodney Brooks, who showed that robots with superior behaviour do not necessarily use higher-level symbols at all [Brooks, 1990]. More recently, the paradigm has shifted towards probabilistic robotics [Thrun et al., 2005]. Nilsson [2006] analyses some of the attacks against the PSSH and grants the need to supplement symbol systems with non-symbolic processes in intelligent systems, mostly for perceptual and motor activities close to the environment.

Our architecture for cognitive robotics accommodates both symbolic and sub-symbolic representations. The architecture comprises nodes operating at different spatial and temporal scales, linked in a hierarchy.
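To make the node abstraction concrete, the following is a minimal sketch (the class, rule, and belief names are illustrative assumptions of ours, not the paper's formalisation): a node maintains a belief state, revises it from incoming percepts, and generates behaviour with an ordered teleo-reactive rule list in the style of Nilsson, executing the action of the first rule whose condition holds.

```python
class Node:
    """Illustrative sketch of one hierarchy node (names are ours, not the paper's)."""

    def __init__(self, rules):
        self.beliefs = {}   # abstract belief state for this sub-task
        self.rules = rules  # ordered (condition, action) pairs, checked top-down

    def update(self, percept):
        """Revise the belief state from a percept (here: a simple overwrite)."""
        self.beliefs.update(percept)

    def behave(self):
        """Teleo-reactive step: fire the action of the first rule whose condition holds."""
        for condition, action in self.rules:
            if condition(self.beliefs):
                return action(self.beliefs)
        return None

# A toy blocks-world sub-task: lift a held block, grasp a seen block, else scan.
rules = [
    (lambda b: b.get("holding"),    lambda b: "lift"),
    (lambda b: b.get("block_seen"), lambda b: "grasp"),
    (lambda b: True,                lambda b: "scan"),  # default rule
]

node = Node(rules)
node.update({"block_seen": True})
print(node.behave())  # -> grasp
node.update({"holding": True})
print(node.behave())  # -> lift
```

Because the rules are re-evaluated from the top on every cycle, the node reacts immediately if a percept invalidates an earlier condition, which is what makes the behaviour robust to interference.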
Each node is a kind of sub-task that maintains a belief state about an abstract representation of part of the robot's environment, and it generates behaviour based on that belief state. We are not proposing an expressive language that includes probabilities symbolically, but rather that nodes can use different representations to interconnect symbolic or probabilistic models and behaviour. The two main contributions of this paper are:

1. The formalisation of a general architecture for cognitive robotics and a proof that cyclic updates of the hierarchy of nodes are well defined.

2. The instantiation of the architecture with a Baxter robot[1] tasked to build multiple towers. The main features are:
   - A symbolic node with a concurrent multi-tasking extension of Nilsson's Teleo-Reactive (TR) rule-based robot agent programming language.
   - A spatial node using a rigid-body simulator acting as the "mind's eye" of the robot. The physics simulator introduces common-sense real-world spatial knowledge that would otherwise be cumbersome to represent in a formal symbol system.
   - Controller nodes that process robot sensory input and generate robot motor actions.

Figure 1: Baxter in blocks-world. The belief state is reflected in the "mind's eye" (physics simulator). A closeup of a block and the arm end-effector showing the co-location of the gripper and camera.

In the rest of this paper we position our robot architecture within related work and formalise the architectural framework using a motivating example. We instantiate the architecture with a real-world concrete example, namely a two-armed robot building towers in a blocks-world environment. Finally, we discuss robustness, limitations, and future work.

2 Related Work

Several cognitive architectures have been proposed.
Prominent ones include SOAR [Laird et al., 1987] and ACT-R [An-

[1] The Baxter robot is built by Rethink Robotics, a company founded by Rodney Brooks.