Using Anchor Points to Define and Transfer Spatial Regions Based on Context

Matthew Klenk, Palo Alto Research Center, Palo Alto, CA, matthew.klenk@parc.com
Nick Hawes, Intelligent Robotics Lab, University of Birmingham, UK, n.a.hawes@cs.bham.ac.uk
Kate Lockwood, ITCD Department, California State University - Monterey Bay, klockwood@csumb.edu
Graham S. Horn, Intelligent Robotics Lab, University of Birmingham, UK, gsh148@cs.bham.ac.uk
John D. Kelleher, Applied Intelligence Research Centre, Dublin Institute of Technology, john.d.kelleher@dit.ie

Abstract

In order to collaborate with people in the real world, AI systems must be able to represent and reason about spatial regions in human environments. Consider the command "go to the front of the classroom". The spatial region mentioned (the front of the classroom) is not perceivable using geometry alone. Instead, it is defined by its functional use, implied by nearby objects and their configuration. In this paper, we define such areas as context-dependent spatial regions and present a system able to learn them by combining qualitative spatial representations, semantic labels, and analogy. The system is capable of generating a collection of qualitative spatial representations describing the configuration of the entities it perceives in the world. It can then be taught context-dependent spatial regions using anchor points defined on these representations. We then demonstrate how an existing computational model of analogy can be used to detect context-dependent spatial regions in previously unseen rooms by transferring the necessary anchor points. To evaluate this process, we compare detected regions to annotations made on maps of real rooms by human volunteers.

1 Introduction

Consider a janitorial robot cleaning a classroom. While performing this task, it encounters a teacher working with a student.
The teacher tells the robot to "start at the front of the classroom", expecting it to go to the front of the classroom and begin cleaning that area. This response requires that the robot is able to determine the spatial region in the environment that satisfies this concept. Moving from the metric space of the robot's sensors to the symbolic space required for reasoning and language is an important problem for qualitative reasoning.

The ability to understand and reason about spatial regions is essential for AI systems performing tasks for humans in everyday environments. Some regions, such as whole rooms and corridors, are defined by clearly perceivable boundaries (e.g., walls and doors). However, many regions to which humans routinely refer are not so easily defined. Consider, for example, the aforementioned region the front of the classroom. This region is not perceivable using just the geometry of the environment. Instead, it is defined by the objects present in the room (chairs, a desk, a whiteboard), their role in this context (seats for students to watch a teacher who writes on the whiteboard), and their configuration in space (the seats point toward the whiteboard). We refer to such regions as context-dependent spatial regions (CDSRs).

Current AI systems are not capable of representing and reasoning about CDSRs, yet this is an important ability. If AI systems are to collaborate with humans in everyday environments, then they must be able to understand and refer to the same spatial regions humans do. Many regions are best defined in a context-dependent manner: for example, a kitchen in a studio apartment, an aisle in a church or store, or behind enemy lines in a military engagement. In order to represent and reason about such regions, cognitive systems must integrate different types of information, including geometric, semantic, and functional knowledge.
Qualitative representations provide a symbolic abstraction that is a natural method for integrating these different types of knowledge in reasoning tasks.

This paper presents an artificial cognitive system (specifically a mobile robot) able to represent and reason about CDSRs. Central to our approach is the use of anchor points: symbolic expressions which link conceptual entities (e.g., CDSRs) to perceived entities (e.g., objects in the environment). Our approach is founded on two assumptions. The first is that CDSRs can be defined using qualitative spatial representations (QSRs) computed from the system's sensor data (Cohn and Hazarika 2001). The second is that semantically and geometrically similar areas (e.g., two different classrooms) will feature similar CDSRs, and that these similarities can be recognised through analogy. The rest of the paper is structured following these assumptions. Section 2 describes how we generate QSRs from sensor data taken from an existing, state-of-the-art cognitive system and use these to define CDSRs with anchor points. Section 3 then describes how we use the structure-mapping model of analogy (Gentner 1983) to transfer a CDSR from a labelled example to a new situation. Section 4 presents a worked example of the entire process, and Section 5 evaluates our system against data from human subjects performing the same task.
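To make the idea concrete, the following sketch shows one minimal way a room, its qualitative facts, and the anchor points of a taught CDSR might be represented, and how anchors could be carried over to a semantically similar room. This is an illustration only, not the paper's implementation: all class, predicate, and object names are invented here, and the type-based matching below is a deliberate simplification of the structure-mapping analogy the system actually uses.

```python
# Illustrative sketch (assumed names throughout): rooms hold perceived objects
# with semantic labels plus qualitative spatial facts; a CDSR is defined by
# anchor points, i.e. the perceived objects that ground it.

from dataclasses import dataclass, field

@dataclass
class Room:
    name: str
    objects: dict = field(default_factory=dict)  # object id -> semantic label
    facts: set = field(default_factory=set)      # e.g. ("facing", "chair1", "wb1")

@dataclass
class CDSR:
    label: str
    anchors: list  # ids of the anchor-point objects in the source room

def transfer_cdsr(cdsr, source, target):
    """Stand-in for analogical transfer: map each anchor object onto an unused
    target object with the same semantic label. (The paper uses
    structure-mapping over the full relational structure; this is simpler.)"""
    mapping, used = {}, set()
    for obj in cdsr.anchors:
        label = source.objects[obj]
        candidates = [o for o, l in target.objects.items()
                      if l == label and o not in used]
        if not candidates:
            return None  # no consistent correspondence: transfer fails
        mapping[obj] = candidates[0]
        used.add(candidates[0])
    return CDSR(cdsr.label, [mapping[o] for o in cdsr.anchors])

# Example: "front of the classroom" anchored on a whiteboard and a desk.
room_a = Room("classroomA",
              {"wb1": "whiteboard", "desk1": "desk", "chair1": "chair"},
              {("facing", "chair1", "wb1"), ("near", "desk1", "wb1")})
room_b = Room("classroomB",
              {"board7": "whiteboard", "d3": "desk", "c9": "chair"})
front_a = CDSR("frontOfClassroom", ["wb1", "desk1"])
front_b = transfer_cdsr(front_a, room_a, room_b)
print(front_b.anchors)  # -> ['board7', 'd3']
```

Even in this toy form, the key property is visible: the region itself is never stored geometrically; only its anchor points are, so detecting the region in a new room reduces to finding corresponding anchors there.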