Indoor vehicle navigation by means of signs*

Giovanni Adorni, Giulio Destri and Monica Mordonini
Dipartimento di Ingegneria dell'Informazione, Università di Parma, I-43100 Parma, Italy
e-mail: {adorni,destri,monica}@CE.UniPR.IT
Phone: +39-521-905725 Fax: +39-521-905723

Abstract

The task of an autonomous vehicle is to travel from one specific location to another with no external assistance. To perform this task, the "brain" of the vehicle reasons by using data from different types of sensors. In this paper we focus on data from a CCD camera on board an autonomous robot. We discuss a neural network approach for form perception and object classification used to control the navigation of the robot. The system provides a just-in-time recognition of markers and signs placed in a partially unknown environment, and enables the robot to perform the traditional goal-directed navigation in typical office environments. After a discussion of the architecture of the perceptual system and of the underlying computational model, we present some preliminary experimental results.

1 Introduction

An autonomous vehicle is a free-roving collection of functions whose main aim is to reach a designated location in space with no external assistance. In order to move within an environment, the vehicle requires one or more modes of mobility, knowledge of its position, knowledge of the external environment, environment perception capability through sensors, planning and route-finding capabilities, etc. Autonomous vehicles for simple or well-structured environments are commonplace in military applications (e.g., advanced remotely piloted vehicles), in space exploration (e.g., Voyager, Viking), and in industry (e.g., automatic guided vehicles, AGVs). Many autonomous vehicles have been developed in the last twenty years.
Examples range from JASON [14], which was among the first mobile robots to use acoustic and infrared proximity sensors for path planning and obstacle avoidance, to NAVLAB [15], which uses neural networks for road detection and navigation. Several exhibitions and conferences on robotics and autonomous vehicles have been organized by different groups (e.g., IEEE, AAAI) in the last few years, demonstrating the state of the art of autonomous mobile robots. These conferences have highlighted such tasks as goal-directed navigation, feature detection, object recognition, identification, and physical manipulation, together with effective human-robot communication. The 1995 Robot Competition, held in conjunction with IJCAI-1995 in Montreal, Canada [8], proved to be a fruitful forum for the technical exchange of information between robotic research groups. This competition (the fourth organized by the AAAI society and the first held at an IJCAI conference) was the result of the success of previous competitions [6], [10], [13]. One of the competition tasks involved navigating within an office-building environment using directions entered by a human during the operation.

In this paper we discuss some perceptual aspects of an autonomous vehicle: a robot able to navigate in indoor environments. As was the case during the robot competitions, our robot is able to perform the traditional goal-directed navigation task and to detect whether there is an unresolvable problem (e.g., the robot has to enter a room and the door is closed), in which case it asks for help from a human operator via a radio link. The robot navigates in a typical office environment with partitioned and communicating offices.

* This work has been partially supported by the Italian Ministry of Scientific Research under the MURST 40% "Rappresentazione delle conoscenze e meccanismi di ragionamento" contract and by the National Research Council (CNR) under the "Progetto Finalizzato Trasporti II" contract.
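The behavior just described, following a step-by-step navigation plan and deferring to a human operator when a step cannot be completed, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the plan format, the `try_enter` and `ask_operator` callables, and the "skip"/"retry" replies are all assumptions introduced here.

```python
# Hypothetical sketch of goal-directed navigation with human fallback:
# the robot tries to complete each step of a plan and, when a step fails
# (e.g., a door is closed), asks a human operator over a radio link.

def navigate(plan, try_enter, ask_operator):
    """Execute each step of the plan; on failure, defer to the operator.

    plan:         list of room identifiers to enter, in order
    try_enter:    callable(room) -> bool, True if the room was entered
    ask_operator: callable(room) -> str, the operator's reply received
                  over the radio link ("skip" or "retry", by assumption)
    """
    log = []
    for room in plan:
        while not try_enter(room):
            # Unresolvable problem detected: ask the human operator.
            advice = ask_operator(room)
            if advice == "skip":
                log.append((room, "skipped"))
                break
            # Any other reply: retry the same step.
        else:
            # Loop exited normally, i.e., the room was entered.
            log.append((room, "entered"))
    return log
```

The `while`/`else` form records a room as entered only when the attempt loop ends without a `break`, keeping the failure path (operator intervention) separate from the success path.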
The robot has a topological knowledge of the indoor environment, some behavioral motor knowledge (e.g., navigate along the median axis of the free space, or navigate close to the wall on your right), some knowledge of traffic signs and markers and of their typical positions in the environment, and some knowledge of its own size. The robot is asked to follow a series of instructions that tell it which room to enter. The instructions to the robot consist of simple statements such as "exit the room and turn left" or more complex statements such as "go from point 1 to room B1 passing through room B4 applying a right wall following strategy" (see figure 1). Proceeding through the environment, the robot, on the basis of its topological knowledge, looks for the markers (e.g., the name of the office) located on the doors (see figure 2), and for traffic signs, which correspond to commands for the robot itself (e.g., a right arrow inside a circle indicates the command "turn approximately 90° to the right and continue with your navigation strategy"). Traffic signs (if any) are located on the right side of a doorway if the sign contains a command indicating how to enter the office; they are located on the wall if the sign is a navigation command. Traffic signs can change dynamically during a navigation task: for example, a traffic sign can be removed from the environment, it can be modified, its position can change, or a new sign can be added to the environment. Markers and traffic signs

0-7803-3652-6/96/$5.00 © 1996 IEEE