Real-Time Visual Behaviours for Navigating a Mobile Robot

Gordon Cheng and Alexander Zelinsky
Department of Systems Engineering
Research School of Information Sciences and Engineering
Australian National University
Canberra, ACT 0200, Australia
email: Alex.Zelinsky@anu.edu.au

Abstract

In this paper we present an approach that uses vision as the primary source of sensing to guide a mobile robot in an unknown environment. We define a set of primitive visual behaviours for navigating a mobile robot in real time. By combining these behaviours with a purposive map, our mobile robot exhibits goal-seeking behaviour. We present a fast segmentation technique for vision processing; this technique is shared by the different behaviours to produce an overall competent behaviour in our Yamabico robot. Experimental results show that the robot can navigate competently in dynamic indoor environments.

1. Introduction

For robots to work harmoniously in an environment shared with humans, they must have navigational skills. In a human-only environment, humans may tolerate collisions with one another provided they do not cause much pain. This level of tolerance may or may not carry over to robot-human environments: humans expect robots to operate and navigate in their surroundings without collisions or interference.

The use of vision in mobile robotics has become widespread. Many researchers have applied vision to collision avoidance, such as Horswill [1] with the "Polly" robot and Gomi et al. [2] with their "Office Messenger Robot". Vision has also been used in other ways; for example, door opening was reported by Nagatani et al. [3] and car assembly by Kimura et al. [4]. Traditionally, most techniques follow the conventional AI approach of building a model of the environment, together with a thorough understanding of that environment, in order to perform navigational planning [7].
These techniques are constrained by the computational power of the machine being used. For obstacle avoidance, most methods try to determine the position of each obstacle in order to move around it. We agree with Horswill's [1] approach, in which obstacle avoidance is based on searching for features that are physically grounded, e.g. carpet. We refer to this as free-space robot motion.

Humans have a good sense of direction and can navigate without using precise coordinates for localisation [16]; navigation is done relative to landmarks [17]. Our research goal is to develop a robot navigation system that is similar to the human system. Humans navigate using vision very successfully, something we take for granted. Consider a young child that has just acquired the ability to crawl. The child wanders around the floor investigating its new environment; its motor skills are well developed for manoeuvring on the floor, and it learns to avoid furniture and moving objects such as people. The child may wander around the floor until it finds a toy it likes. To give the child a purposeful desire to move in a particular direction, a parent could show the child its favourite toy. The child responds to this stimulus by moving toward the toy, while at the same time avoiding any obstacles in its way. From this example we see that the child has learnt basic navigation skills. Through this development process we can establish that all the essential behaviours were induced [9]: first the child gained motor skills, then with the assistance of vision it became able to navigate. From our point of view, three primary vision-based behaviours are needed: goal seeking, obstacle avoidance and collision avoidance. These, we believe, are the fundamental behaviours a mobile robot requires to achieve a much more natural acceptance in the human world.
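To make the free-space idea concrete, the following is a minimal sketch of Horswill-style free-space detection: classify each pixel as floor or non-floor by similarity to a reference carpet colour, then measure, for each image column, how far the free floor extends upward from the bottom of the image. The reference colour, tolerance, and function names here are illustrative assumptions, not details taken from the paper or from Horswill's implementation.

```python
# Assumed reference carpet colour and per-channel tolerance (illustrative values).
FLOOR_RGB = (110, 90, 70)
TOLERANCE = 40

def is_floor(pixel, ref=FLOOR_RGB, tol=TOLERANCE):
    """Treat a pixel as free space if every colour channel is near the carpet colour."""
    return all(abs(p - r) <= tol for p, r in zip(pixel, ref))

def free_space_profile(image):
    """Per-column free-space depth, scanning from the bottom row upward.

    `image` is a list of rows (top to bottom), each row a list of RGB tuples.
    The result gives, for each column, the number of contiguous floor pixels
    above the bottom edge; steering toward the column with the largest value
    yields a simple obstacle-avoiding motion.
    """
    height, width = len(image), len(image[0])
    profile = []
    for x in range(width):
        depth = 0
        for y in range(height - 1, -1, -1):  # bottom-up scan
            if not is_floor(image[y][x]):
                break
            depth += 1
        profile.append(depth)
    return profile

if __name__ == "__main__":
    floor, box = (110, 90, 70), (200, 20, 20)
    img = [
        [box,   box,   floor],
        [floor, box,   floor],
        [floor, floor, floor],
    ]
    print(free_space_profile(img))  # prints [2, 1, 3]: the rightmost column is fully clear
```

A real system would of course work on camera frames and would need the fast segmentation technique the paper describes, but the bottom-up column scan captures the core of navigating by grounded free-space features rather than by modelling each obstacle's position.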