Appearance-based robot navigation

Luca Regini, Guido Tascini, Primo Zingaretti
Istituto di Informatica, University of Ancona
{regini, tascini, zinga}@inform.unian.it

Abstract. The paper presents a robot navigation vision system based on an appearance-based image-matching process and a stochastic position evaluator. Its main peculiarities are the use of colour sets, weighted walkthroughs and POMDPs (partially observable Markov decision processes). The choice of the representation of image appearances is fundamental. We use image-domain features, as opposed to interpreted characteristics of the scene, and we adopt feature vectors that include both the chromatic attributes of colour sets and their mutual spatial relationships. The colour sets are obtained by auto-thresholding the colour histograms, and weighted walkthroughs are used to capture the spatial relationships between colour sets. The information provided by appearance-based image matching is used to update a POMDP describing the belief distribution over the actual position of the robot in the environment.

1 Introduction

This paper describes an appearance-based robot navigation system that does not require modifications to the environment. Autonomous robot navigation has to face three major problems: self-localisation, obstacle avoidance and path planning. In this paper we are mainly concerned with the self-localisation problem, or positioning, which consists in determining the position and orientation (often referred to as the pose) of a mobile robot in its environment. There are numerous approaches to self-localisation, relying on different sensor modalities; here we consider only self-localisation by means of vision. Because visual sensing provides a tremendous amount of information about the robot's environment, extracting visual features suitable for positioning is not an easy task.
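As a concrete illustration of the auto-thresholding step mentioned in the abstract, the sketch below splits a one-dimensional colour histogram at the bin that maximises the between-class variance (an Otsu-style criterion). The specific criterion, the function name and the example histogram are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def otsu_threshold(hist):
    """Return the bin index that maximises between-class variance --
    an Otsu-style stand-in for auto-thresholding a colour histogram
    into two colour sets."""
    hist = hist.astype(float)
    total = hist.sum()
    bins = np.arange(len(hist))
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0 = hist[:t].sum()          # mass of the lower colour set
        w1 = total - w0              # mass of the upper colour set
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (bins[:t] * hist[:t]).sum() / w0   # mean of lower set
        mu1 = (bins[t:] * hist[t:]).sum() / w1   # mean of upper set
        var = w0 * w1 * (mu0 - mu1) ** 2         # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Example: a bimodal hue histogram with modes around bins 2 and 7;
# the threshold should fall in the valley between them.
hist = np.array([5, 40, 60, 10, 2, 8, 55, 70, 12, 3])
t = otsu_threshold(hist)
```

Applied recursively to the sub-histograms, such a split yields a small number of dominant colour sets per image, which is the kind of compact appearance descriptor the abstract refers to.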
A preliminary task is therefore the selection of an appropriate set of visual features (often referred to as landmarks) to be used in the navigational trials [6, 20]. Another difficulty of the self-localisation process is matching the observations provided by the robot's sensors against the landmarks themselves [7, 19]. This problem is considerably simplified if a prediction of the robot's pose is given a priori; without such a prediction, absolute matching is quite difficult because the observations are not error-free. Only recently have appearance-based, or view-based, approaches been proposed [1, 5, 8, 14]. An appearance-based approach provides qualitative measurements of the position of the robot, which allow the progress of the overall mission to be monitored: once certain relevant positions are reached, other navigation behaviours are launched. This strategy is further justified by the fact that positioning accuracy depends on the distance and angle between the robot and the landmark.
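The POMDP position update mentioned in the abstract amounts, at each step, to a Bayes filter over a discrete set of places: predict the belief forward with the motion model, then reweight it by the likelihood produced by the appearance-based image matching. The sketch below shows one such step; the three-place topology, the transition matrix and the match scores are illustrative assumptions, not values from the paper.

```python
import numpy as np

def belief_update(belief, T, likelihood):
    """One POMDP-style Bayes step over discrete places:
    predict with the motion model T (T[i, j] = P(next=j | now=i)),
    then weight by the observation likelihood from image matching
    and renormalise to a probability distribution."""
    predicted = T.T @ belief              # prior after the motion step
    posterior = predicted * likelihood    # evidence from appearance matching
    return posterior / posterior.sum()

# Illustrative 3-place loop: the robot tends to advance one place per step.
T = np.array([[0.2, 0.8, 0.0],
              [0.0, 0.2, 0.8],
              [0.8, 0.0, 0.2]])
belief = np.array([1.0, 0.0, 0.0])        # start: certainly at place 0
match_score = np.array([0.1, 0.7, 0.2])   # appearance match favours place 1
belief = belief_update(belief, T, match_score)
```

Because the update yields a full distribution rather than a single pose estimate, the robot can represent ambiguity between visually similar places and resolve it over successive observations, which is what makes the qualitative, appearance-based measurements above usable for monitoring mission progress.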