Adaptive Human aware Navigation based on Motion Pattern Analysis

Søren Tranberg Hansen, Mikael Svenstrup, Hans Jørgen Andersen and Thomas Bak

Abstract— Respecting people's social spaces is an important prerequisite for acceptable and natural robot navigation in human environments. In this paper, we describe an adaptive system for mobile robot navigation. The system is based on run-time motion pattern analysis of interacting human beings. In the proposed method, the robot estimates whether the person seeks to interact with it by analyzing the person's current behavior pattern and comparing it to experience stored in a database. Using a potential field centered around the person, the robot then positions itself at the most appropriate place relative to the person and the interaction status. The system is validated through qualitative tests in a real-world setting. The results demonstrate that the system is able to learn where to position itself based on past interaction experiences, and to adapt to different behaviors over time. The method can be used as a robust and adaptive navigation algorithm, enabling the robot to navigate among people in an open-ended environment while respecting their social spaces.

I. INTRODUCTION

The vision of robots participating in our day-to-day lives is a central focus of the research field of Human-Robot Interaction (HRI) [5]. The vision is supported by progress in computing, visual recognition, and wireless connectivity, which opens the door to a new generation of mobile robotic devices that see, hear, touch, manipulate, and interact with humans [8]. Consider a robot supporting care assistants. At one time of the day, the support may include handing out food. In this case, the robot will interact closely with the care assistants and the persons being assisted. After a while, the persons around the robot will no longer need its assistance, and hence its behavior should be adjusted to this new situation.
For a robot to behave naturally in such situations, it will be necessary for it to learn from experiences and to adapt its behavior to the person's desire to interact.

To incorporate the ability to learn from experiences, researchers [13], [16] have investigated Case Based Reasoning (CBR). CBR allows recalling and interpreting past experiences, as well as generating new cases to represent knowledge from new experiences. To our knowledge, CBR has not yet been used in a human-robot interaction context, but it has been proven successful in solving spatial-temporal problems in robotics in [14], [16] and [12]. CBR is characterized by its adaptiveness, which means that CBR is constantly updating its model. This means that the method is well suited for implementing an adaptive behavior on a human interactive robot, as described in the case above. Another advantage of CBR is the conceptual simplicity of the method and its implementation, as well as the relatively small need for parameter tuning.

A person's willingness to engage in interaction is analyzed on the basis of the person's pose and position. We define a human's pose as the position and orientation of the body, and infer pose from 2D laser range measurements as explained in [19]. The advantage of the approach is that it is robust, uses only one sensor, and does not require any determination of the person's facial expressions or other gestures.

This work was not supported by any organization. S. Tranberg is with the Centre for Robot Technology, Danish Technological Institute, Odense, Denmark soren.tranberg@teknologisk.dk. M. Svenstrup and T. Bak are with the Department of Electronic Systems, Automation & Control, Aalborg University, 9220 Aalborg, Denmark {ms,tba}@es.aau.dk. H.J. Andersen is with the Department for Media Technology, Aalborg University, 9220 Aalborg, Denmark hja@cvmt.aau.dk.
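As a rough illustration of the CBR cycle described above (retrieve the most similar past case, reuse its outcome, retain new experiences), the following sketch shows one possible minimal case base. The feature set, class names, and similarity metric here are our own illustrative assumptions, not the representation used in the paper:

```python
import math
from dataclasses import dataclass


@dataclass
class Case:
    """One stored experience: an observed person state and the interaction outcome."""
    distance: float         # robot-to-person distance (m)
    rel_orientation: float  # person's body orientation relative to the robot (rad)
    speed: float            # person's walking speed (m/s)
    interested: bool        # outcome: did the person end up interacting?


class CaseBase:
    def __init__(self):
        self.cases = []

    def _similarity(self, a, b):
        # Inverse Euclidean distance over the features (unweighted for simplicity).
        d = math.sqrt((a.distance - b.distance) ** 2
                      + (a.rel_orientation - b.rel_orientation) ** 2
                      + (a.speed - b.speed) ** 2)
        return 1.0 / (1.0 + d)

    def estimate_interest(self, query):
        """Retrieve the most similar past case and reuse its outcome."""
        if not self.cases:
            return None  # no experience yet
        best = max(self.cases, key=lambda c: self._similarity(c, query))
        return best.interested

    def retain(self, case):
        """Store a new experience, so the model is updated at run-time."""
        self.cases.append(case)
```

A robot using such a case base would call `estimate_interest` with the person's current state and `retain` the actual outcome afterwards, which is what gives the behavior its adaptiveness over time.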
When the person's willingness to interact has been determined, it is used as a basis for human-aware navigation respecting the person's social spaces, as discussed in [21], [6]. Several authors [2], [3], [6], [11] have investigated the willingness of people to engage in interaction with robots that follow different spatial behavior schemes. In this method, navigation is done using potential fields, which have been shown to be useful for deriving robot motion [18], [17], [7]. The implemented adaptive navigation behavior is described in further detail in [1], [19].

The adaptive CBR and navigation methods have been implemented and tested in a real-world human-robot interaction test setup.

II. MATERIALS AND METHODS

The robot behavior described in this paper is inspired by the spatial relations between humans (proxemics) as outlined in [10]. Hall divides the zone around a person into four categories according to the distance to the person:

• the public zone > 3.6m
• the social zone > 1.2m
• the personal zone > 0.45m
• the intimate zone < 0.45m

Social spaces between robots and humans were studied in [20], supporting the use of Hall's proxemic distances. In order for the robot to position itself in the most appropriate place relative to the person, it should be able to estimate the outcome of the human-robot interaction at run-time, i.e., if the person shows no sign of willingness to interact, the robot should not violate his or her personal space but should keep to the social or public zone. On the other hand, if a person is clearly willing to interact with the robot, the robot should try to enter the personal zone. Therefore, natural human-robot interaction relies on an automatic detection of the person's willingness to interact.
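Hall's zone boundaries and the positioning rule above can be expressed directly in code. The classifier below follows the listed thresholds; the target distances are hypothetical values we chose for illustration, not figures from the paper:

```python
def proxemic_zone(distance_m: float) -> str:
    """Classify a robot-to-person distance into Hall's proxemic zones [10]."""
    if distance_m < 0.45:
        return "intimate"
    elif distance_m < 1.2:
        return "personal"
    elif distance_m < 3.6:
        return "social"
    else:
        return "public"


def target_distance(interested: bool) -> float:
    """Pick a goal distance from the estimated willingness to interact.

    The values are illustrative: enter the personal zone when the person
    is judged willing to interact, otherwise keep to the social zone.
    """
    return 1.0 if interested else 2.0
```

In the paper's scheme, the output of the interest estimator would drive a choice like `target_distance`, and the potential field centered on the person would then pull the robot toward that distance.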