Using CBR in the Exploration of Unknown Environments with an Autonomous Agent

Luís Macedo 1,2, Amílcar Cardoso 2

1 Department of Informatics and Systems Engineering, Engineering Institute, Coimbra Polytechnic Institute, 3030-199 Coimbra, Portugal
lmacedo@isec.pt
http://www2.isec.pt/~lmacedo
2 Centre for Informatics and Systems of the University of Coimbra, Department of Informatics, Polo II, 3030 Coimbra, Portugal
{lmacedo, amilcar}@dei.uc.pt

Abstract. Exploration involves selecting and executing sequences of actions so that knowledge of the environment is acquired. In this paper we address the problem of an autonomous agent exploring unknown, dynamic environments populated with both static and non-static entities (objects and agents). The agent has a case-base of entities and another of plans. The case-base of plans is used for case-based generation of goals and plans for visiting the unknown entities or regions of the environment. The case-base of entities is used for case-based generation of expectations about information missing from the agent’s perception. Both case-bases are continuously updated: the case-base of entities is updated as new entities are perceived or visited, while the case-base of plans is updated as new sequences of actions for visiting entities/regions are executed successfully. We present and discuss the results of an experiment conducted in a simulated environment to evaluate the influence of the size of the case-base of entities on exploration performance.

1 Introduction

Exploration may be defined as the process of selecting and executing actions so that maximal knowledge of the environment is acquired at minimum cost (e.g., minimum time and/or power) [38]. The result is the acquisition of models of the physical environment. Applications include planetary exploration [4, 16], rescue, mowing [18], and cleaning [12, 36].
Strategies that minimize cost and maximize knowledge acquisition have been pursued (e.g., [2, 3, 10, 22, 25, 35, 38-41]). These strategies fall into two main categories: undirected and directed exploration [38]. Strategies in the former group (e.g., random walk exploration, Boltzmann distributed exploration) use no exploration-specific knowledge and ensure exploration by merging randomness into action selection. On the other hand, strategies in the latter group rely heavily on exploration
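To make the undirected case concrete, Boltzmann-distributed exploration picks each action with probability proportional to the exponential of its estimated value, so randomness is merged into action selection while higher-valued actions remain more likely. The sketch below is illustrative only and not from the paper; the function name and the dictionary of action values are assumptions.

```python
import math
import random

def boltzmann_select(action_values, temperature=1.0):
    """Pick an action with probability proportional to exp(Q / T).

    High temperatures give near-uniform (more exploratory) choices;
    low temperatures concentrate probability on the best action.
    `action_values` maps each action to its estimated value.
    """
    # Subtract the maximum value before exponentiating, for numerical stability.
    max_q = max(action_values.values())
    weights = {a: math.exp((q - max_q) / temperature)
               for a, q in action_values.items()}
    # Sample an action in proportion to its weight (roulette-wheel selection).
    total = sum(weights.values())
    r = random.random() * total
    cumulative = 0.0
    for action, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return action
    return action  # fallback for floating-point rounding at the boundary
```

At a very low temperature the rule approaches greedy selection; at a very high temperature it approaches a random walk over the action set, which is why both strategies belong to the same undirected family.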