To appear in: Evolutionary Computation: Theory and Applications, X. Yao (Ed.), World Scientific Publ. Co., Singapore, in press.

EVOLUTIONARY COMPUTATION IN BEHAVIOR ENGINEERING

TR/IRIDIA/1996-1
IRIDIA, Université Libre de Bruxelles

Marco Colombetti
Progetto di intelligenza artificiale e robotica
Dipartimento di elettronica e informazione
Politecnico di Milano, Milano, Italy
colombet@elet.polimi.it

Marco Dorigo
IRIDIA, Université Libre de Bruxelles
Bruxelles, Belgium
mdorigo@ulb.ac.be

Abstract. In the last few years we have used ALECSYS, a parallel learning classifier system based on the genetic algorithm, to develop behavioral modules for mobile robots, both simulated and real. In this paper we briefly report on our experience, and then reflect on various concepts stemming from the application of evolutionary computation to agent building. We propose a definition of agent, analyze the relationships holding between an agent and its external environment, and discuss some important similarities and differences between natural and artificial systems; in particular, we compare the concept of fitness of an organism with that of quality of an artifact. We then concentrate on adaptation, regarded as a basic process for the development of both biological organisms and artificial agents. We carry our analysis further by trying to understand where and how Behavior Engineering (i.e., the discipline concerned with the development of artificial agents) might profit from the use of evolutionary strategies. We argue that an evolutionary approach might allow us to search the space of nonrational designs, thus opening a whole new world of possibilities for the implementation of artificial systems.

1. Introduction

In the late fifties, computer scientists started to work on the project – known as Artificial Intelligence (AI) – of building intelligent computational systems.
The basic assumption underlying most work in AI is that intelligence, whether natural or artificial, is intrinsically a computational phenomenon, and can therefore be studied in disembodied systems, that is, in systems that have a “mind” but no “body” (with the exception of their computing “brain”). Forty years later, it is widely believed that – in spite of impressive local successes – AI will find it very difficult to reach its ultimate goal. Feeling unable to implement disembodied