Cooperative Learning Using Advice Exchange

Luís Nunes 1,2, Eugénio Oliveira 1

1 Laboratório de Inteligência Artificial e Ciência de Computadores (LIACC) – Núcleo de Inteligência Artificial Distribuída e Robótica (NIAD&R), Faculdade de Engenharia da Universidade do Porto (FEUP), Av. Dr. Roberto Frias, 4200-465 Porto, Portugal.
2 Instituto Superior de Ciências do Trabalho e da Empresa (ISCTE), Av. Forças Armadas, Edifício ISCTE, 1649-026 Lisboa, Portugal
eco@fe.up.pt, Luis.Nunes@iscte.pt

Abstract. One of the main questions concerning learning in a Multi-Agent System's environment is: "(How) can agents benefit from mutual interaction during the learning process?" This paper describes a technique that enables a heterogeneous group of Learning Agents (LAs) to improve its learning performance by exchanging advice. This technique uses supervised learning (backpropagation), where the desired response is not given by the environment but is based on advice given by peers with a better performance score. The LAs face problems with a similar structure, in environments where only reinforcement information is available. Each LA applies a different, well-known, learning technique. The problem used for the evaluation of the LAs' performance is a simplified traffic-control simulation. In this paper the reader can find a summarized description of the traffic simulation and of the Learning Agents (focused on the advice-exchange mechanism), a discussion of the first results obtained, and suggested techniques to overcome the problems that have been observed.

1 Introduction

The objective of this work is to contribute a credible answer to the following question: "(How) can agents benefit from mutual interaction during the learning process, in order to achieve better individual and overall system performances?"
The objects of study are the interactions between the Learning Agents (hereafter referred to as agents, for the sake of simplicity) and the effects these interactions have on individual and global learning processes. In Multi-Agent Systems (MAS), interactions that affect the learning process can take several forms. These forms of interaction range from the indirect effects of other agents' actions (whether cooperative or competitive), to direct communication of complex knowledge structures, as well as cooperative negotiation of a search policy or solution. The most promising way in which cooperative learning agents can benefit from interaction seems to be by exchanging (or sharing) information regarding the learning
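To make the advice-exchange idea mentioned in the abstract concrete, the sketch below shows one possible reading of it: agents track a performance score, and lower-scoring agents treat the action recommended by the best-scoring peer as a supervised target for a gradient step. The agent class, the softmax policy, and the names `Advisee` and `exchange_advice` are illustrative assumptions, not the paper's actual implementation (which uses backpropagation over different, heterogeneous learners).

```python
import numpy as np

class Advisee:
    """Minimal learning agent with a one-row-per-state softmax policy.

    This is a hypothetical stand-in for the paper's heterogeneous LAs."""

    def __init__(self, n_states, n_actions, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.01, size=(n_states, n_actions))
        self.lr = lr
        self.score = 0.0  # running performance score, updated elsewhere

    def policy(self, state):
        # Softmax over the logits for this state (numerically stabilized).
        logits = self.W[state]
        e = np.exp(logits - logits.max())
        return e / e.sum()

    def learn_from_advice(self, state, advised_action):
        # Supervised step: treat the advisor's action as the desired
        # response and follow the cross-entropy gradient toward it.
        p = self.policy(state)
        target = np.zeros_like(p)
        target[advised_action] = 1.0
        self.W[state] += self.lr * (target - p)

def exchange_advice(agents, state):
    """Lower-scoring agents ask the best-scoring peer for advice on `state`."""
    best = max(agents, key=lambda a: a.score)
    advised = int(np.argmax(best.policy(state)))
    for a in agents:
        if a is not best:
            a.learn_from_advice(state, advised)
    return advised
```

Under this reading, advice is exchanged at the level of state-action recommendations, so agents with entirely different internal learning algorithms can still advise one another.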