1 Evolving Heterogeneous Neural Agents by Local Selection

Filippo Menczer, W. Nick Street, and Melania Degeratu

Evolutionary algorithms have been applied to the synthesis of neural architectures, but they normally lead to uniform populations. Homogeneous solutions, however, are inadequate for certain applications and models. For these cases, local selection may produce the desired heterogeneity in the evolving neural networks. This chapter describes algorithms based on local selection, and discusses the main differences distinguishing them from standard evolutionary algorithms. The use of local selection to evolve neural networks is illustrated by surveying previous work in three domains (simulations of adaptive behavior, realistic ecological models, and browsing information agents), as well as by reporting on new results in feature selection for classification.

1.1 Introduction

The synthesis of neural architectures has been among the earliest applications of evolutionary computation [60, 1, 50, 13]. Evolutionary algorithms have been used to adjust the weights of neural networks without supervision [51, 46], to design neural architectures [49, 28, 20, 48], and to find learning rules [5].

Evolutionary algorithms, however, typically lead to uniform populations. This was appropriate in the above applications, since some optimal solution was assumed to exist. However, homogeneous solutions — neural or otherwise — are inadequate for certain applications and models, such as those requiring cover [14, 36] or Pareto [58, 24] optimization. Typical examples stem from expensive or multi-criteria fitness functions; in these cases, an evolutionary algorithm can be used to quickly find a set of alternative solutions using a simplified fitness function. Some other method is then charged with comparing these solutions.

Selection schemes have emerged as the aspect of evolutionary computation that most directly affects heterogeneity in evolutionary algorithms.
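For multi-criteria fitness, the paragraph above notes that the evolutionary algorithm may first collect a set of alternative solutions and leave their comparison to some other method. One standard such method is Pareto dominance filtering; the following is a minimal sketch (the function name and the convention that all objectives are maximized are illustrative assumptions, not prescribed by the chapter):

```python
def pareto_front(solutions):
    """Return the nondominated subset of a list of objective tuples.

    Each tuple holds objective values to be maximized. A solution is
    dominated if some other solution is at least as good on every
    objective and strictly better on at least one.
    """
    front = []
    for i, a in enumerate(solutions):
        dominated = any(
            all(bk >= ak for ak, bk in zip(a, b))
            and any(bk > ak for ak, bk in zip(a, b))
            for j, b in enumerate(solutions)
            if j != i
        )
        if not dominated:
            front.append(a)
    return front

# Example: (1, 3) and (3, 1) trade off against each other,
# while (0, 0) is dominated by both and is filtered out.
candidates = [(1, 3), (3, 1), (0, 0)]
print(pareto_front(candidates))  # [(1, 3), (3, 1)]
```

The pairwise comparison is quadratic in the population size, which is acceptable for the modest solution sets a single evolutionary run produces.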
In fact, selective pressure determines how fast the population converges to a uniform solution. The exploration-exploitation dilemma is commonly invoked to explain the delicate tension between an algorithm's efficiency and its tendency to prematurely converge to a suboptimal solution.

Parallel evolutionary algorithms often impose geographic constraints on evolutionary search to assist in the formation of diverse subpopulations [19, 8]. The motivation is in avoiding the communication overhead imposed by stan-

MIT Press Math6X9/1999/09/30:19:43 Page 1