48 IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE | MAY 2006

an adaptive archiving algorithm, suitable for use with any Pareto optimization algorithm, which has several useful properties: it maintains an archive of bounded size, encourages an even distribution of points across the Pareto front, is computationally efficient, and admits a proof of a form of convergence. The method proposed here maintains evenness, efficiency, and cardinality, and provably converges under certain conditions, though not all. Finally, the notions underlying our convergence proofs support a new way to rigorously define what is meant by a "good spread of points" across a Pareto front, in the context of grid-based archiving schemes. This leads to proofs and conjectures applicable to archive sizing and grid sizing in any Pareto optimization algorithm maintaining a grid-based archive.

List of Neural Networks Pioneer Awardees—
2006, Donald Specht and Erkki Oja
2005, Carver Mead
2004, Andrew Barto
2003, Kunihiko Fukushima
2002, Terrence Sejnowski
2001, David E. Rumelhart and James L. McClelland
2000, Leon Chua
1999, Robert Hecht-Nielsen
1998, Geoffrey E. Hinton
1997, John J. Hopfield
1995, Michael A. Arbib, Nils J. Nilsson and Paul J. Werbos
1994, Christoph von der Malsburg
1993, Thomas M. Cover
1992, Shun-Ichi Amari and Walter Freeman
1991, Bernard Widrow, Stephen Grossberg and Teuvo Kohonen

List of Fuzzy Systems Pioneer Awardees—
2006, Janusz Kacprzyk
2005, Enric Trillas
2004, Ronald Yager
2003, Ebrahim Mamdani
2002, Didier Dubois and Henri Prade
2001, James Bezdek
2000, Lotfi Zadeh and Michio Sugeno

List of Evolutionary Computation Pioneer Awardees—
2005, Kenneth De Jong
2004, Richard Friedberg
2003, John H. Holland
2002, Ingo Rechenberg and Hans-Paul Schwefel
2001, Michael Conrad
2000, George E.P. Box
1999, Alex S. Fraser
1998, Lawrence J. Fogel

List of Meritorious Service Awardees—
2006, Evangelia Micheli-Tzanakou
2005, Piero P. Bonissone
2004, Enrique Ruspini

Piero P.
Bonissone
General Electric Global Research, USA

IEEE Fellows—Class of 2006

Andrew Barto
Andrew Barto is Professor of Computer Science, University of Massachusetts, Amherst. He received his B.S. with distinction in mathematics from the University of Michigan in 1970, and his Ph.D. in Computer Science in 1975, also from the University of Michigan. He joined the Computer Science Department of the University of Massachusetts Amherst in 1977 as a Postdoctoral Research Associate, became an Associate Professor in 1982, and has been a Full Professor since 1991. He is Co-Director of the Autonomous Learning Laboratory and a core faculty member of the Neuroscience and Behavior Program of the University of Massachusetts. His research centers on learning in natural and artificial systems, and he has studied machine learning algorithms since 1977, contributing to the development of the computational theory and practice of reinforcement learning. His current work focuses on models of motor learning and reinforcement learning methods for real-time planning and control, with specific interest in autonomous mental development through intrinsically motivated reinforcement learning. He currently serves as an associate editor of Neural Computation, and as a member of the editorial boards of the Journal of Machine Learning Research, Adaptive Behavior, and Theoretical Computer Science-C: Natural Computing. Professor Barto is a Fellow of the American Association for the Advancement of Science, a Fellow and Senior Member of the IEEE, and a member of the American Association for Artificial Intelligence and the Society for Neuroscience. He received the 2004 IEEE Neural Networks Society Pioneer Award for contributions to the field of reinforcement learning. He has published over one hundred papers or chapters in journals, books, and conference and workshop proceedings.
He is co-author, with Richard Sutton, of the book "Reinforcement Learning: An Introduction" (MIT Press, 1998), and