International Journal of Intelligent Computing and Cybernetics
© Emerald Group Publishing Limited

DISTRIBUTED ADAPTIVE SWARM FOR OBSTACLE AVOIDANCE

SURANGA HETTIARACHCHI
Computer Science Department, Indiana University Southeast
4201 Grant Line Road, New Albany, Indiana 47150, USA
suhettia@iu.edu
http://www.ius.edu/suhettia

WILLIAM M. SPEARS
Swarmotics, LLC
Laramie, WY 82070, USA
wspears@swarmotics.com
http://www.swarmotics.com

Received (06 February 2009)
Revised (12 August 2009)
Accepted (18 August 2009)

Abstract

Purpose - This paper demonstrates a novel use of a generalized Lennard-Jones (LJ) force law in Physicomimetics, combined with offline evolutionary learning, to control swarms of robots moving through obstacle fields towards a goal. We then extend the paradigm to demonstrate the utility of a real-time online adaptive approach named DAEDALUS.

Design/Methodology/Approach - To achieve the best performance, we optimize the parameters of the force law used in our Physicomimetics approach with an evolutionary algorithm (offline learning). We use a weighted fitness function consisting of three components: a penalty for collisions, a penalty for lack of swarm cohesion, and a penalty for robots not reaching the goal. We then give each robot of the swarm a slightly mutated copy of the optimized force-law rule set found with offline learning and introduce the robots to a more difficult environment, where we use our online learning framework (DAEDALUS) for swarm adaptation.

Findings - The novel use of the generalized Lennard-Jones (LJ) force law combined with an evolutionary algorithm surpasses the prior state of the art in the control of swarms of robots moving through obstacle fields. In addition, our DAEDALUS framework allows swarms of robots not only to learn and share behavioral rules in changing environments (in real time), but also to learn the proper amount of behavioral exploration that is appropriate.
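The two computational ingredients named in the abstract can be sketched briefly. The generalized LJ force below follows the standard Lennard-Jones form with extra attraction/repulsion weights `c` and `d`; these parameter names, the sign convention (positive = repulsion), and the penalty weights in the fitness function are illustrative assumptions, not values taken from the paper.

```python
def lj_force(r, epsilon=1.0, sigma=1.0, c=1.0, d=1.0):
    """Generalized Lennard-Jones force magnitude between two robots at range r.

    Positive values repel, negative values attract. With c = d = 1 this
    reduces to the classic LJ force, whose equilibrium lies at r = 2^(1/6)*sigma.
    epsilon, sigma, c, d are the parameters an evolutionary algorithm would tune.
    """
    return 24.0 * epsilon * (2.0 * d * sigma**12 / r**13 - c * sigma**6 / r**7)


def swarm_fitness(collisions, cohesion_violations, robots_short_of_goal,
                  w_collide=1.0, w_cohesion=1.0, w_goal=1.0):
    """Weighted penalty fitness (lower is better): collisions, loss of
    cohesion, and robots failing to reach the goal. Weights are hypothetical."""
    return (w_collide * collisions
            + w_cohesion * cohesion_violations
            + w_goal * robots_short_of_goal)
```

With the default parameters the force is repulsive inside the equilibrium distance and attractive beyond it, which is what lets a swarm hold formation while flowing around obstacles.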
Research limitations/implications - Significant issues arise with respect to "wall following" methods and the "local minimum trap" problem. We have observed local minimum traps in our work, but we did not address this issue in detail. We intend to explore other approaches to develop more robust adaptive algorithms for online learning, and we believe that we can accelerate the learning of the proper amount of behavioral exploration.

Practical implications - In order to provide meaningful comparisons, we provide a more complete set of metrics than prior papers in this area. We examine the number of collisions between robots and
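The DAEDALUS idea summarized in the abstract, in which each robot carries a slightly mutated copy of the evolved rule set and robots share rules in real time, can be sketched as follows. The Gaussian mutation operator, the `penalty` attribute, and the pairwise "worse robot adopts the better robot's rules" exchange are plausible assumptions for illustration only; the paper's actual sharing protocol is not given in this excerpt.

```python
import random


def mutate(params, scale=0.05, rng=None):
    """Return a slightly perturbed copy of a force-law parameter set
    (hypothetical Gaussian mutation, preserving behavioral exploration)."""
    rng = rng or random.Random()
    return {k: v + rng.gauss(0.0, scale) for k, v in params.items()}


class Robot:
    def __init__(self, params, penalty=0.0):
        self.params = dict(params)   # this robot's personal copy of the rule set
        self.penalty = penalty       # accumulated penalty-based fitness (lower is better)


def exchange_rules(a, b, rng=None):
    """Pairwise rule sharing when two robots meet: the worse-performing robot
    adopts a mutated copy of the better robot's parameters."""
    better, worse = (a, b) if a.penalty <= b.penalty else (b, a)
    worse.params = mutate(better.params, rng=rng)
```

Mutating the copy, rather than transferring it verbatim, keeps a spread of behaviors in the swarm, which is how the framework can tune the amount of exploration online.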