Evolutionary Multi-Objective Optimization of Particle Swarm Optimizers

Christian Veenhuis (1), Mario Köppen (2), Raul Vicente-Garcia (1)
(1) Fraunhofer IPK, Pascalstr. 8-9, 10587 Berlin, Germany, {christian.veenhuis,raul.vicente}@ipk.fhg.de
(2) Kyushu Institute of Technology, 680-4, Kawazu, Iizuka, Fukuoka 820-8502, Japan, mkoeppen@pluto.ai.kyutech.ac.jp

Abstract— One issue in applying Particle Swarm Optimization (PSO) is finding a well-working set of parameters. The standard settings often work sufficiently but do not exhaust the possibilities of PSO. Furthermore, for complex evaluation functions a trade-off between accuracy and computation time is of interest. This paper presents results for using an EMO approach to optimize PSO parameters as well as to find a set of trade-offs between mean fitness and swarm size. It is applied to four typical benchmark functions known from the literature. The results indicate that using an EMO approach simplifies the decision process of choosing a parameter set for a given problem.

I. INTRODUCTION

In recent years a swarm-based optimization methodology called Particle Swarm Optimization (PSO) has emerged. PSO is very explorative and is primarily used in function optimization. To apply PSO, one has to specify several parameters. Although there are standard settings which work sufficiently well for most applications, it is of interest to optimize these parameters to get better and possibly faster results. Additionally, for more complex applications, it may be necessary to find a trade-off between the quality of the solution and the swarm size: the greater the swarm size, the better the solutions, but also the higher the needed computation time.

Several researchers recommend optimizing these PSO parameters by using PSO itself [12] [13]. In this regard it is usual to characterize such a method as 'meta'. Thus, a PSO algorithm optimizing or learning another PSO algorithm for a given problem is called Meta-PSO.
Although the idea of a Meta-PSO exists, there is not much work in this area. This might relate to the fact that a Meta-PSO needs a huge amount of computation time, which makes experimentation and parameterization difficult. Nevertheless, in [11] Meissner et al. present an Optimized-PSO capable of optimizing PSO parameters. They applied it to neural network training and optimized five parameters of their PSO implementation: the start and end values for decreasing the inertia weight, the weights of a particle's own and best neighbor's experience, and V_max. Matekovits et al. present in [10] a modified PSO they call Meta-PSO. Their Meta-PSO is a PSO with multiple swarms running in parallel, i.e., a swarm of sub-swarms. However, each sub-swarm has the same configuration, because they intended to improve the exploration abilities and not to optimize PSO itself.

The concept proposed in this paper uses one of the Evolutionary Multi-objective Optimization (EMO) algorithms (namely FPD-GA) to optimize the PSO configuration. The optimized objectives are the mean fitness (F_mean), the standard deviation of the fitness (F_SD) and the swarm size (F_size), which together form the fitness vector of an individual. An individual encodes all important parameters of a PSO as well as its kind of neighborhood topology with its corresponding parameter. During evaluation the encoded parameters are normalized and mapped to intervals for the corresponding PSO parameters.

This paper is organized as follows. Section II introduces the Particle Swarm Optimization algorithm. After a brief introduction to EMO, the FPD-GA algorithm is explained in Section III. In Section IV the conducted experiments with some results are presented. Finally, in Section V some conclusions are drawn.

II. PARTICLE SWARM OPTIMIZATION

Particle Swarm Optimization (PSO), as introduced by Kennedy and Eberhart [5] [7], is an optimization algorithm based on swarm theory. The main idea is to model the flocking of birds flying around a peak in a landscape.
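The decoding step described in the introduction (normalizing an individual's encoded parameters and mapping them to intervals for the corresponding PSO parameters) can be sketched as follows. This is a hypothetical illustration, assuming genes in [0, 1]; the parameter names and interval bounds are assumptions and not the ones used in the paper.

```python
# Hypothetical sketch: decode an EMO individual into a PSO configuration.
# Assumes genes normalized to [0, 1]; all interval bounds below are
# illustrative assumptions, not the paper's actual ranges.

def decode_individual(genes):
    """Map normalized genes to PSO parameters via interval scaling."""
    def scale(g, lo, hi):
        # Linear map from [0, 1] to [lo, hi].
        return lo + g * (hi - lo)

    return {
        "swarm_size": int(round(scale(genes[0], 5, 100))),
        "inertia_w":  scale(genes[1], 0.0, 1.2),
        "c1":         scale(genes[2], 0.0, 4.0),  # own-experience weight
        "c2":         scale(genes[3], 0.0, 4.0),  # neighborhood weight
        "v_max":      scale(genes[4], 0.1, 10.0),
    }

config = decode_individual([0.5, 0.5, 0.5, 0.5, 0.5])
```

The swarm size F_size appears both here (as a decoded parameter) and as an objective, which is what makes the accuracy/swarm-size trade-off directly accessible to the EMO algorithm.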
In PSO the birds are substituted by particles, and the peak in the landscape is the peak of a fitness function. The particles fly through the search space, forming flocks around the peaks of the fitness function.

Let N_dim be the dimension of the problem (i.e., the dimension of the search space R^N_dim), N_part the number of particles and P = {P_1, ..., P_{N_part}} the set of particles. Each particle P_i = (x_i, v_i, l_i) has a current position in the search space (x_i ∈ R^N_dim), a velocity (v_i ∈ R^N_dim) and the locally best found position in its history, i.e., the own experience (l_i ∈ R^N_dim) of this particle.

In PSO, the set of particles P is initialized at time step t = 0 with randomly created particles P_i^0. The initial l_i are set to the corresponding initial x_i. Then, for each time step t, the next position x_i^(t+1) and velocity v_i^(t+1) of each particle P_i^t are computed as shown in Eqns. (1) and (2).
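Eqns. (1) and (2) are not reproduced in this excerpt. As a minimal sketch, the standard inertia-weight form of the update step can be written as below; the inertia weight w, the acceleration weights c1 and c2, and their default values are assumptions, not necessarily the exact formulation optimized in this paper.

```python
import random

# Minimal sketch of the standard inertia-weight PSO update for one
# particle (per dimension). The parameter names w, c1, c2 and their
# defaults are assumptions; the paper's exact Eqns. (1)-(2) may differ.

def pso_step(x, v, l, n, w=0.729, c1=1.494, c2=1.494):
    """Return the next position and velocity of one particle.

    x: current position x_i^t, v: velocity v_i^t,
    l: own best position l_i, n: best position in the neighborhood.
    """
    r1, r2 = random.random(), random.random()
    v_next = [w * vj + c1 * r1 * (lj - xj) + c2 * r2 * (nj - xj)
              for xj, vj, lj, nj in zip(x, v, l, n)]
    x_next = [xj + vj for xj, vj in zip(x, v_next)]
    return x_next, v_next

x_new, v_new = pso_step([0.0, 0.0], [0.1, -0.1], [1.0, 1.0], [2.0, 2.0])
```

Note that if a particle sits exactly on both its own best and the neighborhood best position with zero velocity, the update leaves it in place, which is consistent with the flocking picture of particles gathering around peaks.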