A Hybrid Particle Swarm Algorithm with Cauchy Mutation

Hui Wang, School of Computer Science, China University of Geosciences, Wuhan, 430074 China, wanghui_cug@yahoo.com.cn
Changhe Li, School of Computer Science, China University of Geosciences, Wuhan, 430074 China, lchwfx@yahoo.com.cn
Yong Liu, University of Aizu, Tsuruga, Ikki-machi, Aizu-Wakamatsu, Fukushima 965-8580 Japan, yliu@u-aizu.ac.jp
Sanyou Zeng, School of Computer Science, China University of Geosciences, Wuhan, 430074 China, sanyou-zeng@263.net

Abstract— Particle Swarm Optimization (PSO) has shown fast search speed on many complicated optimization and search problems. However, PSO can easily fall into local optima because the particles quickly converge toward the best particle. In such situations, the best particle can hardly be improved. This paper proposes a new hybrid PSO (HPSO) to address this problem by applying a Cauchy mutation to the best particle, so that the mutated best particle can lead the remaining particles to better positions. Experimental results on many well-known benchmark optimization problems show that HPSO successfully deals with difficult multimodal functions while maintaining fast search speed on simple unimodal functions.

I. INTRODUCTION

Particle Swarm Optimization (PSO) was first introduced by Kennedy and Eberhart in 1995 [1]. It is a simple evolutionary algorithm that differs from other evolutionary algorithms in that it is motivated by the simulation of social behavior. PSO has shown good performance in finding good solutions to optimization problems [2], and has turned out to be another powerful tool besides other evolutionary algorithms such as genetic algorithms [3]. Like other evolutionary algorithms, PSO is a population-based search algorithm that starts with an initial population of randomly generated solutions called particles [4]. Each particle in PSO has a position and a velocity.
PSO remembers both the best position found by all particles and the best position found by each particle in the search process. For a search problem in an n-dimensional space, a potential solution is represented by a particle that adjusts its position and velocity according to Eqs. (1) and (2):

V_i(t+1) = w * V_i(t) + c1 * rand1() * (P_i − X_i(t)) + c2 * rand2() * (P_g − X_i(t))    (1)

X_i(t+1) = X_i(t) + V_i(t+1)    (2)

where X_i and V_i are the position and velocity of particle i; P_i and P_g are the previous best position of the i-th particle and the global best position found by all particles so far, respectively; w is an inertia factor proposed by Shi and Eberhart [5]; rand1() and rand2() are two random numbers independently generated within the range [0,1]; and c1 and c2 are two learning factors which control the influence of the social and cognitive components.

One problem found in the standard PSO is that it can easily fall into local optima on many optimization problems. Some research has been done to tackle this problem [6-8]. One reason for PSO to converge to local optima is that the particles can quickly converge to the best position once the best position stops changing in a local optimum. When all particles become similar, there is little hope of finding a better position to replace the best position found so far.

In this paper, a new hybrid PSO (HPSO) is proposed. HPSO borrows an idea from fast evolutionary programming (FEP) [9]: it mutates the best position with a Cauchy mutation. The hope is that the long jump produced by the Cauchy mutation can move the best position out of the local optimum into which it has fallen. HPSO has been tested on both unimodal and multimodal function optimization problems. Comparisons have been conducted between HPSO and another improved PSO called FDR-PSO [10]. HPSO has also been compared to other evolutionary algorithms, such as classical EP (CEP) and FEP [9].
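As an illustration, the update rules of Eqs. (1)–(2) and the Cauchy mutation of the global best can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation; the function names (`pso_update`, `cauchy_mutate`), the coefficient values, and the mutation `scale` parameter are illustrative assumptions.

```python
import math
import random

def pso_update(x, v, p_best, g_best, w=0.729, c1=1.49445, c2=1.49445):
    """One velocity/position update for a single particle, per Eqs. (1)-(2).

    x, v: current position and velocity; p_best: this particle's best
    position; g_best: global best position. Coefficient values are common
    defaults from the PSO literature, assumed here for illustration.
    """
    new_v = [w * vi
             + c1 * random.random() * (pi - xi)   # cognitive component
             + c2 * random.random() * (gi - xi)   # social component
             for xi, vi, pi, gi in zip(x, v, p_best, g_best)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

def cauchy_mutate(g_best, scale=1.0):
    """Long-jump mutation of the global best with standard Cauchy noise.

    A standard Cauchy variate can be drawn via the inverse CDF:
    tan(pi * (U - 0.5)) with U uniform on (0, 1). Its heavy tails
    occasionally produce the long jumps that HPSO relies on.
    """
    return [gi + scale * math.tan(math.pi * (random.random() - 0.5))
            for gi in g_best]
```

In practice one would accept the mutated position only if it improves the fitness of the global best; that greedy acceptance step is an assumption about the surrounding loop, not something shown in the sketch above.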
The rest of the paper is organized as follows: Section 2 describes the new HPSO algorithm. Section 3 lists the benchmark functions used in the experiments and gives the experimental settings. Section 4 presents and discusses the experimental results. Finally, Section 5 concludes with a summary and a few remarks.

II. HPSO ALGORITHM

Some theoretical results have shown that a particle in PSO oscillates between its previous best position and the global best position found by all particles so far before it converges [11-12]. If search neighbors of the global best particle were added in each generation, it would