Life Science Journal 2014;11(10) http://www.lifesciencesite.com

Laplace Mutated Particle Swarm Optimization (LMPSO)

Muhammad Imran 1, Rathiah Hashim 1 and Noor Eliza Abd Khalid 2

1 Universiti Tun Hussein Onn Malaysia, 86400 Parit Raja, Batu Pahat, Johor, Malaysia
2 Universiti Teknologi MARA, Malaysia
1 malikimran110@gmail.com, 1 radhiah@uthm.edu.my, 2 elaiza@tmsk.uitm.edu.my

Abstract: The Particle Swarm Optimization (PSO) algorithm has shown good performance on many optimization problems. However, it can become stuck in local minima. To prevent this premature convergence to a local minimum, researchers have proposed various variants of PSO. This research reviews the variants of PSO that have been proposed for the function optimization problem and proposes a new variant, named LMPSO, based on the Laplace distribution. The performance of LMPSO is compared with existing PSO variants for function optimization, and the analysis shows the effect of different mutation operators on PSO. To validate LMPSO, experiments are performed on 22 benchmark functions. The results show that LMPSO achieves better performance than previous PSO variants.

[Imran M, Hashim R, Khalid NEA. Laplace Mutated Particle Swarm Optimization (LMPSO). Life Sci J 2014;11(10):292-299] (ISSN:1097-8135). http://www.lifesciencesite.com

Keywords: PSO, Mutation, Laplace, function optimization, PSO variants

1. Introduction
PSO is a population-based optimization method proposed by Kennedy and Eberhart [1]. The algorithm simulates the behaviour of a bird flock flying together in a multi-dimensional space in search of some optimum place, with the birds adjusting their movements and distances for a better search [1]. PSO is very similar to evolutionary computation techniques such as the Genetic Algorithm (GA): the swarm is randomly initialized and then searches for an optimum solution by updating generations [1].
PSO is a combination of two approaches: a cognition model, which is based on a particle's own experience, and a social model, which incorporates the experience of its neighbours. The algorithm mimics particles flying in the search space and moving towards the globally optimal solution. A particle in PSO can be defined as P_i ∈ [a, b], where i = 1, 2, 3, …, D, a, b ∈ R, D is the number of dimensions and R is the set of real numbers [2]. All particles are initialized with random positions and random velocities [1]; the particles then move towards new positions based on their own experience and the experience of their neighbourhood. Each particle in PSO maintains two important positions, called pbest and gbest, where pbest is the particle's own best position and gbest is the global best position among all particles. The velocity and position of each particle are updated by equations (1) and (2):

v_i(t+1) = v_i(t) + c1*r1*(pbest_i − x_i(t)) + c2*r2*(gbest − x_i(t)) …… (1)
x_i(t+1) = x_i(t) + v_i(t+1) …… (2)

where x_i is the position and v_i is the velocity of particle i, pbest_i is its personal best position and gbest is the global best position of the swarm. In these equations r1 and r2 are two random numbers drawn from the range (0, 1), and c1 and c2 are learning factors that control the influence of the cognitive and social components, respectively.

2. PSO Variants
J. Kennedy and R. Eberhart proposed PSO in 1995. Despite the successful application of PSO to optimization, one problem with PSO is its tendency to become stuck in local minima. To address this, researchers have proposed a number of PSO variants based on different parameters and operators. The following sections discuss the variants of PSO in detail, grouped by variant type.

2.1. Initialization
Initialization of the population plays an important role in evolutionary and swarm-based algorithms.
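The velocity and position updates of equations (1) and (2) above can be sketched as follows; the function name, list-based particle representation and default coefficients c1 = c2 = 2.0 are illustrative assumptions, not values taken from the paper:

```python
import random

def pso_update(x, v, pbest, gbest, c1=2.0, c2=2.0):
    """One PSO step per equations (1) and (2).

    x, v, pbest are one particle's position, velocity and personal best
    (lists, one entry per dimension); gbest is the swarm's global best.
    c1 = c2 = 2.0 is a common default, assumed here for illustration.
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()  # r1, r2 ~ U(0, 1)
        # Equation (1): inertia + cognitive pull + social pull
        vd = v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        # Equation (2): move by the new velocity
        new_x.append(x[d] + vd)
    return new_x, new_v
```

Note that when a particle already sits at both pbest and gbest, both difference terms vanish and the particle simply coasts on its current velocity.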
If the initialization is inappropriate, the algorithm may search unwanted regions and may be unable to find the optimal solution. Nguyen et al. [3] investigated several randomized low-discrepancy sequences for initializing the swarm in order to improve the performance of PSO. They used three low-discrepancy sequences: Halton, Faure and Sobol. The Halton sequence is an extension of the van der Corput sequence: van der Corput is one-dimensional, and Halton extends it to cover an N-dimensional search space.
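A minimal sketch of this kind of low-discrepancy initialization is shown below. The van der Corput sequence reflects the base-b digits of the index about the radix point, and the Halton sequence uses one van der Corput sequence with a distinct prime base per dimension; the function names and the choice of primes here are illustrative, and Nguyen et al.'s exact randomized setup may differ:

```python
def van_der_corput(n, base=2):
    """n-th term of the van der Corput sequence in the given base."""
    q, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def halton_point(n, dim):
    """n-th Halton point: one van der Corput coordinate per dimension,
    each with a distinct prime base."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # enough for 10 dims
    return [van_der_corput(n, primes[d]) for d in range(dim)]

def init_swarm(size, dim, lo, hi):
    """Initial particle positions in [lo, hi]^dim from Halton points.

    Indexing starts at 1 because index 0 is the all-zeros point.
    """
    return [[lo + (hi - lo) * c for c in halton_point(i, dim)]
            for i in range(1, size + 1)]
```

Unlike purely random initialization, consecutive Halton points fill the search space evenly (e.g. in base 2 the first coordinates are 1/2, 1/4, 3/4, 1/8, …), which is exactly the property that motivates their use for swarm initialization.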