Non-parametric particle swarm optimization for global optimization
Zahra Beheshti, Siti Mariyam Shamsuddin∗
UTM Big Data Centre, Universiti Teknologi Malaysia, Skudai, 81310 Johor, Malaysia
∗ Corresponding author. Tel.: +60123710679. E-mail addresses: bzahra2@live.utm.my (Z. Beheshti), mariyam@utm.my (S.M. Shamsuddin).
Article info
Article history:
Received 30 March 2014
Received in revised form 15 October 2014
Accepted 9 December 2014
Available online 18 December 2014
Keywords:
Optimization problems
Particle swarm optimization
Global and local optimum
Non-parametric particle swarm optimization
Abstract
In recent years, particle swarm optimization (PSO) has been extensively applied to various optimization problems because of its simple structure. However, PSO may become trapped in local optima or exhibit a slow convergence speed when solving complex multimodal problems. Moreover, the algorithm requires setting several parameters, and tuning these parameters is challenging for some optimization problems. To address these issues, an improved PSO scheme is proposed in this study. The algorithm, called non-parametric particle swarm optimization (NP-PSO), enhances global exploration and local exploitation in PSO without tuning any algorithmic parameter. NP-PSO combines local and global topologies with two quadratic interpolation operations to increase the search ability. Nineteen (19) unimodal and multimodal nonlinear benchmark functions are selected to compare the performance of NP-PSO with that of several well-known PSO algorithms. The experimental results show that the proposed method considerably enhances the efficiency of the PSO algorithm in terms of solution accuracy, convergence speed, global optimality, and algorithm reliability.
1. Introduction
PSO [1] is a population-based algorithm inspired by the social behavior of bird flocking and fish schooling. In the algorithm, each member of the swarm, called a particle, represents a potential solution, which is a point in the search space, and the global optimum is regarded as the location of food. Each particle adjusts its flying direction according to the best experiences obtained by itself and by the swarm in the solution space. The algorithm has a simple concept and is easy to implement; hence, it has received considerable attention for solving real-world optimization problems [2–7]. Nevertheless, PSO may easily become trapped in local optima and shows a slow convergence rate when solving complex, high-dimensional multimodal objective functions [8].
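For concreteness, the canonical update that each particle applies can be sketched as below. This is a minimal illustration of standard PSO, not the NP-PSO method proposed here; the function name, the default values of the inertia weight w and acceleration coefficients c1 and c2, and the use of NumPy are our assumptions.

    import numpy as np

    # Minimal sketch of the standard PSO update (illustrative; not NP-PSO).
    # w, c1 and c2 are the tunable parameters whose setting NP-PSO avoids.
    def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=np.random):
        r1 = rng.random(x.shape)  # per-dimension random numbers in [0, 1)
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
        return x + v, v  # new position and velocity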
A number of PSO variants have been proposed in the literature to overcome these problems. These algorithms improve the performance of PSO in different ways, for example by using various types of topologies, selecting suitable parameters, or combining PSO with other search techniques.
A local (ring) topological structure PSO (LPSO) [9] and a Von Neumann topological structure PSO (VPSO) [10] were proposed by Kennedy and Mendes to avoid trapping into local optima. According to Kennedy [9,11], PSO with a small neighborhood might have
a better performance on complex problems, while PSO with a large
neighborhood would perform better on simple problems. Suganthan [12] applied a dynamically adjusted neighborhood in which the neighborhood of a particle gradually increases until it includes all particles. Dynamic multi-swarm PSO (DMS-PSO) [13] was suggested by Liang and Suganthan, in which the population is divided into small sub-swarms that are frequently regrouped to exchange information. Hu and Eberhart [14] applied a dynamic neighborhood in which the m nearest particles in the performance space are chosen as a particle's new neighborhood in each generation. Mendes et al. [15] presented the fully informed particle swarm (FIPS) algorithm, which uses the information of the entire neighborhood to guide the particles toward the best solution. Parsopoulos and Vrahatis combined the global and local versions together to form the unified particle swarm optimizer (UPSO) [16]. Gao et al. [17] used PSO with a stochastic search technique and chaotic opposition-based population initialization to solve complex multimodal problems. Their algorithm, CSPSO, finds new solutions in the neighborhoods of the previous best positions to escape from local optima.
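To illustrate the difference between these topologies, the sketch below shows how a ring topology such as LPSO selects each particle's guide from its immediate neighbors only; the function and variable names and the minimization convention are ours, not from the cited papers.

    import numpy as np

    # Illustrative local-best selection under a ring topology (cf. LPSO [9]).
    # Each particle consults only itself and its two ring neighbors, which
    # slows the spread of information and helps avoid premature convergence.
    def ring_local_best(positions, fitness):
        n = len(fitness)
        lbest = np.empty_like(positions)
        for i in range(n):
            neighborhood = [(i - 1) % n, i, (i + 1) % n]  # ring of size 3
            best = min(neighborhood, key=lambda j: fitness[j])  # minimization
            lbest[i] = positions[best]
        return lbest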
The fitness-distance-ratio-based PSO (FDR-PSO) was introduced by Peram et al. [18]. In this algorithm, each particle moves toward a nearby particle with a higher fitness value. Liang et al. [8] developed comprehensive learning particle swarm optimization (CLPSO), which avoids local optima by encouraging each particle to learn its behavior from different particles on different dimensions.
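A hedged sketch of the fitness-distance-ratio idea in FDR-PSO is given below: for each dimension of a particle, the guide is the personal best position offering the largest fitness improvement per unit of distance along that dimension. The array layout, names, and minimization convention are our assumptions, not details from [18].

    import numpy as np

    # Illustrative per-dimension guide selection in the spirit of FDR-PSO [18].
    # pbest has shape (n_particles, dims); pbest_fit has shape (n_particles,).
    def fdr_guides(x_i, f_i, pbest, pbest_fit, eps=1e-12):
        gain = f_i - pbest_fit             # fitness improvement (minimization)
        dist = np.abs(pbest - x_i) + eps   # per-dimension distances to x_i
        ratio = gain[:, None] / dist       # fitness-distance ratio, per dimension
        winners = np.argmax(ratio, axis=0) # best donor particle per dimension
        return pbest[winners, np.arange(pbest.shape[1])]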
In other research, a selection operator was first proposed for PSO by Angeline [19]. Other researchers incorporated crossover [20] and mutation [21] operations from genetic algorithms (GAs) into PSO. An