Int. J. Electron. Commun. (AEÜ) 66 (2012) 107–114
Particle Swarm Optimization with Parameter Dependency Walls and its sample
application to the microstrip-like interconnect line design
O. Tolga Altinoz a, Asim Egemen Yilmaz b,∗
a Electrical and Electronics Engineering, Hacettepe University Bala Vocational School of Industrial Electronics, Bala, Ankara, Turkey
b Ankara University, Department of Electronics Engineering, 06100 Tandogan, Ankara, Turkey
Article info
Article history:
Received 5 August 2010
Accepted 21 May 2011
Keywords:
Particle Swarm Optimization
Multidimensional optimization
Parameter dependency
Reflecting boundary conditions
Microstrip-like interconnect line
Abstract
In this paper, we first propose and formulate a novel approach in Particle Swarm Optimization (which we
call “Particle Swarm Optimization with Parameter Dependency Walls”) for handling the dependencies
between the design parameters. After revisiting the definition of Particle Swarm
Optimization, we try to visualize the physical meaning of the concepts lying beneath
our approach, and demonstrate the existence of the analytical solution needed
throughout the implementation. In order to illustrate a practical application
of our approach, we first revisit the empirical closed-form characteristic impedance expression for the
microstrip-like interconnect line with a ground plane aperture. Then, we apply our approach in order to
calculate the optimized parameters of microstrip-like interconnect lines in the synthesis problem. The
proposed procedure can be useful for the rapid solution of the optimization problems in which there
exist dependencies between/among the input variables.
© 2011 Elsevier GmbH. All rights reserved.
1. Introduction
Daily life frequently forces us to try to find solutions for inverse
problems, for which straightforward solution approaches are usu-
ally inapplicable. Once the problem is expressed as an optimization
problem, namely a minimization/maximization problem repre-
sented via a cost (or penalty)/fitness function together with its
constraints, the next step would be selection and application of an
appropriate method for the solution. Most of the conventional opti-
mization methods make several assumptions about the cost/fitness
function, such as continuity or differentiability. Some methods
also might require a priori information about the behavior of the
function, such as the tendency of its gradient. In practice, such
information quite often does not exist; moreover, even when such
information exists, it frequently reveals that the cost/fitness
function has discontinuities and/or non-differentiable points.
The class of optimization methods known as "Metaheuristics"
is advantageous for such cases. Being essentially systematic
trial-and-error procedures, these algorithms neither require
a priori information about the cost/fitness function, nor
make assumptions about it. Being able to compute the value of the
cost/fitness function is sufficient for these algorithms.

∗ Corresponding author. Tel.: +90 312 203 35 00; fax: +90 312 212 54 80. E-mail addresses: taltinoz@hacettepe.edu.tr (O.T. Altinoz), aeyilmaz@eng.ankara.edu.tr, asimegemenyilmaz@yahoo.com (A.E. Yilmaz).

Moreover, it is possible and in most cases very easy to handle the constraints of
the problem by means of these algorithms. There are a considerable
number of algorithms of this sort (such as Simulated Annealing,
Tabu Search, Genetic Algorithm, Particle Swarm Optimization, Ant
Colony Optimization, Differential Evolution, etc.) with numerous
variants handling multi-dimensional, single- or multi-objective,
continuous or combinatorial optimization problems. Due to their
nature, most of the algorithms in this class allow parallel imple-
mentations, which make the solution of very large scale problems
possible.
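Since the only requirement on the cost/fitness function is that it can be evaluated, one common way to fold constraints into such a black-box formulation is an additive penalty term. The following minimal sketch (not from the paper; the penalty weight and the example constraint are illustrative assumptions) shows the idea for constraints of the form g(x) ≤ 0:

```python
def penalized_cost(cost, constraints, penalty=1e6):
    """Wrap a black-box cost with additive penalties for violated constraints.

    Each element of `constraints` is a function g with feasible region g(x) <= 0;
    any positive value of g(x) is treated as a violation. The weight `penalty`
    is an illustrative choice, not a value prescribed by the paper.
    """
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return cost(x) + penalty * violation
    return wrapped

# hypothetical example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
f = penalized_cost(lambda x: x * x, [lambda x: 1.0 - x])
```

The wrapped function remains a plain black box, so any metaheuristic that only evaluates cost values can use it unchanged.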
Certainly, due to the randomness inherent in their very definition,
Metaheuristics always carry the risk of getting stuck at local
optima, or of failing to converge to a reasonable solution. On
the other hand, the prevention of such situations has been studied
extensively for all of these algorithms. In other words, when carefully
implemented (i.e. by surveying the literature and following the
recommendations in the relevant publications), it is quite possible
to achieve near-excellent results with these algorithms.
Among the Metaheuristics, our particular interest in this study is
Particle Swarm Optimization. Being a simple but powerful
method, the algorithm has so far been applied to numerous
problems in various disciplines. Beyond its general advantages,
as will be seen in Section 2, there is another point that
makes the Particle Swarm Optimization method quite attractive
for us: it handles dependencies between parameters in a
simple but effective manner. We try to benefit from this feature
and develop a more concrete formulation throughout this
study.
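Although the algorithm is formally revisited in Section 2, the core of Particle Swarm Optimization can be sketched here for orientation. The following is a minimal, generic gbest-PSO for box-constrained minimization, not the paper's proposed variant; parameter values (inertia weight w and acceleration coefficients c1, c2) are common choices from the literature, assumed here for illustration:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimize a black-box function f over a box; plain gbest PSO sketch."""
    lo, hi = bounds
    # random initial positions, zero initial velocities
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]            # personal best positions
    pbest_val = [f(p) for p in x]           # personal best values
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = list(pbest[g]), pbest_val[g]

    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive + social terms
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(x[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(x[i]), val
    return gbest, gbest_val
```

Note that this sketch ignores both boundary handling and parameter dependencies; the treatment of exactly those two issues (reflecting boundary conditions and the proposed "Parameter Dependency Walls") is the subject of the following sections.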
doi:10.1016/j.aeue.2011.05.009