Applied Soft Computing 12 (2012) 2828–2839
An intuitive distance-based explanation of opposition-based sampling
Shahryar Rahnamayan a,∗, G. Gary Wang b,1, Mario Ventresca c,d
a Faculty of Engineering and Applied Science, University of Ontario Institute of Technology (UOIT), 2000 Simcoe Street North, Oshawa, Ontario, Canada L1H 7K4
b School of Engineering Science, Simon Fraser University, 250-13450 102 Avenue Surrey, BC, Canada V3T 0A3
c Center for Pathogen Evolution, Department of Zoology, University of Cambridge, Downing St., Cambridge CB2 3EJ, UK
d Department of Mechanical and Industrial Engineering, 5 King’s College Road, Toronto, ON, Canada M5S 3G8
Article info
Article history:
Received 1 October 2009
Received in revised form 2 November 2011
Accepted 18 March 2012
Available online 30 April 2012
Keywords:
Opposition-based learning
Opposite point
Sampling
Opposition-based optimization
Opposition-based soft computing
Abstract
The impact of the opposition concept can be observed in many areas around us. This concept has sometimes been called by different names, such as opposite particles in physics, the complement of an event in probability, the absolute or relative complement in set theory, and thesis and antithesis in dialectics. Recently, opposition-based learning (OBL) was proposed and has been utilized in different soft computing areas. The main idea behind OBL is the simultaneous consideration of a candidate and its corresponding opposite candidate in order to achieve a better approximation of the current solution. OBL has been employed to introduce opposition-based optimization, opposition-based reinforcement learning, and opposition-based neural networks, among other examples. This work proposes a Euclidean distance-to-optimal-solution proof that shows intuitively why considering the opposite of a candidate solution is more beneficial than considering a second random solution. The proposed intuitive view is generalized to N-dimensional search spaces for black-box problems.
© 2012 Elsevier B.V. All rights reserved.
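The claim above — that a candidate together with its opposite is, in expectation, closer to an unknown optimum than a candidate together with a second independent random sample — can be illustrated with a small Monte Carlo sketch. This is illustrative only, not the paper's formal proof: the one-dimensional uniform setup on [0, 1] and all names are assumptions.

```python
import random

def mean_min_distance(use_opposite, trials=200_000, seed=1):
    """Average distance from a hidden optimum s to the closer member of a
    pair of samples in [0, 1]: (x, 1 - x) if use_opposite, else (x, y)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = rng.random()                      # unknown optimum
        x = rng.random()                      # random candidate
        partner = 1.0 - x if use_opposite else rng.random()
        total += min(abs(x - s), abs(partner - s))
    return total / trials

print(mean_min_distance(True))   # close to 1/6 (candidate plus its opposite)
print(mean_min_distance(False))  # close to 5/24 (two independent samples)
```

For this uniform toy model the two expectations work out analytically to 1/6 ≈ 0.167 versus 5/24 ≈ 0.208, so the opposite pair is closer to the optimum on average, which matches the distance-based intuition the paper develops.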
1. Introduction
Opposition-based learning (OBL) was introduced by Tizhoosh in 2005 [18]. The main idea behind OBL is the simultaneous consideration of an estimate and its corresponding opposite estimate (i.e., a guess and its opposite guess) in order to achieve a better approximation of the current candidate solution. Later, by considering opposite individuals during opposition-based population initialization and generation jumping, OBL was employed to introduce opposition-based differential evolution (ODE) [3,4,7,8,14,17]. Comparative studies have confirmed that ODE performs better than DE in terms of convergence speed. A self-adaptive ODE was introduced in [11]. Comprehensive surveys of differential evolution are provided in [5,6]. By replacing opposite numbers with quasi-opposite numbers in ODE, quasi-oppositional DE (QODE) [10,12] was proposed. Both ODE and QODE used a constant generation jumping rate; variable jumping rates were investigated for ODE in [13]. A decreasing jumping rate presented better performance than a fixed one, which means opposition-based generation jumping is more beneficial during exploration than during exploitation.
A self-adaptive ODE with population size reduction was employed to tackle large scale problems2 [37]. As some applications of ODE among others, ODE with a small population size (Micro-ODE) was utilized for image thresholding [16]; results confirmed that Micro-ODE converges to the optimal solution faster than Micro-DE. An adaptive ODE was applied to the tuning of a chess program [39]. Similarly, by considering opposite states and opposite actions, opposition-based reinforcement learning (ORL) was proposed [19,20,24–26], and it was shown that ORL outperforms its parent algorithm (RL). ORL was applied to prostate ultrasound image segmentation [33] and the management of water resources [34]. Furthermore, opposition-based neural networks were introduced by considering opposite transfer functions and opposite weights [27,28,30]. Opposition-based simulated annealing (OSA) was proposed based on opposite neighbors [29]; OSA showed improvements in accuracy and convergence rate over traditional SA. By introducing opposite particles, particle swarm algorithms were accelerated and opposition-based PSO was introduced [38,40,45–48]. Opposition-based ant colony (OACO) algorithms were proposed by introducing opposite (anti-)pheromone [35,36]. Population-based incremental learning (PBIL) has also been greatly enhanced by considering opposite samples [31]. The performance of harmony search [32] and biogeography-based optimization [22,23] was also improved by OBL. All of these algorithms have tried to enhance searching or learning in different fields of soft computing and they were experimentally verified

∗ Corresponding author. Tel.: +1 905 721 8668x3843.
E-mail addresses: shahryar.rahnamayan@uoit.ca (S. Rahnamayan), gary wang@sfu.ca (G.G. Wang), mario.ventresca@utoronto.ca (M. Ventresca).
1 Tel.: +1 778 782 8495.
2 It uses the opposition concept implicitly by changing the sign of F and thus searching in the opposite direction.
doi:10.1016/j.asoc.2012.03.034