Is Selection Optimal for Scale-Free Small Worlds?
Zs. Palotai (a), Cs. Farkas (b), A. Lőrincz (a)

(a) Department of Information Systems, Eötvös Loránd University, Budapest, Hungary
(b) Department of Computer Science and Engineering, University of South Carolina, Columbia, S.C., USA

Correspondence to: A. Lőrincz, Department of Information Systems, Eötvös Loránd University, Pázmány Péter sétány 1/c, HU–1117 Budapest (Hungary); Tel. +36 1 209 0555/8473, Fax +36 1 381 2140, E-Mail andras.lorincz@elte.hu
© 2006 S. Karger AG, Basel
Complexus 2006;3:158–168
NETWORK MODELLING
Key Words
Scale-free small world · No free lunch theorem · Internet
Abstract
The ‘no free lunch theorem’ claims that, over the set of all problems, no algorithm performs better than random search and, thus, selection can be advantageous only on a limited set of problems. In this paper we investigate how the topological structure of the environment influences algorithmic efficiency. We study the performance of algorithms using selective learning, reinforcement learning, and their combinations in random, scale-free, and scale-free small world (SFSW) environments. The learning problem is to search for novel, not-yet-found information. We ran our experiments on a large news site and on its downloaded portion. Controlled experiments were performed on this downloaded portion: we modified the topology but preserved the publication time of the news. Our empirical results show that selective learning is the most efficient in the SFSW topology. In non-small world topologies, however, the combination of the selective and reinforcement learning algorithms performs best.
Published online: August 25, 2006
DOI: 10.1159/000094197
Simplexus
A free lunch after all?
Developers of web search engines and data mining tools expend vast sums attempting to find the most efficient ways for users to search. Mighty algorithms trawl indexes with great speed and seemingly great efficacy, narrowing a keyword search to a ‘short’ list of results within seconds. However, the ‘no free lunch theorem’ holds that there simply is no algorithm that can perform better than random selection on the set of all problems. In other words, no matter how hard those developers try, they will simply never beat a randomly selected set of ‘results’.
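The flavor of the theorem can be seen in a minimal, self-contained sketch (not from the paper itself): enumerate every Boolean function on a three-point domain, treat any point where the function equals 1 as a success, and compare two fixed, non-repeating search orders. Averaged over all eight possible problems, the two orders need exactly the same number of evaluations — neither search strategy has an edge once every problem counts equally. The domain size and the two orders are illustrative choices.

```python
from itertools import product

def steps_to_find(f, order):
    """Number of evaluations until the first x with f(x) == 1.
    If no such point exists, all evaluations are spent."""
    for i, x in enumerate(order, start=1):
        if f[x] == 1:
            return i
    return len(order)

# All functions f: {0, 1, 2} -> {0, 1}, i.e. all 2**3 = 8 possible problems.
all_functions = list(product([0, 1], repeat=3))

order_a = [0, 1, 2]  # 'ascending' search
order_b = [2, 1, 0]  # 'descending' search

avg_a = sum(steps_to_find(f, order_a) for f in all_functions) / len(all_functions)
avg_b = sum(steps_to_find(f, order_b) for f in all_functions) / len(all_functions)

print(avg_a, avg_b)  # identical averages over the full problem set
```

Any fixed visiting order gives the same average here, which is why performance differences must come from restricting the problem set — exactly the question the paper asks about topologically structured environments.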
In the present paper, Palotai, Farkas, and Lőrincz seek to understand how the topological structure of the environment influences algorithmic efficiency and whether or not there might be a ‘free’ lunch after all. They compare the performance of algorithms in unearthing news from a large news website, using basic learning techniques – selective learning, reinforcement learning (RL), and their combinations – in random, scale-free, and scale-free small world (SFSW) environments. Their empirical results suggest that selective learning is the most efficient in SFSW topology, but in non-small world topologies, combining selective and RL algorithms gave the best results.
Evolving systems, both natural and artificial (like the Web), exhibit scale-free or SFSW properties. Previous researchers have shown that there is no performance difference between optimization or search algorithms if the algorithms are tested on every possible problem. This implies that differences in the performance of specific algorithms are simply a result of specific properties of the problem being looked at. By uncovering these properties, it should then be possible to develop optimized search approaches, despite the no free lunch theorem. The structure of a database or index is an important property and others have already demonstrated that an evo-