Towards a Decentralized Architecture for Optimization

Marco Biazzini, Mauro Brunato and Alberto Montresor
University of Trento
Dipartimento di Ingegneria e Scienza dell'Informazione
via Sommarive 14, 38100 Trento, Italy
{biazzini,brunato,montresor}@disi.unitn.it

Abstract

We introduce a generic framework for the distributed execution of combinatorial optimization tasks. Instead of relying on custom hardware (such as dedicated parallel machines or clusters), our approach exploits, in a peer-to-peer fashion, the computing and storage power of existing, off-the-shelf desktops and servers. The contributions of this paper are a description of the generic framework, together with a first instantiation based on particle swarm optimization (PSO). Simulation results are shown, demonstrating the efficacy of our distributed PSO algorithm in optimizing a large number of benchmark functions.

1 Introduction

Distributed optimization has a long research history [14]. Most of the previous work assumes the availability of either a dedicated parallel computing facility or, in the worst case, specialized clusters of networked machines that are coordinated in a centralized fashion (master-slave, coordinator-cohort, etc.). While these approaches simplify management, they normally show severe limitations with respect to scalability and robustness.

The goal of our work is to investigate an alternative approach to distributed function optimization. The idea is to adopt recent results from the domain of large-scale decentralized and peer-to-peer (P2P) systems, in which a large collection of loosely-coupled machines cooperates to achieve a common goal. Instead of requiring a specialized infrastructure or a central server, such systems self-organize in a completely decentralized way, avoiding single points of failure and performance bottlenecks.
The advantages of such an approach are thus extreme robustness and scalability, together with the capability of exploiting existing (unused or underused) resources.

The application scenario we have in mind is a potentially large organization that owns, or at least controls, several hundreds or even thousands of personal workstations, and wants to exploit their idle periods to perform optimization tasks. In such systems, a high level of churn may be expected: nodes may join and leave the system at will, for example when users start or stop working at their workstations.

Such a scenario is not unlike a Grid system [12]; a reasonable approach could thus be to collect a pool of independent optimization tasks to be performed and assign each of them to one of the available nodes, taking care of balancing the load. This can be done either with a centralized scheduler or with a decentralized approach [4].

An interesting question is whether it is possible to come up with an alternative approach, in which a distributed algorithm spreads the load of a single optimization task among a group of nodes in a robust, decentralized and scalable way. We can rephrase the question as follows: can we make better use of our distributed resources by making them cooperate on a single optimization process? Two possible motivations for such an approach come to mind: we want to obtain a more accurate result by a specific deadline (focus on quality), or we are allowed to perform a predefined amount of computation over a function and we want to obtain a quick answer (focus on speed).

Two opposite techniques could be followed to design a distributed optimization algorithm:

Without coordination: exploiting stochasticity. Global optimization algorithms are stochastic by nature; in particular, the first evaluation is not driven by prior information, so the earliest stages of the search require some random decisions.
Different runs of the same algorithm can evolve in very different ways, so that the parallel independent execution of identical algorithms with different random seeds yields a better expected outcome with respect to a single execution.

With coordination: exploiting communication. Some optimization algorithms can be modeled as par-