A Decomposition/Synchronization Scheme for Formulating and Solving Optimization Problems ⋆

James Anderson * Antonis Papachristodoulou **

* Department of Engineering Science, University of Oxford, OX1 3PJ, U.K. (e-mail: james.anderson@eng.ox.ac.uk)
** Department of Engineering Science, University of Oxford, OX1 3PJ, U.K. (e-mail: antonis@eng.ox.ac.uk)

Abstract: Large-scale optimization problems, even when convex, can be challenging to solve directly. Recently, a considerable amount of research has focused on developing methods for solving such optimization problems in a distributed manner. The assumption usually made is that the global objective function is a sum of convex functions, which is restrictive. In this paper, we automatically decompose a convex function to be minimized into a sum of smaller functions that may or may not be convex, and assign each sub-function to an agent in a networked system. Each agent is allowed to communicate with other agents in order to solve the original optimization problem. We propose an algorithm that converges when the interaction between the agents is strong enough to lead to synchronization of the common variables.

Keywords: Distributed Optimization, Synchronization, Lyapunov Stability.

1. INTRODUCTION

Understanding how large-scale dynamical systems, also called networked systems, function is an important research topic. Typical systems of interest include Internet resource allocation (Srikant, 2003); agreement, alignment and synchronization (Strogatz, 2003); multi-agent system consensus (Jadbabaie et al., 2003); flocking (Olfati-Saber, 2006); and sensor networks (Boyd et al., 2006). Such systems contain a large number of agents interacting over a network with a known or unknown topology. They can all be thought of as trying to optimize a global network objective for which a distributed solution is being sought.
It is not usually obvious how to decompose a large optimization problem into a sum of smaller subproblems so that, if these are assigned to agents, the agents can cooperatively solve the original problem using only locally available information. Frequently it is assumed that the problem has already been decomposed into smaller problems that have certain desirable properties, such as convexity.

Methods for solving convex optimization problems in a distributed manner are derived and analyzed in Bertsekas and Tsitsiklis (1989). Recently, Nedić and Ozdaglar (2009) extended the framework of optimizing a sum of convex functions to include cases where the objective function is not necessarily smooth and may be subject to a time-varying topology; they also provide the first convergence analysis for such a system. In Nedić and Ozdaglar (2008) their convergence analysis is extended to deal with systems with communication delays on the information exchange.

⋆ Supported financially by EPSRC.

Rantzer (2008) uses dynamic game theory and the concept of team games to show that a decentralized controller can guarantee global convergence towards a Nash equilibrium. Subsequently, Rantzer (2009) uses the Lagrangian dual decomposition approach (Boyd and Vandenberghe, 2004) to derive a distributed optimal controller, and provides a proof that a decentralized control law always stays within a predetermined distance from optimality.

In this paper we assume that the original optimization problem, which we aim to decompose and solve efficiently, is convex. Our approach puts more emphasis on the decomposition, and thus we make no prior assumptions about the convexity of the subproblems that may emerge from it. Using notions from algebraic and spectral graph theory, we describe a method for decomposing the original objective function into a sum of appropriately defined functions (each in a smaller number of variables).
Smaller optimization problems can thus be defined, which are assigned to a number of agents that are allowed to collaborate/communicate in order to find suboptimal solutions to the original optimization problem. The subproblems involve a number of intermediate states and constraints which ensure that agents who hold copies of the same variables communicate to one another their current estimates of these shared variables. We then solve the optimization subproblems using a sub-gradient descent algorithm and demonstrate that it is possible to achieve convergence, using ideas from synchronization theory, even when the subproblems are not convex.

The paper is organized as follows: In Section 2 we present background results and tools that will be used in the sequel. In Section 3 we introduce a motivating example
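To fix ideas, the following minimal sketch illustrates the general mechanism described above: two agents run gradient descent on local copies of a shared variable, with a diffusive coupling term that drives the copies towards synchronization. It is not the algorithm developed in this paper; the objective, its decomposition, the step size, and the coupling gain are all illustrative assumptions.

```python
# Illustrative sketch only: two agents jointly minimize the convex function
#   f(x1, x2, x3) = (x1-1)^2 + x1*x2 + x2^2 + x2*x3 + (x3+1)^2,
# decomposed as f1(x1, y) + f2(z, x3), where y and z are the two agents'
# local copies of the shared variable x2. The minimizer of f is (1, 0, -1).

def decomposed_descent(step=0.02, gain=0.5, iters=2000):
    # f1(x1, y) = (x1-1)^2 + x1*y + 0.5*y^2   (held by agent 1)
    # f2(z, x3) = 0.5*z^2  + z*x3 + (x3+1)^2  (held by agent 2)
    x1, y, z, x3 = 0.0, 0.0, 0.0, 0.0
    for _ in range(iters):
        g_x1 = 2.0 * (x1 - 1.0) + y       # d f1 / d x1
        g_y  = x1 + y                     # d f1 / d y
        g_z  = z + x3                     # d f2 / d z
        g_x3 = z + 2.0 * (x3 + 1.0)       # d f2 / d x3
        # Each agent takes a gradient step on its local objective; the
        # shared copies y and z also feel a diffusive coupling term that
        # pulls them towards each other (the synchronizing interaction).
        x1 -= step * g_x1
        x3 -= step * g_x3
        y  += -step * g_y + gain * (z - y)
        z  += -step * g_z + gain * (y - z)
    return x1, y, z, x3
```

With sufficiently strong coupling the two copies y and z synchronize up to a small residual disagreement (the coupling acts like a penalty on y ≠ z, so the residual shrinks as the gain grows), and (x1, (y+z)/2, x3) approaches the minimizer (1, 0, -1) of the original problem.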