Almost Sure Convergence to Consensus in Markovian Random Graphs

Ion Matei, Nuno Martins and John S. Baras

Abstract— In this paper we discuss the consensus problem for a network of dynamic agents with undirected information flow and random switching topologies. The switching is determined by a Markov chain, each topology corresponding to a state of the Markov chain. We show that in order to achieve consensus almost surely and from any initial state, the sets of graphs corresponding to the closed positive recurrent sets of the Markov chain must be jointly connected. The analysis relies on tools from matrix theory, Markovian jump linear systems theory and random processes theory. The distinctive feature of this work is addressing the consensus problem with "Markovian switching" topologies.

I. INTRODUCTION

A consensus problem, which lies at the foundation of distributed computing, consists of a group of dynamic agents who seek to agree upon certain quantities of interest by exchanging information among themselves according to a set of rules. This problem can model many phenomena involving information exchange between agents, such as cooperative control of vehicles, formation control, flocking, synchronization, parallel computing, etc. The consensus problem has therefore been widely studied in the literature. Distributed computation over networks has a long history in control theory, starting with the work of Borkar and Varaiya [1] and of Tsitsiklis, Bertsekas and Athans [15], [16] on asynchronous agreement problems and parallel computing. Olfati-Saber and Murray introduced in [11], [12] the theoretical framework for solving consensus problems. Jadbabaie et al. studied in [6] alignment problems involving reaching an agreement. Relevant extensions of the consensus problem were developed by Ren and Beard [10] and by Moreau [8]. The communication network between agents may change in time due to link failures, packet drops, appearance or disappearance of nodes, etc.
Many of these variations in topology occur randomly, which leads to the consideration of consensus problems in a stochastic framework. Hatano and Mesbahi consider in [7] an agreement problem over random information networks, where the existence of an information channel between a pair of elements at each time instance is probabilistic and independent of the other channels. In [14] Salehi and Jadbabaie provide necessary and sufficient conditions for reaching consensus in the case of a discrete linear system, where the communication flow is given by a graph derived from a random graph process, independent of other time instances. Under a similar model for the network communication topology, Porfiri and Stilwell give sufficient conditions for almost sure convergence to consensus in [9].

Ion Matei, Nuno Martins and John S. Baras are with the Institute for Systems Research and the Department of Electrical and Computer Engineering, University of Maryland, College Park; imatei, nmartins, baras@umd.edu

Modeling the variations in the communication topologies as independent events may not be realistic enough. Consider the example of an agent which may adjust the power allocated to transmissions in order to overcome the failure of a link caused by a large distance between agents. The actions of such an agent determine a change in the communication topology which depends on the state of the network at previous time instants. In this paper we consider the consensus problem for a group of dynamic agents with undirected information flow and random switching topologies. The switching process is governed by a Markov chain whose states correspond to possible communication topologies. We formulate necessary and sufficient conditions under which the agents reach consensus almost surely.

The outline of the paper is as follows. In Section II we present the setup and formulation of the problem. In Section III we state our main result and give an intuitive explanation.
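To build intuition for the setup described above, the following numerical sketch (not taken from the paper; the specific graphs, step size ε and transition matrix are illustrative choices, and the standard update x(k+1) = (I − εL(M(k)))x(k) is assumed) shows two topologies that are each disconnected but jointly connected, switched by a two-state Markov chain, still driving three agents to consensus on their initial average:

```python
import numpy as np

# Two illustrative 3-node topologies: each is disconnected on its own,
# but their union (edges (1,2) and (2,3)) is a connected graph, i.e.
# the set {G1, G2} is jointly connected.
A1 = np.array([[0, 1, 0],
               [1, 0, 0],
               [0, 0, 0]], dtype=float)   # edge (1,2) only
A2 = np.array([[0, 0, 0],
               [0, 0, 1],
               [0, 1, 0]], dtype=float)   # edge (2,3) only

def laplacian(A):
    # L = D - A, with D the diagonal matrix of vertex degrees
    return np.diag(A.sum(axis=1)) - A

# Transition matrix of the two-state Markov chain governing the switching.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

rng = np.random.default_rng(0)
eps = 0.4                        # step size; eps < 1/(max degree) keeps the update stable
x = np.array([1.0, 5.0, 9.0])    # initial agent states (average = 5)
state = 0                        # initial state of the Markov chain
for k in range(2000):
    L = laplacian(A1 if state == 0 else A2)
    x = x - eps * L @ x          # x(k+1) = (I - eps * L(M(k))) x(k)
    state = rng.choice(2, p=P[state])

print(np.round(x, 4))            # all entries converge to the initial average 5.0
```

Since each I − εL here is symmetric and doubly stochastic, the average of the states is preserved at every step, so the consensus value is the initial average.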
In Section IV we first provide a set of theoretical tools used in proving the main result and then proceed with the proof of the main theorem.

II. PROBLEM FORMULATION

In this section we introduce the problem setup for almost sure convergence to consensus in a discrete-time context. We consider a group of n dynamic agents for which the information flow is modeled as an undirected graph G = (V, E, A) of order n. The set V = {1, ..., n} represents the set of vertices, E ⊆ V × V is the set of edges and A = [a_ij] is the symmetric unweighted adjacency matrix, with a_ij equal to 1 if a link exists between vertices i and j and zero otherwise. By graph Laplacian we understand the matrix L = [l_ij] whose entries are

l_ij = ∑_{k=1, k≠i}^{n} a_ik  if j = i,    l_ij = −a_ij  if j ≠ i.

Equivalently, the Laplacian of an undirected graph can be written as L = D − A, where A is the adjacency matrix of the graph and D is the diagonal matrix of vertex degrees, d_ii = ∑_{j≠i} a_ij. Throughout this paper we will consider (except in Section V) undirected, unweighted graphs.

Definition 2.1: (Jointly Connected Graphs) Let {G_i}_{i=1}^{s} be a set of undirected graphs. We say that this set is jointly connected if the union of the graphs in the set generates a connected graph, where by union we understand the union of the edges of all graphs in the set (we use the same definition as in [6]).

Consider a finite-state Markov chain M(k) which takes values in the discrete set S = {1, ..., s} and with an s × s