Decentralized Bayesian Consensus over Networks

Volker Willert, Dominik Haumann, Stefan Gering

Abstract— This paper deals with networked, dynamical multi-agent systems (MAS) trying to reach consensus about their states subject to uncertain data transfer and noisy measurements. For this, an analogy between the deterministic consensus protocol and a Gaussian process is established. First, the consensus problem is modeled as a stochastic process to consider uncertain initial states and noisy information flow over the network. Next, necessary conditions for decentral inference are derived, two decentral approximative inference protocols are developed, and the dependency between communication density and approximation error is presented. Furthermore, a provably convergent and computationally efficient Gaussian consensus protocol is realized. Finally, it is shown that, taking measurement noise into account, the Gaussian consensus protocol naturally extends to a decentralized Kalman filter for consensus systems.

I. INTRODUCTION

Decentralized estimation and coordination over large-scale networks of agents raise a number of problems that are not present in centralized networked systems. Nevertheless, decentralization is beneficial because of the following problems that occur in networks of central systems [1]: i) All information has to be sent to a central processing unit – if this processing unit fails, no agent in the network is able to estimate or to control. ii) The data transport of all agents to the center leads to congestion in the communication links. iii) Nodes toward the center work more intensely and run out of power much sooner than peripheral nodes, leading to a disconnected or a failed network. The solution is a decentral treatment that leads to the following restrictions [2]: i) There is no center that coordinates each agent. ii) The coordination of each agent is based only on local information.
iii) The amount of information exchange between agents and the computational power of the agents are limited.

Now, the behavior and performance of an agent is heavily dependent on who is communicating when to whom and in which quality [3]. Packet loss [5], time delay [6], quantization errors [4] and random transfer of the data [8] have already been considered. Nevertheless, the incorporation of uncertain data and its underlying probability distribution has not been discussed in a general, consistent way. Hence, explicit decentral treatment of types of uncertainty raises several unanswered questions.

In this paper, we use the consensus problem¹ introduced by DeGroot [11] as an example for cooperative behavior in networked MAS based on deterministic, accurate data and generalize it to stochastic, uncertain data. To this end, we extend the classical consensus dynamics to a consensus process. First, we show the link between communication structure and dynamic Bayesian networks (DBNs) and provide necessary conditions for decentral inference and filtering in DBNs. Based on these conditions, we propose extensions to existing approximate inference algorithms, e.g. the Boyen & Koller algorithm [12], such that they can be used for decentral inference over networks. Finally, we provide a provably convergent decentralized Gaussian consensus protocol for the same broad class of graph structures as for the deterministic consensus protocol. This overcomes a limitation of the work of Moallemi et al. [13], who derive a distributed Gaussian consensus only for d-regular graphs with d ≥ 2. Moreover, their algorithm is sensitive to a parameter that has to be chosen properly in order to reach convergence, which is not the case for our new distributed Gaussian consensus protocol. In addition, we present a decentralized Kalman consensus filter that also works for slow communication rates equal to the sensing/observation assimilation rate, which is not the case for known consensus filters, like [14]. Interestingly, the proposed distributed filter has several relations to parallel Bayesian filters formulated for computer vision problems, like optical flow [15], [16] and denoising [17].

The authors are with the Institute of Automatic Control, Control Theory and Robotics Lab, Technical University of Darmstadt, 64283 Darmstadt, Germany. vwillert@rtr.tu-darmstadt.de, dhaumann@rtr.tu-darmstadt.de, sgering@rtr.tu-darmstadt.de

¹ Reviews about the history, characteristics, protocols and applications of the consensus problem can be found in [9], [10], [1].

A. Communication topology

The communication topology of a network of agents is represented using a directed graph G = (V, E). Each agent of a MAS with N agents is represented by a node v_n ∈ V(G) = {v_1, ..., v_N}. Each directed edge e_nm = (v_n, v_m) ∈ E(G) ⊆ V × V represents the possibility to communicate from agent v_n to agent v_m. All connections are summarized in the adjacency matrix A(G) = [a_nm] with a_nm = 1 if (v_m, v_n) ∈ E(G) and a_nm = 0 otherwise. The in-degree δ_n^in = Σ_{m≠n} a_nm defines how many agents communicate to agent v_n. The set of all neighboring nodes that send information to v_n is called N_n^in. The set of all neighboring nodes that receive information from v_n is denoted N_n^out, with out-degree δ_n^out = Σ_{m≠n} a_mn. The complete neighborhood of an agent is given by N_n = N_n^out ∪ N_n^in. Then, with D(G) = diag(δ_1^in, ..., δ_N^in), define the Laplacian matrix L(G) = D(G) − A(G). A directed graph G is called connected given node v_n if there exists a directed path from v_n to every node v_m ∈ V(G) with m ≠ n. The number of edges along a directed path is referred to as hop count h(v_n, v_m). There can be several paths between two nodes with equal or different hop counts. The union of all minimal paths of a directed
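The graph quantities defined above (adjacency matrix, in/out-degrees, Laplacian) and the classical deterministic consensus dynamics that this paper generalizes can be sketched numerically. The following is a minimal illustration, not the paper's method: the example graph, the step size ε, and the initial states are arbitrary choices for demonstration, and the iteration x(k+1) = x(k) − ε L(G) x(k) is the standard discrete-time consensus protocol from the literature surveyed in [9], [10], [1].

```python
import numpy as np

# Illustrative graph (not from the paper): a bidirectional ring of N = 4 agents.
# Each directed edge e_nm = (v_n, v_m) means v_n communicates to v_m.
N = 4
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2), (3, 0), (0, 3)]

# Adjacency convention from the text: a_nm = 1 if (v_m, v_n) in E(G),
# i.e. row n of A collects the agents that send information to v_n.
A = np.zeros((N, N))
for (n, m) in edges:
    A[m, n] = 1.0

delta_in = A.sum(axis=1)     # in-degrees  δ_n^in  = Σ_{m≠n} a_nm
delta_out = A.sum(axis=0)    # out-degrees δ_n^out = Σ_{m≠n} a_mn
L = np.diag(delta_in) - A    # Laplacian L(G) = D(G) − A(G); rows sum to zero

# Classical discrete-time consensus iteration x(k+1) = x(k) − ε L x(k).
# For this balanced, connected graph a step size ε < 1 / max δ_n^in
# guarantees convergence of all states to the initial average.
eps = 0.4
x = np.array([1.0, 2.0, 3.0, 6.0])   # deterministic, accurate initial states
for _ in range(200):
    x = x - eps * (L @ x)

print(x)   # all entries approach the average 3.0 of the initial states
```

Because the example graph is bidirectional (hence balanced) and connected, the agents agree on the average of their initial states; the paper's contribution is to replace these accurate states with probability distributions and still reach consensus.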