A unifying point of view on expressive power of GNNs

Giuseppe Alessio D'Inverno 1,c, Monica Bianchini 1, Maria Lucia Sampoli 1, Franco Scarselli 1

June 18, 2021

1 – DIISM, University of Siena, via Roma 56, 53100 Siena, Italy
c – Corresponding author, email: dinverno@diism.unisi.it

Abstract

Graph Neural Networks (GNNs) are a wide class of connectionist models for graph processing. They perform an iterative message passing operation on each node and its neighbors, in order to solve classification/clustering tasks — on some nodes or on the whole graph — collecting all such messages regardless of their order. Despite the differences among the various models belonging to this class, most of them adopt the same computation scheme, based on a local aggregation mechanism, and, intuitively, this local computation framework is mainly responsible for the expressive power of GNNs. In this paper, we prove that the Weisfeiler–Lehman test induces an equivalence relation on the graph nodes that exactly corresponds to the unfolding equivalence defined on the original GNN model. Therefore, the results on the expressive power of the original GNNs can be extended to general GNNs which, under mild conditions, can be proved capable of approximating, in probability and up to any precision, any function on graphs that respects the unfolding equivalence.

1 Introduction

Graph processing is becoming pervasive in many application domains, such as social networks, Web applications, biology, and finance. Indeed, graphs can capture high-valued relationships in data that would otherwise be lost. Graph Neural Networks (GNNs) are a class of machine learning models that can process information represented in the form of graphs. In recent years, interest in GNNs has grown rapidly, and numerous new models and applications have emerged [18]. The first GNN model was introduced in [16].
Later, several other approaches have been proposed, including Spectral Networks [5], Gated Graph Sequence Neural Networks [12], Graph Convolutional Neural Networks [10], GraphSAGE [6], Graph Attention Networks [17], and Graph Networks [3].

arXiv:2106.08992v2 [cs.LG] 17 Jun 2021
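The Weisfeiler–Lehman (1-WL) test that the abstract refers to partitions the nodes of a graph by iterated color refinement: each node's color is repeatedly replaced by a signature formed from its current color and the multiset of its neighbors' colors, until the partition stabilizes. The following is a minimal illustrative sketch of this refinement (the function names and the dictionary-based adjacency representation are our own choices, not taken from the paper):

```python
def wl_refinement(adj, max_iters=100):
    """1-WL color refinement.
    adj: dict mapping each node to a list of its neighbors.
    Returns a dict node -> integer color; nodes with equal colors
    are indistinguishable by the 1-WL test.
    """
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(max_iters):
        # New signature = (own color, sorted multiset of neighbor colors);
        # sorting makes the aggregation order-independent, as in GNN
        # message passing.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Compress signatures back to small integer colors.
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        if _classes(new_colors) == _classes(colors):  # partition is stable
            return new_colors
        colors = new_colors
    return colors

def _classes(coloring):
    """Group nodes into equivalence classes by color."""
    groups = {}
    for v, c in coloring.items():
        groups.setdefault(c, set()).add(v)
    return set(map(frozenset, groups.values()))
```

On a path graph 0–1–2, for instance, the two endpoints receive the same final color while the middle node receives a different one, i.e., the endpoints are 1-WL equivalent; on a triangle, all three nodes share one color. This stable partition is exactly the equivalence relation on nodes that, as the paper argues, coincides with the unfolding equivalence of the original GNN model.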