CoGNet: Cooperative Graph Neural Networks

Peibo Li*, Yixing Yang†, Maurice Pagnucco‡, Yang Song§
School of Computer Science and Engineering, The University of New South Wales, Sydney, Australia
* peibo.li@student.unsw.edu.au  † yixing.yang@unsw.edu.au  ‡ morri@cse.unsw.edu.au  § yang.song1@unsw.edu.au

Abstract—Graph representation learning has received increasing attention in recent years for many real-world applications. A major challenge in graph representation learning is the lack of labeled data. To address this challenge, Graph Neural Networks (GNNs) use message passing frameworks to combine information from unlabeled data with labeled data. However, under the message passing framework the use of unlabeled data is indirect: unlabeled data does not supervise the training process. To fully exploit the potential of unlabeled data, we propose a novel dual-view cooperative training framework for graph data in which unlabeled data participates in training as supervision. Specifically, we regard the reasoning processes with which two GNN models make predictions as two views, integrating the understanding of different models of the underlying graph. To exchange information between the models, we design a pseudo-label-based approach in which the two models mutually provide pseudo labels to each other iteratively. Moreover, to ensure the quality of the pseudo labels, we propose an entropy-based pseudo-label selection procedure, and we adopt GNNExplainer to visualize the different views in our framework. Our comprehensive experimental evaluation shows that our method can boost the performance of state-of-the-art models.

I. INTRODUCTION

The recent success of deep learning has boosted the development of Graph Neural Networks (GNNs) [1].
They have demonstrated strong performance on various graph representation learning tasks such as citation network analysis [2], recommendation in social networks [3], and drug discovery [4]. Generally, a GNN uses neural message passing, in which vector messages are exchanged between nodes and updated via neural networks. The message passing mechanism enables GNNs to gather information from unlabeled nodes. There are several GNN variants, such as Graph Convolutional Networks (GCNs) [5] and Graph Attention Networks (GATs) [2]: GCN is an efficient adaptation of Convolutional Neural Networks (CNNs) to graph data, and GAT is a GNN enhanced with the self-attention mechanism. Recent studies have developed more complex GNN models to achieve better generalization ability and boost performance. To name a few, GCNII [6] supports a deeper structure by solving the over-smoothing problem; GRAND [7] alleviates the overfitting problem by employing data augmentation and consistency regularization strategies.

One of the biggest problems in graph representation learning is the lack of labeled data. Because graphs vary in size and have complicated, irregular structures, it is difficult to label every single node. Therefore, most graph representation learning tasks fall into semi-supervised learning. Different from semi-supervised learning in the image domain, where each image is an independent object, nodes in a graph are connected and interact via edges. Existing GNN models have made use of this characteristic by passing messages between nodes [2], [5], [8]: the models aggregate information for the labeled nodes from their neighborhoods, which contain both labeled and unlabeled nodes, and are trained on the labeled nodes. However, under the message passing framework, the unlabeled nodes are only indirectly involved in training, by providing messages to other nodes.
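To make the indirect role of unlabeled nodes concrete, the following NumPy sketch implements a single GCN-style propagation step in the sense of [5]. This is an illustration under our own naming, not the paper's implementation: even when only one node is labeled, its representation already mixes in features of its unlabeled neighbors, yet those neighbors contribute no supervision signal.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN-style message passing step: add self-loops, apply
    symmetric degree normalization, aggregate neighbor features,
    then apply a shared linear map and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])            # adjacency with self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # normalized adjacency
    return np.maximum(A_norm @ H @ W, 0.0)    # aggregate, transform, ReLU

# Toy graph: 4 nodes on a path 0-1-2-3. Even if only node 0 is labeled,
# its output row depends on the features of unlabeled node 1 (and,
# after stacking layers, on nodes further away).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 5))  # input node features
W = np.random.default_rng(1).normal(size=(5, 3))  # learnable weights
Z = gcn_layer(A, H, W)
print(Z.shape)  # (4, 3)
```

Stacking k such layers lets each labeled node absorb information from its k-hop neighborhood, which is exactly the indirect use of unlabeled nodes discussed above.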
Hence, it is worth developing a method that fully exploits the potential of unlabeled nodes, i.e., one that cohesively involves them in the training process and lets them supervise the training.

To further improve the capacity of GNN models, some dual-view graph representation learning frameworks have been introduced. For instance, [9] developed a self-supervised approach for learning node- and graph-level representations by contrasting structural views of graphs. [10] developed a dual-view GNN for molecular property prediction: to enable the model to exploit node and edge information simultaneously, they use the original graph (node-central) and its line graph (edge-central) as two views for training. Generally, previous methods for dual-view learning over graphs generate views via different data augmentations.

In this paper, we propose a Cooperative dual-view Graph Neural Network, named CoGNet, for semi-supervised graph representation learning. A pseudo-label-based cooperative training approach is developed to propagate information between models, where the models mutually provide pseudo labels to each other iteratively. An entropy-based pseudo-label selection procedure is applied to ensure the quality of the pseudo labels. Our framework can be understood as using two GNN models to provide different perspectives on the same predictions, which essentially has similar effects to dual-view learning. In contrast to data augmentation-based view generation, the views in our framework represent the reasoning processes of different GNN models with which the models
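The entropy-based selection step can be sketched as follows. This is a simplified illustration under our own naming, not the paper's exact procedure; in particular, the entropy threshold is a hypothetical hyperparameter. One model's confident (low-entropy) predictions on unlabeled nodes become training labels for the other model, and vice versa.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of each row of a probability matrix."""
    return -(p * np.log(p + eps)).sum(axis=1)

def select_pseudo_labels(probs, threshold):
    """Keep only unlabeled nodes whose predictive entropy falls below
    the threshold; return their indices and argmax pseudo labels."""
    h = entropy(probs)
    idx = np.where(h < threshold)[0]
    return idx, probs[idx].argmax(axis=1)

# Toy softmax outputs of model A on 4 unlabeled nodes (3 classes).
probs_a = np.array([[0.90, 0.05, 0.05],   # confident  -> low entropy
                    [0.40, 0.35, 0.25],   # uncertain  -> high entropy
                    [0.05, 0.90, 0.05],   # confident  -> low entropy
                    [0.33, 0.34, 0.33]])  # near-uniform -> high entropy
idx, labels = select_pseudo_labels(probs_a, threshold=0.5)
print(idx, labels)  # [0 2] [0 1]
# Model B would then be trained on (idx, labels); model B's confident
# predictions are used symmetrically to supervise model A.
```

Filtering by entropy rather than accepting every prediction keeps low-quality pseudo labels from being exchanged between the two models.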