Electronics — Article
Investigating Transfer Learning in Graph Neural Networks
Nishai Kooverjee 1,*, Steven James 1 and Terence van Zyl 2
Citation: Kooverjee, N.; James, S.; van Zyl, T. Investigating Transfer Learning in Graph Neural Networks. Electronics 2022, 11, 1202. https://doi.org/10.3390/electronics11081202
Academic Editor: Gemma Piella
Received: 28 January 2022
Accepted: 6 March 2022
Published: 9 April 2022
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1 School of Computer Science and Applied Mathematics, University of the Witwatersrand, Johannesburg 2000, South Africa; steven.james@wits.ac.za
2 Institute for Intelligent Systems, University of Johannesburg, Johannesburg 2092, South Africa; tvanzyl@uj.ac.za
* Correspondence: nishai.kooverjee@gmail.com
Abstract: Graph neural networks (GNNs) build on the success of deep learning models by extending
them for use in graph spaces. Transfer learning has proven extremely successful for traditional deep
learning problems, resulting in faster training and improved performance. Despite the increasing
interest in GNNs and their use cases, there is little research on their transferability. This research
demonstrates that transfer learning is effective with GNNs, and describes how source tasks and the
choice of GNN impact the ability to learn generalisable knowledge. We perform experiments using
real-world and synthetic data within the contexts of node classification and graph classification. To
this end, we also provide a general methodology for transfer learning experimentation and present a
novel algorithm for generating synthetic graph classification tasks. We compare the performance of
GCN, GraphSAGE and GIN across both synthetic and real-world datasets. Our results demonstrate
empirically that GNNs with inductive operations yield statistically significantly improved transfer.
Further, we show that similarity in community structure between source and target tasks supports
statistically significant improvements in transfer over and above the use of node attributes alone.
Keywords: graph neural networks; machine learning; transfer learning; multi-task learning
1. Introduction and Related Work
Deep learning has achieved success in a wide variety of problems, ranging from time-
series data to images and video [1]. Data from these tasks are referred to as Euclidean [2],
and specialised models such as recurrent and convolutional neural networks [3–5] have
been designed to leverage the properties of such data.
Despite these successes, not all problems are Euclidean. One particular class of such
problems involves graphs, which naturally model complex real-world settings involving
objects and their relationships. Recently, deep learning approaches have been extended to
graph-based domains using graph neural networks (GNNs) [6], which leverage certain
topological structures and properties specific to graphs [2]. Since graphs comprise entities
and the relationships between them, GNNs are said to learn relational information and
may have the capacity for relational reasoning [7].
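To make the message-passing idea concrete, the following is a minimal NumPy sketch of the symmetrically normalised propagation rule used by GCN, one of the architectures compared later. The toy graph, feature dimensions, and weight values are purely illustrative, not taken from the paper's experiments.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)                      # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalisation
    return np.maximum(A_norm @ H @ W, 0.0)     # aggregate, transform, ReLU

# Toy example: a 3-node path graph (0-1-2), 2-dim node features, 4 hidden units
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
H_next = gcn_layer(A, H, W)   # shape (3, 4)
```

Each node's new representation mixes its own features with those of its neighbours, which is the relational information the paper investigates transferring.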
One reason for the success of deep learning models is their ability to transfer previous
learning to new tasks. In image classification, this transfer leads to more robust models and
faster training [8–13]. Despite the importance of transfer in deep learning, there has been
little insight into the nature of transferring relational knowledge—that is, the representations
learnt by graph neural networks. There is also no comparison of the generalisability of
different GNNs when evaluated on downstream task performance. This lack of insight
is in part due to the lack of a model-agnostic and task-agnostic framework and standard
benchmark datasets and tasks for carrying out transfer learning experiments with GNNs.
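The standard transfer-learning recipe referenced above, pre-train on a source task, reuse the learnt representations, and fine-tune on the target task, can be sketched as follows. The layer sizes, class counts, and parameter names here are hypothetical placeholders; the training loops themselves are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(sizes):
    """Random weight matrices for a simple feed-forward model."""
    return {f"W{i}": rng.standard_normal((m, n)) * 0.1
            for i, (m, n) in enumerate(zip(sizes[:-1], sizes[1:]))}

# 1. Pre-train on the source task (training loop omitted for brevity);
#    16-dim inputs, 32 hidden units, 8 source classes.
source_params = init_params([16, 32, 8])

# 2. Transfer: copy the learnt feature extractor, but reinitialise the
#    output head to match the target task's label space (here 3 classes).
target_params = {k: v.copy() for k, v in source_params.items()}
target_params["W1"] = rng.standard_normal((32, 3)) * 0.1

# 3. Fine-tune target_params on the target task; the transferred layers
#    may be frozen or trained with a reduced learning rate.
```

The experiments in this paper apply this same recipe with GNN layers in place of the feed-forward ones, comparing how well different architectures' representations carry over.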