Embedding alignment methods in dynamic networks
Kamil Tagowski
kamil.tagowski@pwr.edu.pl
Department of Computational
Intelligence, Wroclaw University of
Science and Technology
Wrocław, Poland
Piotr Bielak
piotr.bielak@pwr.edu.pl
Department of Computational
Intelligence, Wroclaw University of
Science and Technology
Wrocław, Poland
Tomasz Kajdanowicz
tomasz.kajdanowicz@pwr.edu.pl
Department of Computational
Intelligence, Wroclaw University of
Science and Technology
Wrocław, Poland
ABSTRACT
In recent years, dynamic graph embedding has attracted a lot of
attention due to its usefulness in real-world scenarios. In this paper,
we consider discrete-time dynamic graph representation learning,
where embeddings are computed for each time window, and then
are aggregated to represent the dynamics of a graph. However, independently computed embeddings in consecutive windows suffer
from the stochastic nature of representation learning algorithms
and are algebraically incomparable. We underline the need for an
embedding alignment process and provide nine alignment techniques
evaluated on real-world datasets in link prediction and graph recon-
struction tasks. Our experiments show that alignment of Node2vec
embeddings improves the performance of downstream tasks by up to
11 pp compared to the non-aligned scenario.
CCS CONCEPTS
• Computing methodologies → Learning latent representations.
KEYWORDS
dynamic graphs, graph embedding, embedding alignment
ACM Reference Format:
Kamil Tagowski, Piotr Bielak, and Tomasz Kajdanowicz. 2021. Embedding
alignment methods in dynamic networks. In Woodstock ’18: ACM Symposium
on Neural Gaze Detection, June 03–05, 2018, Woodstock, NY. ACM, New York,
NY, USA, 7 pages. https://doi.org/10.1145/1122445.1122456
1 INTRODUCTION
Node representation learning is pervasive across multiple appli-
cations, like social networks [12, 20], spatial networks [23, 24] or
citation networks [8, 20]. The vast majority of node embedding
methods are trained in an unsupervised manner, providing an auto-
mated way of discovering node representations for static networks.
In many real-world scenarios, the network structure evolves and
node embeddings depend on such dynamics. However, the body of
knowledge on dynamic graph node embedding methods is still
rather small [3].
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from permissions@acm.org.
Woodstock ’18, June 03–05, 2018, Woodstock, NY
© 2021 Association for Computing Machinery.
ACM ISBN 978-1-4503-XXXX-X/18/06. . . $15.00
https://doi.org/10.1145/1122445.1122456
Dynamic graph embedding can be performed in two settings:
continuous-time and discrete-time. The first handles single events
that trigger updates of node embeddings. The latter, which is more
commonly utilized, involves aggregating graph data into snapshots
and computing embeddings for each of them. Such snapshot
embeddings are further combined into a single node embedding
that captures the whole graph evolution. Unfortunately, such
decomposition of the embedding process suffers from the stochastic
nature of representation learning algorithms. Embeddings of
consecutive snapshots are algebraically incomparable due to the
transformations (artifacts) induced by the embedding methods.
Therefore, there exists a research gap in how to deal with these
unwanted transformations. The expected outcome is to map
embeddings from particular snapshots into a common space. This
can be achieved by embedding alignment methods, which mitigate
such transformations and make embeddings comparable across
consecutive snapshots. Performing downstream tasks on non-aligned
node embedding vectors may yield inconclusive results.
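The incomparability described above, and one standard remedy, can be sketched in a few lines. The snippet below is an illustrative sketch (not the implementation evaluated in this paper): it simulates two snapshot embeddings that differ only by a random orthogonal rotation, the kind of artifact a stochastic embedding method can induce, and removes it by solving the Orthogonal Procrustes problem via an SVD.

```python
import numpy as np

def procrustes_align(emb_t, emb_ref):
    """Map emb_t into the space of emb_ref using the orthogonal matrix Q
    that minimizes ||emb_t @ Q - emb_ref||_F (Orthogonal Procrustes).
    Assumes both matrices list the same nodes in the same row order."""
    u, _, vt = np.linalg.svd(emb_t.T @ emb_ref)
    return emb_t @ (u @ vt)

# Two "snapshot" embeddings that differ only by a random orthogonal
# rotation: structurally identical, yet algebraically incomparable.
rng = np.random.default_rng(0)
z_t = rng.normal(size=(100, 16))                   # snapshot t embeddings
rot, _ = np.linalg.qr(rng.normal(size=(16, 16)))   # random orthogonal matrix
z_t1 = z_t @ rot                                   # snapshot t+1 embeddings

aligned = procrustes_align(z_t1, z_t)
print(np.allclose(aligned, z_t))  # True: the rotation artifact is removed
```

Here the artifact is recovered exactly because the two matrices differ only by an orthogonal transformation; real consecutive snapshots also differ structurally, so alignment reduces rather than eliminates the discrepancy.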
In this paper, we focus on several node embedding alignment
methods that allow finding a unified representation for nodes in
dynamic networks using static network embedding approaches (in
our case: node2vec). Based on extensive experiments on several
real-world datasets, we demonstrate that node embedding alignment
is crucial and increases performance by up to 11 pp compared to
non-aligned embeddings (node2vec). We summarize our contributions
as follows: (1) We propose nine embedding alignment methods for
graphs. (2) We provide a comprehensive evaluation showing that
alignment is an indispensable operation in discrete-time dynamic
graph embedding when dealing with node2vec embeddings. (3)
Additionally, in Appendix B, we formulate aligner performance
measures (AMPs) for evaluating alignment algorithms regardless of
the downstream tasks.
2 RELATED WORKS
The literature on static node embedding methods is very rich [3].
We can distinguish approaches based on random walks (Node2vec
[12], metapath2vec [8]), graph neural networks (GCN [13], GAT
[22]), and matrix factorization (LLE [18], HOPE [16]). Despite being
very powerful concepts, their applicability to dynamic graph
embedding is limited. Embedding alignment is a tool that makes
static embeddings usable in the dynamic setting. Indeed, embedding
alignment is crucial in many machine learning areas, e.g., machine
translation [11], cross-graph alignment [4, 5, 7], and dynamic graph
embedding [2, 19, 21]. Embedding alignment techniques are often
based on solving the Orthogonal Procrustes problem to obtain a
linear transformation between pairs of embeddings [5]. We can also
distinguish