1266 IEEE SIGNAL PROCESSING LETTERS, VOL. 25, NO. 8, AUGUST 2018
A Computationally Efficient Tensor Completion
Algorithm
Ioannis C. Tsaknakis, Paris V. Giampouras, Athanasios A. Rontogiannis, and Konstantinos D. Koutroumbas
Abstract—We introduce a tensor completion algorithm that uses a group-sparse regularizer with respect to the PARAFAC factors and is based on an optimization scheme that alternatingly minimizes a quadratic upper bound of the associated cost function. The proposed scheme allows matrixwise updates of the PARAFAC factors and, thus, leads to an efficient and scalable iterative algorithm, suitable for big-data applications. Experiments conducted on both synthetic and real data corroborate the superior performance, in terms of runtime, of the proposed algorithm as compared with other state-of-the-art approaches.
Index Terms—BSUM framework, group-sparse regularization,
PARAFAC decomposition, tensor completion.
I. INTRODUCTION

RECENTLY, high-dimensional data generated by a wealth of machine-learning applications are naturally represented by a multidimensional array, commonly named a tensor [1]. Such applications include recommendation systems [2], [3], radar-signal processing [4], video processing [5], [6], and topic models [7]. Furthermore, the context in which we are called to perform relevant tasks is often characterized by huge volumes of data suffering from corruptions and missing elements. The processing and extraction of information from highly incomplete and large-scale data sets can be achieved with the development of scalable and computationally efficient tensor completion algorithms that exploit the low rank inherent in big tensor data.

The first approach for performing tensor completion entails an optimization task that uses the rank as a regularizer. However, the computation of the tensor rank is an NP-hard problem [8] and, therefore, this approach does not offer a feasible path. Even so, there are other ways of recovering a low-rank tensor, such as matricizing the tensor and applying matrix-completion algorithms [9], [10], or defining a tensor nuclear norm and formulating the respective regularized optimization task [11]. Recently, there has been a line of work that uses suitable rank-penalization regularizers based on the widespread parallel factor analysis (PARAFAC) decomposition of tensors [12], [13].
Manuscript received April 2, 2018; revised June 12, 2018; accepted June 27, 2018. Date of publication July 2, 2018; date of current version July 13, 2018. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Joao Paulo Papa. (Corresponding author: Ioannis C. Tsaknakis.)
The authors are with the Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, National Observatory of Athens, Penteli 15236, Greece (e-mail: i.tsaknak@gmail.com; parisg@noa.gr; tronto@noa.gr; koutroum@noa.gr).
Digital Object Identifier 10.1109/LSP.2018.2852490
In this letter, we propose a novel tensor completion algorithm based on the PARAFAC decomposition. Specifically, inspired by [14], the formulation proposed in [13] is now suitably modified, giving rise to an efficient alternating minimization algorithm. The proposed framework entails a suitably chosen quadratic approximation of the initial cost function and leads to an iterative scheme that enables matrixwise updates (in contrast to the computationally expensive column-/row-wise updates) of the PARAFAC factor matrices at each iteration. To the best of our knowledge, this possibility appears for the first time in the relevant literature and results in an efficient and scalable algorithm capable of handling instances that arise in modern big-data applications. Furthermore, the mechanism with which low rankness is imposed allows us to implement a column pruning (CP) procedure that dynamically erases factor matrix columns as they become approximately zero. We perform experiments that illustrate the superior performance of the new algorithm in terms of runtime, without sacrificing accuracy, as compared with state-of-the-art tensor completion algorithms.
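The column pruning idea can be illustrated with a minimal NumPy sketch. Note that the pruning criterion below (the product of the three column norms of a rank-1 component falling under a tolerance) is an assumption made for illustration; the letter does not spell out its exact rule here, and the function name `prune_zero_columns` is hypothetical.

```python
import numpy as np

def prune_zero_columns(A, B, C, tol=1e-8):
    """Drop rank-1 components whose factor columns have become negligible.

    A component r is kept only if the product of the Euclidean norms of
    A[:, r], B[:, r], and C[:, r] exceeds tol; a zero column in any factor
    zeroes out the corresponding rank-1 term, so it can be erased.
    (Illustrative criterion, not necessarily the paper's exact rule.)
    """
    energy = (np.linalg.norm(A, axis=0)
              * np.linalg.norm(B, axis=0)
              * np.linalg.norm(C, axis=0))
    keep = energy > tol
    return A[:, keep], B[:, keep], C[:, keep]

# Second component has an (almost) zero column in A, so it is pruned.
A = np.array([[1.0, 0.0], [2.0, 0.0]])
B = np.array([[1.0, 1.0], [1.0, 0.5]])
C = np.array([[2.0, 3.0]])
A2, B2, C2 = prune_zero_columns(A, B, C)
print(A2.shape[1])  # 1 surviving component
```

Pruning shrinks all three factor matrices simultaneously, so subsequent iterations work with smaller matrices and become cheaper.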
Notation: We denote a vector, a matrix, and a tensor with the symbols $\mathbf{x}$, $\mathbf{X}$, and $\mathcal{X}$, respectively. Also, we use the symbols $*$, $\circledcirc$, $\otimes$, $\odot$, and $\mathrm{vec}\{\cdot\}$ for the Hadamard product, outer product, Kronecker product, Khatri–Rao product, and row-vectorization operation, respectively. We use the standard notation for vector and matrix norms, i.e., $\|\cdot\|_F$ for the Frobenius norm, and $\|\mathbf{a}\|_p = \left(\sum_{i=1}^{n} |a_i|^p\right)^{1/p}$ for the vector $\ell_p$ norm ($p > 0$), where $\mathbf{a} = [a_1, a_2, \ldots, a_n]^T$. Moreover, we use the matrix $\ell_{1,2}$ group-sparse norm, i.e., $\|\mathbf{A}\|_{1,2} = \sum_{i=1}^{n} \|\mathbf{a}_i\|_2$, where $\mathbf{a}_i$, $1 \le i \le n$, are the columns of $\mathbf{A} \in \mathbb{R}^{m \times n}$.
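As a quick sanity check on the $\ell_{1,2}$ definition, a minimal NumPy sketch (the helper name `group_sparse_norm` is ours, not the paper's):

```python
import numpy as np

def group_sparse_norm(A):
    """Matrix l_{1,2} norm: the sum of the Euclidean norms of the columns of A."""
    return float(np.sum(np.linalg.norm(A, axis=0)))

A = np.array([[3.0, 0.0],
              [4.0, 0.0]])
# Columns are (3, 4) with norm 5 and (0, 0) with norm 0.
print(group_sparse_norm(A))  # 5.0
```

Because it sums unsquared column norms, this penalty drives entire columns to zero rather than individual entries, which is exactly what makes column pruning possible.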
Preliminaries: We can decompose a tensor $\mathcal{X} \in \mathbb{R}^{I \times J \times K}$ into a sum $\mathcal{X} = \sum_{r=1}^{R} \mathbf{a}_r \circledcirc \mathbf{b}_r \circledcirc \mathbf{c}_r$, or equivalently $\mathcal{X}(i,j,k) = \sum_{r=1}^{R} \mathbf{a}_r(i)\,\mathbf{b}_r(j)\,\mathbf{c}_r(k)$, where $\mathbf{a}_r \in \mathbb{R}^{I}$, $\mathbf{b}_r \in \mathbb{R}^{J}$, and $\mathbf{c}_r \in \mathbb{R}^{K}$. This is called the PARAFAC decomposition of $\mathcal{X}$. The minimum number of terms in the PARAFAC decomposition with which we can express a given tensor is the PARAFAC rank, or simply the rank, of that tensor. Moreover, we can organize the vectors $\{\mathbf{a}_r \in \mathbb{R}^{I}\}_{r=1}^{R}$, $\{\mathbf{b}_r \in \mathbb{R}^{J}\}_{r=1}^{R}$, and $\{\mathbf{c}_r \in \mathbb{R}^{K}\}_{r=1}^{R}$ as columns of the matrices $\mathbf{A} \in \mathbb{R}^{I \times R}$, $\mathbf{B} \in \mathbb{R}^{J \times R}$, and $\mathbf{C} \in \mathbb{R}^{K \times R}$, respectively, and denote the PARAFAC decomposition, for convenience, as $\mathcal{X} = [\![\mathbf{A}, \mathbf{B}, \mathbf{C}]\!]$.
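The elementwise PARAFAC formula translates directly into an einsum contraction. A minimal sketch in NumPy (the helper name `parafac_reconstruct` is ours):

```python
import numpy as np

def parafac_reconstruct(A, B, C):
    """Build X(i,j,k) = sum_r A[i,r] * B[j,r] * C[k,r] from factor matrices."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

X = parafac_reconstruct(A, B, C)  # a tensor of rank at most R
print(X.shape)  # (4, 5, 6)
```

By construction, `X` here has rank at most `R`; a completion algorithm runs this logic in reverse, estimating `A`, `B`, `C` from a partially observed `X`.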
One notion that will prove very useful is matricization, or tensor unfolding. There are three standard ways to unfold a three-way tensor $\mathcal{X}$, depending on the way we position the slabs of the tensor. We can express these matricizations in terms of the PARAFAC factor matrices as $\mathbf{X}_{(1)} = (\mathbf{C} \odot \mathbf{B})\mathbf{A}^T$, $\mathbf{X}_{(2)} = (\mathbf{C} \odot \mathbf{A})\mathbf{B}^T$, and $\mathbf{X}_{(3)} = (\mathbf{B} \odot \mathbf{A})\mathbf{C}^T$.
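The mode-1 identity $\mathbf{X}_{(1)} = (\mathbf{C} \odot \mathbf{B})\mathbf{A}^T$ can be verified numerically. The sketch below assumes one particular row-ordering convention for the unfolding (rows indexed by $(k, j)$ pairs, $k$ varying slowest); the `khatri_rao` helper is written inline so the example is self-contained:

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker (Khatri-Rao) product of U (m x R) and V (n x R)."""
    m, R = U.shape
    n = V.shape[0]
    return (U[:, None, :] * V[None, :, :]).reshape(m * n, R)

rng = np.random.default_rng(1)
I, J, K, R = 3, 4, 5, 2
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Mode-1 unfolding with rows indexed by (k, j): entry (k*J + j, i) = X(i, j, k).
X1 = X.transpose(2, 1, 0).reshape(K * J, I)
print(np.allclose(X1, khatri_rao(C, B) @ A.T))  # True
```

The other two unfoldings follow the same pattern with the roles of the factors permuted; these identities are what allow the factor matrices to be updated one whole matrix at a time.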