SUBMITTED TO IEEE SIGNAL PROCESSING LETTERS

Reweighted Low-Rank Tensor Decomposition based on t-SVD and its Applications in Video Denoising

Baburaj M., Student Member, IEEE, and Sudhish N. George, Member, IEEE

Abstract—The t-SVD based Tensor Robust Principal Component Analysis (TRPCA) decomposes a low-rank multi-linear signal corrupted by gross errors into low-multi-rank and sparse components by simultaneously minimizing the tensor nuclear norm and the l1 norm. However, if the multi-rank of the signal is considerably large and/or a large amount of noise is present, the performance of TRPCA deteriorates. To overcome this problem, this paper proposes a new, efficient iterative reweighted tensor decomposition scheme based on t-SVD, which significantly improves the tensor multi-rank minimization in TRPCA. Further, the sparse component of the tensor is recovered via a reweighted l1 norm, which enhances the accuracy of the decomposition. The effectiveness of the proposed method is established by applying it to the video denoising problem, and the experimental results reveal that the proposed algorithm outperforms its counterparts.

Index Terms—Low-rank Tensor Decomposition, Sparsity Enhancement, Tensor Robust Principal Component Analysis, Video Denoising

I. INTRODUCTION

All natural multi-linear signals such as images and videos inherently possess a low-rank structure [2], [6], [11], [13]. A corrupted image or video can be recovered with high accuracy by regularizing its rank [2], [6], [10], [11], [13]. The low-rank tensor decomposition problem is defined as decomposing observed multi-linear data M, corrupted by gross errors, into a low-rank component L and a sparse component S so that M = L + S. The major challenge in this area is to formulate the tensor rank, and different tensor-algebra frameworks propose different definitions of it. The CANDECOMP/PARAFAC (CP) model [15], [18] factorizes a tensor into a sum of rank-1 tensors, but it suffers from the degeneracy of solutions.
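As a concrete illustration (not part of the paper's method), the CP model's sum-of-rank-1-terms structure for a 3-way tensor can be sketched in NumPy; the sizes and variable names below are ours:

```python
import numpy as np

# CP model: a 3-way tensor X is written as a sum of R rank-1 terms,
# each the outer product of three vectors a_r, b_r, c_r.
rng = np.random.default_rng(0)
R, n1, n2, n3 = 2, 4, 5, 6
A = rng.standard_normal((n1, R))
B = rng.standard_normal((n2, R))
C = rng.standard_normal((n3, R))

# X = sum_r a_r (outer) b_r (outer) c_r
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Each term is rank-1: any frontal slice of a single term is a rank-1 matrix.
term0 = np.einsum('i,j,k->ijk', A[:, 0], B[:, 0], C[:, 0])
print(np.linalg.matrix_rank(term0[:, :, 0]))  # 1
```

For generic (random) factor vectors, each term contributes exactly one to the CP rank, which is why the smallest such R defines the CP rank of X.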
The Tucker model [16] extends the idea of matrix rank to rank-N for an N-dimensional tensor. Motivated by these concepts, Kilmer et al. [8] proposed a new tensor framework based on circulant algebra: a tensor-Singular Value Decomposition (t-SVD) based on the Fourier transform, together with the notion of tensor multi-rank. Building on t-SVD and tensor multi-rank, Zhang et al. [7] introduced the Tensor Nuclear Norm (TNN) and demonstrated its video completion capabilities. Hu et al. [3] modified the TNN technique into the Twist Tensor Nuclear Norm (t-TNN) to improve the tensor completion performance on panning videos. The Tensor Robust Principal Component Analysis (TRPCA) [9], [10] problem is stated as

min_{L,S} ||L||_TNN + λ ||S||_1   subject to   M = L + S    (1)

where ||·||_TNN is the tensor nuclear norm based on t-SVD and λ is the regularization parameter. Zhang et al. [9] proposed a solution to Eq. (1) and demonstrated multi-linear data recovery from sparse noise. Lu et al. [10] modified this solution via a convex optimization technique.

Candes et al. [17] showed a remarkable improvement in the sparse recovery or estimation of signals by minimizing a weighted l1 norm. Inspired by reweighted l1 minimization for sparsity enhancement [17], Peng et al. implemented a reweighted low-rank matrix recovery [5], and this technique was successfully applied in different image restoration problems. Even though the above-mentioned TRPCA algorithms work well in many low-rank tensor recovery situations, their performance is limited, particularly when the tensor rank is quite large. The decomposition procedure mainly relies on the accuracy of the measurements of the rank and sparsity of the tensor. Both parameters are measured indirectly in TRPCA, via the nuclear norm and the l1 norm respectively, but these measurements are only approximate when the intrinsic rank of the tensor is considerably large and/or the tensor is corrupted by dense errors.
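The two penalties in Eq. (1) and the reweighting idea of [17] can be sketched as follows. This is a hedged illustration assuming the usual t-SVD convention (FFT along the third mode, singular values of the Fourier-domain frontal slices summed, with no normalization applied); the function names are ours, not the paper's:

```python
import numpy as np

def tensor_nuclear_norm(X):
    """TNN sketch: FFT along the third mode, then sum the singular
    values of every frontal slice in the Fourier domain.
    (Normalization conventions differ across papers; none is used here.)"""
    Xf = np.fft.fft(X, axis=2)
    return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
               for k in range(X.shape[2]))

def reweighted_l1_weights(S, eps=1e-3):
    """Reweighted l1 step in the spirit of Candes et al. [17]: weights
    inversely proportional to current magnitudes, so large entries are
    penalized less and small entries are pushed harder toward zero."""
    return 1.0 / (np.abs(S) + eps)
```

As a sanity check on the convention, for a tensor with a single frontal slice (n3 = 1) the FFT is the identity, and `tensor_nuclear_norm` reduces to the matrix nuclear norm of that slice.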
In order to improve the performance of tensor recovery techniques, this paper proposes a reweighting scheme for the tensor singular values which combines sparsity enhancement and low-rank tensor decomposition techniques.

The rest of this paper is organized as follows. Section II gives preliminaries on tensors and the notations used throughout this paper. Section III describes the proposed reweighted low-rank tensor decomposition technique in detail. Experimental results and analysis are presented in Section IV to verify the effectiveness of the proposed method. Concluding remarks are given in Section V.

II. PRELIMINARIES ON TENSORS AND NOTATIONS

This document uses Euler script letters (e.g. X) to denote tensors, bold upper-case letters (e.g. M) for matrices, bold lower-case letters (e.g. v) for vectors, and lower-case letters (e.g. k) for scalars. A tensor is a multi-linear structure [2], [4], [6] in R^{n1×n2×...×nN}. A vector is a first-order tensor, a matrix is a second-order tensor, and multi-linear data of order three or above are called higher-order tensors. A slice of a tensor is a 2D section defined by fixing all but two indices [4]. MATLAB colon notation is used to specify the sub-tensors of a tensor, e.g. for a 3-way tensor X, the k-th horizontal, lateral and frontal slices are given by X(k,:,:), X(:,k,:) and X(:,:,k) respectively. A fiber of a tensor is a 1D section defined by fixing all indices but one; X(:,i,j), X(i,:,j) and X(i,j,:) are the mode-1, mode-2 and mode-3 (tube) fibers, respectively.

arXiv:1611.05963v2 [cs.CV] 12 Jan 2017
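The slice and fiber conventions above translate to NumPy (0-based indexing) as follows; the variable names are illustrative only:

```python
import numpy as np

# 3-way tensor X of size n1 x n2 x n3 (here 2 x 3 x 4).
X = np.arange(2 * 3 * 4).reshape(2, 3, 4)

horizontal = X[0, :, :]   # X(1,:,:)  horizontal slice, size n2 x n3
lateral    = X[:, 0, :]   # X(:,1,:)  lateral slice,    size n1 x n3
frontal    = X[:, :, 0]   # X(:,:,1)  frontal slice,    size n1 x n2

col_fiber  = X[:, 0, 0]   # X(:,1,1)  mode-1 fiber,        length n1
row_fiber  = X[0, :, 0]   # X(1,:,1)  mode-2 fiber,        length n2
tube_fiber = X[0, 0, :]   # X(1,1,:)  mode-3 (tube) fiber, length n3
```

The frontal slices X(:,:,k) are the ones the t-SVD operates on after the Fourier transform along the third mode.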