An algorithm for computing the Analytic Singular Value Decomposition

Drahoslava Janovská, Vladimír Janovský, and Kunio Tanabe

Abstract—A proof of convergence of a new continuation algorithm for computing the Analytic SVD of a large sparse parameter-dependent matrix is given. The algorithm itself was developed and numerically tested in [5].

Keywords—Analytic Singular Value Decomposition, large sparse parameter-dependent matrices, continuation algorithm of predictor-corrector type.

I. INTRODUCTION

A singular value decomposition (SVD) of a real matrix A ∈ R^{m×n}, m ≥ n, is a factorization A = U Σ V^T, where U ∈ R^{m×m} and V ∈ R^{n×n} are orthogonal matrices and Σ = diag(s_1, ..., s_n) ∈ R^{m×n}. The values s_i, i = 1, ..., n, are called singular values. They may be defined to be nonnegative and nonincreasing, see [4]. For computational tools and reliable software, see [4] and [1], respectively.

Let A depend smoothly on a parameter t ∈ R, t ∈ [a, b]. The aim is to construct a path of SVDs

A(t) = U(t) Σ(t) V(t)^T,    (1)

where U(t), Σ(t), and V(t) depend smoothly on t ∈ [a, b]. In [2], it is shown that real analytic matrix functions A = A(t) ∈ R^{m×n} on [a, b] are the right class in which to expect uniqueness of the decomposition. The notion of an Analytic Singular Value Decomposition (ASVD) is introduced for the decomposition (1): there exists a factorization (1) that interpolates the classical SVD defined at t = a, i.e.,
• the factors U(t), V(t), and Σ(t) are real analytic on [a, b],
• for each t ∈ [a, b], both U(t) ∈ R^{m×m} and V(t) ∈ R^{n×n} are orthogonal matrices and Σ(t) = diag(s_1(t), ..., s_n(t)) ∈ R^{m×n} is a diagonal matrix,
• at t = a, the matrices U(a), Σ(a), and V(a) are the factors of the classical SVD of the matrix A(a).

The diagonal entries s_i(t) ∈ R of Σ(t) are called singular values. Due to the requirement of smoothness, singular values may be negative and their ordering may be arbitrary.
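The classical SVD at t = a, which the ASVD interpolates, is available in standard libraries. The following minimal NumPy sketch (the matrix A is an arbitrary illustration, not taken from the paper) builds the factorization A = U Σ V^T with the conventions above, i.e. with nonnegative, nonincreasing singular values:

```python
import numpy as np

def classical_svd(A):
    # Classical SVD: A = U Sigma V^T with U, V orthogonal and
    # Sigma diagonal with nonnegative, nonincreasing entries.
    U, s, Vt = np.linalg.svd(A, full_matrices=True)  # s is nonincreasing
    m, n = A.shape
    Sigma = np.zeros((m, n))
    Sigma[:n, :n] = np.diag(s)
    return U, Sigma, Vt.T

# Illustrative 3x2 matrix (m = 3 >= n = 2).
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])
U, Sigma, V = classical_svd(A)

# The factorization holds and U, V are orthogonal.
assert np.allclose(U @ Sigma @ V.T, A)
assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(V.T @ V, np.eye(2))
```

Along a smooth path, these sign and ordering conventions are exactly what must be given up: the analytic factors may let singular values go negative or cross.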
II. RELATED WORK

The generic scenario is that the branches t ↦ s_i(t) of the singular values, i = 1, ..., n, may intersect only at isolated points, namely at the points where s_i(t) = s_j(t) or s_i(t) = −s_j(t) for i ≠ j, see [2], p. 8. Therefore, if the ASVD interpolates a classical SVD with positive and distinct singular values, then the ASVD is unique. If the initial singular values are multiple, then the multiplicity of the singular values is an invariant of the ASVD. In other words, if there are clusters of multiple singular values, then the dimension of these clusters does not change with t. Nevertheless, even in that case one can define the ASVD uniquely, see [2], the notion of a minimum variation path.

—
Drahoslava Janovská, Institute of Chemical Technology, Prague, Czech Republic, e-mail: janovskd@vscht.cz
Vladimír Janovský, Charles University, Prague, Czech Republic, e-mail: janovsky@karlin.mff.cuni.cz
Kunio Tanabe, Faculty of Science and Engineering, Waseda University, Tokyo, Japan, e-mail: tanabe.kunio@waseda.jp

As far as computation is concerned, an incremental technique is proposed in [2]: given a point on the path, one computes a classical SVD for a neighboring parameter value. Next, one computes permutation matrices which link this classical SVD to the next point on the path. The procedure is approximate, with a local error of order O(h^2), where h is the step size.

An alternative technique for computing the ASVD is presented in [9] and [10]: a non-autonomous vector field H : R × R^N → R^N of huge dimension N = n + n^2 + m^2 can be constructed in such a way that the solution of the initial value problem for the system x′ = H(t, x) is linked to the path of the ASVD. Moreover, [9] contributes to the analysis of the non-generic points, see [2], of the ASVD path. These points can, in fact, be interpreted as singularities of the vector field H.

In [8], two methods for computing the ASVD are presented and compared. The first one modifies the technique of [2].
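For orientation, a much-simplified version of such an incremental step can be sketched in NumPy. The matrix family A(t) and the sign-matching rule below are illustrative assumptions, not the actual procedure of [2]; in particular, the permutation matching needed at branch crossings is omitted:

```python
import numpy as np

def A(t):
    # Illustrative analytic family A(t) in R^{3x2} (not from the paper).
    return np.array([[3.0 + np.sin(t), 1.0],
                     [1.0, 3.0 * np.cos(t)],
                     [0.0, 2.0]])

def svd_continuation(a, b, h):
    # Incremental stepping in the spirit of [2]: at each parameter value
    # compute a classical SVD and match it to the previous factors.
    ts = np.arange(a, b + h / 2, h)
    U, s, Vt = np.linalg.svd(A(ts[0]), full_matrices=True)
    V = Vt.T
    path = [(U.copy(), s.copy(), V.copy())]
    for t in ts[1:]:
        Un, sn, Vnt = np.linalg.svd(A(t), full_matrices=True)
        Vn = Vnt.T
        for j in range(Vn.shape[1]):
            # Flip the sign of a (U, V) column pair to stay close to the
            # previous step; the product U Sigma V^T is unchanged.
            if U[:, j] @ Un[:, j] < 0:
                Un[:, j] *= -1.0
                Vn[:, j] *= -1.0
        U, s, V = Un, sn, Vn
        path.append((U.copy(), s.copy(), V.copy()))
    return ts, path
```

Since each step merely re-solves the classical SVD and matches factors a posteriori, the local error of order O(h^2) mentioned above is inherent to this class of schemes.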
The difference is in the treatment of "clusters" of singular values. To that end, the analytic polar decomposition (APD) is introduced; ASVD and APD are equivalent. Nevertheless, in the presence of "clusters", the uniqueness of the APD path is achieved very naturally (without solving an auxiliary ODE). The second method of [8] consists in solving an ODE as in [9], but using an implicit integration technique. The comparison clearly favors the former class of methods: the ODE integration, in spite of using an implicit scheme, lacks precision.

A continuation algorithm for computing the ASVD is presented in [5]. It follows a path of a few selected singular values and the corresponding left/right singular vectors. It is aimed at large sparse matrices. The continuation algorithm is of predictor-corrector type, see [3]. The relevant predictor is based on the Euler method, hence on an ODE solver; in this respect, there is a link to [9]. Nevertheless, the Newton-type corrector guarantees the solution to a prescribed precision. This answers the objection raised in the study [8].

In this paper, we review the above-mentioned continuation algorithm and supply details, namely the proof of Theorem 1 in [5].

World Academy of Science, Engineering and Technology 47, 2008
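The predictor-corrector structure can be illustrated on a generic scalar path-following problem. The defining equation F(t, x) = 0 below is a hypothetical stand-in for the SVD equations of [5]; the sketch shows only the Euler predictor / Newton corrector pattern, not the actual algorithm:

```python
def F(t, x):
    # Hypothetical defining equation F(t, x) = 0 whose solution path
    # x(t) we trace; [5] uses the SVD equations instead.
    return x**3 + x - t

def F_x(t, x):
    # Partial derivative of F with respect to x (always positive here,
    # so the path has no turning points).
    return 3.0 * x**2 + 1.0

def F_t(t, x):
    # Partial derivative of F with respect to t.
    return -1.0

def continuation(x0, a, b, h, newton_iters=5):
    # Generic predictor-corrector continuation, cf. [3]: an Euler
    # predictor along x'(t) = -F_t / F_x, then a Newton corrector
    # that restores F(t, x) = 0 to a prescribed precision.
    n_steps = int(round((b - a) / h))
    t, x = a, x0
    path = [(t, x)]
    for k in range(1, n_steps + 1):
        x = x - h * F_t(t, x) / F_x(t, x)   # Euler predictor
        t = a + k * h                        # next parameter value
        for _ in range(newton_iters):        # Newton corrector at fixed t
            x = x - F(t, x) / F_x(t, x)
        path.append((t, x))
    return path
```

The point of the corrector is visible even in this toy setting: the Euler predictor alone accumulates O(h) global error, while the Newton iteration drives the residual F(t, x) down to roundoff at every accepted point.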