SIAM J. SCI. COMPUT. (c) 20XX Society for Industrial and Applied Mathematics
Vol. 0, No. 0, pp. 000-000

THE ITERATIVE SOLVER RISOLV WITH APPLICATION TO THE EXTERIOR HELMHOLTZ PROBLEM*

HILLEL TAL-EZER† AND E. TURKEL‡

Abstract. The innermost computational kernel of many large-scale scientific applications is a large set of linear equations of the form Ax = b, whose solution typically consumes a significant portion of the overall computational time of the simulation. The traditional approach is to use direct methods. This approach is often preferred in industry because direct solvers are robust and effective for moderate-size problems. However, for large-scale problems direct methods can consume a huge amount of memory and CPU time; in these cases, iterative techniques are the only viable alternative. Unfortunately, iterative methods lack the robustness of direct methods, and the situation is especially difficult when the matrix is nonsymmetric. Much research has been devoted to developing a robust iterative algorithm for nonsymmetric systems. The present paper describes a new robust and efficient algorithm for solving nonsymmetric linear systems iteratively. It is based on seeking an approximation to the "optimal" polynomial P_m(z), which satisfies

    ||P_m(z)||_∞ = min_{Q ∈ Π_m} ||Q(z)||_∞,    z ∈ D,

where Π_m is the set of all polynomials Q of degree m which satisfy Q(0) = 1, and D is a domain in the complex plane which includes all the eigenvalues of A. The resulting algorithm is an efficient one, especially in the case where we have a set of linear systems which share the same matrix A. We present several applications, including the exterior Helmholtz problem, which leads to a large indefinite, nonsymmetric, and complex system.

Key words. Risolv, Krylov, Helmholtz exterior, preconditioning

AMS subject classifications. 15A15, 15A09, 15A23

DOI. 10.1137/08072454X

1. Introduction.
We address the problem of solving a set of linear equations

(1.1)    Ax = b,

where A is a large, general N × N matrix and b is an N × 1 vector. In real-life problems, preconditioning is mandatory; in this paper we do not address this issue. When A is a symmetric positive definite matrix, the conjugate gradient method [13] is known to be the optimal algorithm. Similarly, there are optimal algorithms for the indefinite, symmetric case (e.g., MINRES, SYMMLQ [23]). The situation is significantly more complicated in the general, nonsymmetric case. A popular algorithm for this type of problem is the well-known GMRES algorithm [25], which is optimal in the following sense. Let x_{m-1} be the solution vector after applying m - 1 matrix-vector multiplications:

(1.2)    x_{m-1} = x_0 + Q_{m-1}(A) r_0,

where x_0 and r_0 are the initial guess and the initial residual, respectively. Hence,

(1.3)    r_{m-1} = b - A x_{m-1} = b - A(x_0 + Q_{m-1}(A) r_0) = (I - A Q_{m-1}(A)) r_0 = P_m(A) r_0,

----
*Received by the editors May 19, 2008; accepted for publication (in revised form) August 5, 2009; published electronically DATE.
http://www.siam.org/journals/sisc/x-x/72454.html
†School of Computer Science, Academic College of Tel-Aviv Yaffo, Tel Aviv 64044, Israel (hillel@mta.ac.il).
‡School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel (turkel@post.tau.ac.il).
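The residual polynomial identity in (1.3) can be checked numerically in the simplest case, m = 1. The following pure-Python sketch (illustrative only; the 2×2 matrix, the step size alpha, and the helper names matvec/axpy are our own choices, not part of the paper) takes Q_0(A) = alpha·I, forms x_1 = x_0 + alpha·r_0, and verifies that the new residual equals P_1(A) r_0 with P_1(z) = 1 - alpha·z, so that P_1(0) = 1 as required of the polynomials in Π_m.

```python
# Hypothetical illustration of (1.3) for m = 1: with Q_0(A) = alpha*I,
# the update x_1 = x_0 + alpha*r_0 yields r_1 = (I - alpha*A) r_0 = P_1(A) r_0.

def matvec(A, v):
    """Dense matrix-vector product for a list-of-rows matrix."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def axpy(alpha, x, y):
    """Return y + alpha*x, componentwise."""
    return [yi + alpha * xi for xi, yi in zip(x, y)]

A = [[4.0, 1.0], [2.0, 3.0]]   # small illustrative matrix (nonsymmetric)
b = [1.0, 2.0]
x0 = [0.0, 0.0]                # initial guess
alpha = 0.2                    # arbitrary illustrative step size

r0 = axpy(-1.0, matvec(A, x0), b)   # r0 = b - A x0
x1 = axpy(alpha, r0, x0)            # x1 = x0 + alpha*r0   (eq. (1.2), m = 1)
r1 = axpy(-1.0, matvec(A, x1), b)   # r1 = b - A x1        (eq. (1.3))

# P_1(A) r0 = (I - alpha*A) r0
p1_r0 = axpy(-alpha, matvec(A, r0), r0)

# The two expressions for the residual agree to rounding error.
assert all(abs(u - v) < 1e-12 for u, v in zip(r1, p1_r0))
```

The same bookkeeping extends to any m: each additional matrix-vector product raises the degree of Q by one, and the residual polynomial P_m(z) = 1 - z Q_{m-1}(z) always satisfies P_m(0) = 1.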