New algorithm for polynomial plus-minus factorization based on band structured matrix decomposition

Martin Hromčík, Michael Šebek
Centre for Applied Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
fax: +420-2-2435 7681, e-mail: m.hromcik@c-a-k.cz

Abstract— A new algorithm for the plus/minus factorization of a scalar discrete-time polynomial is presented in this report. The method is based on the relationship of polynomial algebra to the algebra of band-structured infinite-dimensional matrices. Employing standard numerical routines for factorizations of constant matrices brings computational efficiency and reliability. The performance of the proposed algorithm is demonstrated on a practical application: the problem of computing an l1-optimal output feedback dynamic compensator for a discrete-time SISO plant, as studied by Hurak et al. in [6]. The plus-minus factorization involved is resolved by our new method.

I. INTRODUCTION

This paper describes a new method for the plus-minus factorization of a discrete-time polynomial. Given a polynomial in the z variable,

p(z) = p_0 + p_1 z + p_2 z^2 + ... + p_n z^n,

without any roots on the unit circle, its plus/minus factorization is defined as

p(z) = p+(z) p-(z)    (1)

where p+(z) has all its roots inside and p-(z) all its roots outside the unit disc. Clearly, the scalar plus/minus factorization is unique up to a scaling factor.

Polynomial plus/minus factorization has many applications in control and signal processing problems. For instance, efficient algebraic design methods for time-optimal controllers [1], quadratically optimal filters for mobile phones [15], [16], and l1-optimal regulators [6], to name just a few, all require the +/- factorization as a crucial computational step.

II. EXISTING METHODS

From the computational point of view, nevertheless, the task is not well treated. There are two quite natural methods.
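As a small illustration of definition (1) (our own sketch, not part of the original paper), the factorization of a low-degree polynomial can be obtained directly from its roots with numpy. The helper name plus_minus_factor and the convention of keeping the plus factor monic are our assumptions:

```python
# Illustrative sketch only (not the algorithm proposed in this paper):
# plus/minus factorization of a low-degree polynomial via its roots.
import numpy as np

def plus_minus_factor(p):
    """Split p(z) = p[0] + p[1] z + ... + p[n] z^n (ascending coefficients,
    no roots on the unit circle) into (p_plus, p_minus), where p_plus has
    all roots inside and p_minus all roots outside the unit disc."""
    roots = np.roots(p[::-1])                 # np.roots expects descending order
    inside = roots[np.abs(roots) < 1.0]
    outside = roots[np.abs(roots) > 1.0]
    p_plus = np.real(np.poly(inside))[::-1]   # monic, ascending coefficients
    p_minus = p[-1] * np.real(np.poly(outside))[::-1]  # carries leading coeff
    return p_plus, p_minus

# Example: p(z) = (z - 0.5)(z - 2) = 1 - 2.5 z + z^2
p = np.array([1.0, -2.5, 1.0])
pp, pm = plus_minus_factor(p)
# pp is close to [-0.5, 1] (i.e. z - 0.5), pm to [-2, 1] (i.e. z - 2)
```

As the next section notes, this direct root-based approach degrades for high degrees and multiple roots; it serves here only to fix the notation.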
One of them is based on the direct computation of roots. Using standard methods for polynomial root evaluation, see [8], [17] for instance, one can separate the stable and unstable roots of p(z) directly and construct the plus and minus parts from the related first-order factors or, alternatively, employ a more efficient recursive procedure based on matrix eigenvalue theory [17].

An alternative algorithm relies on polynomial spectral factorization and greatest polynomial divisor computation. If q(z) is the spectral factor of the symmetric product p(z)p(z^-1), then the greatest common divisor of p(z) and q(z) is obviously the plus factor of p(z). The minus factor can be derived similarly from p(z^-1) and q(z^-1). As opposed to the previous approach based on direct root computation, which typically runs into problems for higher degrees and/or root multiplicities, this procedure relies on numerically reliable algorithms for polynomial spectral factorization [13], [5]. Unfortunately, the polynomial greatest common divisor computation is much more sensitive. As a result, neither of these techniques works properly for high degrees (say over 50).

Quite recently, a new approach to the problem was suggested by the authors of this report in [14]. The method is inspired by an efficient algorithm for polynomial spectral factorization, see [5]. It provides both a fruitful view of the relation between the DFT and Z-transform theory, and a powerful computational tool in the form of the fast Fourier transform algorithm.

The success of adapting a powerful spectral factorization algorithm to the plus-minus factorization was inspiring for us. We decided to take a similar route with another spectral factorization procedure, namely Bauer's method, which is described in the following sections.

III. BAUER'S METHOD FOR POLYNOMIAL SPECTRAL FACTORIZATION

F. L. Bauer published his method for spectral factorization of a discrete-time scalar polynomial in 1955, see [2], [3].
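Before detailing Bauer's procedure, the spectral-factor/GCD identity from Section II can be checked symbolically. The snippet below is our own illustration in exact rational arithmetic (floating-point polynomial GCDs are precisely the sensitive step mentioned above); the example polynomial is ours:

```python
# Symbolic check of the GCD-based route of Section II (our illustration).
import sympy as sp

z = sp.symbols('z')
p = (z - sp.Rational(1, 2)) * (z - 2)     # one stable root, one unstable

# A spectral factor q(z) of p(z) p(1/z): collect the stable roots of the
# symmetric product; here the root 1/2 appears with multiplicity two.
q = (z - sp.Rational(1, 2))**2

# q(z) q(1/z) equals p(z) p(1/z) up to a constant factor:
ratio = sp.cancel((p * p.subs(z, 1/z)) / (q * q.subs(z, 1/z)))

# gcd(p, q) recovers the plus factor z - 1/2:
p_plus = sp.gcd(sp.expand(p), sp.expand(q))
```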
The procedure is based on the relationship between polynomials and related infinite Toeplitz-type Sylvester matrices.
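Our reading of this relationship can be sketched numerically: truncate the infinite banded symmetric Toeplitz matrix of the symmetric product r(z) and Cholesky-factorize it; rows of the Cholesky factor far from the top approach the spectral factor coefficients. The function below is a hedged sketch of this idea, not the paper's actual algorithm; the function name, the truncation size N, and the example are our choices:

```python
# Hedged numerical sketch of Bauer's idea (our reading, not the exact
# procedure of [2], [3]).
import numpy as np

def bauer_spectral_factor(r, N=200):
    """r = [r_0, r_1, ..., r_n] defines the symmetric Laurent polynomial
    r(z) = r_0 + sum_k r_k (z^k + z^-k), assumed positive on the unit
    circle. Returns approximate coefficients [q_0, ..., q_n] of a factor
    q(z) with r(z) = q(z) q(1/z); in this orientation q comes out with
    its roots outside the unit disc. N is a heuristic truncation size."""
    r = np.asarray(r, dtype=float)
    n = len(r) - 1
    band = np.zeros(N)
    band[:n + 1] = r
    idx = np.arange(N)
    T = band[np.abs(idx[:, None] - idx[None, :])]   # banded Toeplitz truncation
    L = np.linalg.cholesky(T)                       # T = L L^T, L lower triangular
    # The last row of L approaches (q_n, ..., q_1, q_0); reverse it:
    return L[-1, N - n - 1:][::-1]

# Example: r(z) = (1 + 0.5 z)(1 + 0.5 z^-1), i.e. r_0 = 1.25, r_1 = 0.5
q = bauer_spectral_factor([1.25, 0.5])
# q approaches [1.0, 0.5], i.e. q(z) = 1 + 0.5 z (root at z = -2)
```

Only standard, well-tested dense Cholesky routines are involved, which is the source of the numerical reliability the abstract refers to.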