Sparse and robust signal reconstruction algorithm

Sandra V. B. Jardim
Information Technology Department, Polytechnic Institute of Tomar, 2300-313 Tomar, Portugal

ABSTRACT: Many problems in signal processing and statistical inference require finding a sparse solution to an underdetermined linear system. The reference approach to this problem of finding sparse signal representations on overcomplete dictionaries leads to convex unconstrained optimization problems, with a quadratic ℓ2 term for the adjustment to the observed signal and an ℓ1-norm of the coefficient vector. This work focuses on the development and experimental analysis of algorithms for the solution of ℓq-ℓp optimization problems, where p, q ∈ ]0, 2], of which ℓ2-ℓ1 is an instance. The ℓq-norm, with q < 2, in the data term gives statistical robustness to the approximation criterion. The developed algorithms belong to the majorization-minimization class, in which the solution of the problem is obtained by minimizing a succession of majorizers of the original objective function. Each iteration corresponds to the solution of an ℓ2-ℓ1 problem; these subproblems are reformulated as quadratic programming problems and solved by the projected gradient algorithm. When tested on synthetic data and image reconstruction problems, the implemented algorithms show good performance in both compressed sensing and signal restoration scenarios.

1 INTRODUCTION

Sparse approximation problems have attracted great interest given their wide applicability both in signal and image processing and in statistical inference, where many of the problems to be solved involve determining sparse solutions of underdetermined linear systems. The literature on sparsity optimization is growing rapidly (see [1, 2] and references therein). More recently, sparsity techniques have also been receiving increased attention in the optimal control community [3, 4]. Given an input signal y, solving a sparse approximation problem amounts to determining an approximated signal x through a linear combination of elementary signals which, in several current applications, are extracted from a set of signals that are not necessarily linearly independent. A preference for sparse linear combinations is imposed by penalizing nonzero coefficients; the most common penalty is the number of elementary signals that participate in the approximation. On account of their combinatorial nature, due to the presence of the ℓ0 quasi-norm of the coefficient vector to be estimated, sparse approximation problems are computationally difficult to solve; in general, they are NP-hard [5]. Among the different approaches used to overcome this difficulty, convex relaxation is the most widely used: it replaces the ℓ0 quasi-norm by a related convex function. Given a specific model formulation and a data set, the ℓ1 norm is a selective convex function, which sets a high number of coefficients to zero while still ensuring a single optimal solution. The key properties of the ℓ1 norm are its shrinkage capacity and its convexity; it can be seen as the most selective shrinkage function that is also convex. Selectivity tends to produce sparse models, since many of the coefficients are set to zero, while convexity ensures that a global minimum can be determined for a given data set.
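For concreteness, the combinatorial sparse approximation problem and its convex relaxation can be written as follows. This is the standard textbook formulation consistent with the description above, with φ denoting the dictionary (measurement) matrix used later in this section; the equality-constrained form is one common variant, not a formula taken from this paper:

```latex
% l0 sparse approximation (combinatorial, NP-hard) and its l1 convex relaxation
\min_{x \in \mathbb{R}^n} \lVert x \rVert_0
  \quad \text{s.t.} \quad \phi x = y
\qquad \longrightarrow \qquad
\min_{x \in \mathbb{R}^n} \lVert x \rVert_1
  \quad \text{s.t.} \quad \phi x = y
```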
Thus, in the convex relaxation of sparse optimization problems, the ℓ0 quasi-norm is replaced by the ℓ1 norm, whose minimization can, under certain sufficient conditions, efficiently recover every s-sparse vector x ∈ ℜ^n from the measurement vector y = φx ∈ ℜ^k [6, 7]. One of the most common applications of optimization problems involving the ℓ1 norm is the determination of sparse representations on overcomplete dictionaries. In these cases the usual approach leads to convex unconstrained optimization problems involving a quadratic ℓ2 term of adjustment to the observed signal and an ℓ1 norm of the coefficient vector to be estimated. This type of problem is usually termed ℓ2-ℓ1 (1) and has several applications, among which are the Least Absolute Shrinkage and Selection Operator (LASSO) [8] and the Basis Pursuit Denoising criterion (BPDN) [9]. In the signal processing area, compressed sensing is another important application of sparse approximation problems, aiming to capture a signal from a number of measurements as low as possible.
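As a sketch of the inner solver described in the abstract, each ℓ2-ℓ1 subproblem can be recast as a bound-constrained quadratic program via the standard split x = u − v with u, v ≥ 0, and then handled by projected gradient (the projection is simply clipping at zero). The function name, the regularization weight lam, the fixed step size, and the iteration count below are illustrative assumptions, not details taken from the paper; in the majorization-minimization scheme of the abstract, each outer iteration would call a solver of this kind on the current majorizer:

```python
import numpy as np

def l2_l1_projected_gradient(Phi, y, lam, n_iter=500):
    """Sketch: minimize 0.5*||y - Phi x||_2^2 + lam*||x||_1.

    Standard reformulation as a bound-constrained QP via the split
    x = u - v with u, v >= 0, solved by projected gradient descent.
    Illustrative only; not the paper's implementation.
    """
    n = Phi.shape[1]
    # Fixed step below 1/L, where L = 2*||Phi||_2^2 bounds the QP Hessian norm.
    step = 1.0 / (2.0 * np.linalg.norm(Phi, 2) ** 2)
    u = np.zeros(n)
    v = np.zeros(n)
    for _ in range(n_iter):
        g = Phi.T @ (Phi @ (u - v) - y)              # gradient of the data term
        u = np.maximum(u - step * (g + lam), 0.0)    # project onto u >= 0
        v = np.maximum(v - step * (-g + lam), 0.0)   # project onto v >= 0
    return u - v

# Toy usage: recover a sparse vector from a few random measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 30, 77]] = [2.0, -1.5, 1.0]
y = Phi @ x_true
x_hat = l2_l1_projected_gradient(Phi, y, lam=0.1)
```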