An Improved Compressive Sensing Reconstruction Algorithm Using Linear/Non-Linear Mapping

Xinyu Zhang*, Jiangtao Wen*, Yuxing Han and John Villasenor
*Tsinghua University, Beijing, China
Email: {xy-zhang06, jtwen}@mails.tsinghua.edu.cn
Electrical Engineering Department, University of California, Los Angeles, CA 90095
Email: {ericahan, villa}@ee.ucla.edu

Abstract— We describe an improved algorithm for signal reconstruction based on the Orthogonal Matching Pursuit (OMP) algorithm. In contrast with the traditional implementation of OMP in compressive sensing (CS), we introduce a preprocessing step that converts the signal into a distribution that can be more easily reconstructed. This preprocessing introduces negligible additional complexity, but enables a significant improvement in reconstruction accuracy.

I. INTRODUCTION

Compressive sensing (CS) refers to a growing body of techniques that enable accurate recovery of sparsely sampled signals. Foundational contributions to CS include the work of Donoho, Candès, Romberg, Tao, and others [1], [2], [3], [4], [5]. The challenge of CS reconstruction, also referred to as the sparse approximation problem, is to solve an underdetermined system of linear equations using sparse priors. The Orthogonal Matching Pursuit (OMP) algorithm [6], [7], [8] and $\ell_1$-minimization (also called basis pursuit) [1], [2], [9] are two widely studied CS reconstruction algorithms. OMP solves the reconstruction problem by identifying, in each iteration, the component of the sparse signal most strongly correlated with the measurements. Stagewise OMP (StOMP) [10] and Regularized OMP (ROMP) [11] are two variants of the original OMP algorithm. The other commonly explored algorithm, $\ell_1$-minimization, replaces the original reconstruction problem with a linear programming problem.
It then solves the linear programming problem using well-established convex optimization approaches such as the primal-dual interior-point method [12]. $\ell_1$-minimization is generally believed to offer better reconstruction performance than OMP, while OMP has the advantage of simpler implementation and faster running speed [8]. Other reconstruction algorithms include iterative thresholding methods [13], [14] and various Bayesian methods [15], [16]. A more detailed summary of CS reconstruction algorithms can be found in [17].

Papers on Bayesian Compressive Sensing [15] and optimally tuned reconstruction algorithms [18] have studied the modeling of sparse signals and the worst-case amplitude distribution of the non-zero components. However, compared with other aspects of compressive sensing, the impact of this distribution and its potential use for improving reconstruction performance is much less well investigated. In an upcoming publication based on our earlier work in this area [19], through extensive experiments and heuristic analysis, we show that by introducing a preprocessing step $D$ using either a linear or non-linear mapping, the relative error between $Dx$ and $\widehat{Dx}$ is smaller than that between $x$ and $\hat{x}$, where $x$ is the original signal and $\hat{x}$ is the reconstructed signal. In other words, one can convert a sparse signal with a distribution that is hard to reconstruct into one that is easier to reconstruct. In [19], we study the impact of this method for $\ell_1$-minimization and iterative thresholding algorithms. In the present paper, we show that the relative error between $x$ and $D^{-1}(\widehat{Dx})$ is smaller than the relative error between $x$ and $\hat{x}$ when the OMP algorithm is used. In what follows, we first provide analysis and experimental results on OMP reconstruction performance for sparse signals with different non-zero distributions.
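The preprocessing idea just described can be sketched in a few lines. The snippet below uses a hypothetical elementwise signed power-law mapping as a stand-in for $D$ (the paper's actual linear and non-linear mappings are specified later); it only illustrates that $D$ is invertible, so the pipeline $y = A\,D(x) \rightarrow \widehat{Dx} \rightarrow D^{-1}(\widehat{Dx})$ loses no information in the mapping itself.

```python
import numpy as np

# Hypothetical elementwise mapping D (a signed power law), used only to
# illustrate the preprocessing idea; it is NOT the paper's specific mapping.
def D(x, p=0.5):
    return np.sign(x) * np.abs(x) ** p

def D_inv(z, p=0.5):
    return np.sign(z) * np.abs(z) ** (1.0 / p)

def relative_error(x, x_hat):
    return np.linalg.norm(x - x_hat) / np.linalg.norm(x)

# A k-sparse test signal with Gaussian non-zero amplitudes.
rng = np.random.default_rng(0)
x = np.zeros(256)
support = rng.choice(256, size=8, replace=False)
x[support] = rng.normal(size=8)

# D is invertible, so no information is lost by preprocessing:
assert np.allclose(D_inv(D(x)), x)

# The improved pipeline then reconstructs Dx instead of x and maps back:
#   y = A @ D(x)  ->  z_hat = reconstruct(A, y)  ->  x_hat = D_inv(z_hat)
```

Note that $D$ preserves the support (and hence the sparsity) of $x$; it only reshapes the amplitude distribution of the non-zero components.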
We then propose linear and non-linear mappings on top of the traditional OMP algorithm, and show that our improved OMP algorithm offers better reconstruction performance without increasing the complexity and overhead of sampling and reconstruction.

II. ORTHOGONAL MATCHING PURSUIT ALGORITHM

A. Mathematical Formulations and Description of OMP

Assume a sparse signal $x \in \mathbb{R}^n$ with $k$ non-zero elements (called $k$-sparse), observed via an $m \times n$ measurement matrix $A$ with $m < n$, producing the measurement $Ax = y \in \mathbb{R}^m$. Let $a_i$ denote the $i$th column of $A$, where $i \in [n]$ and $[n] := \{1, 2, \ldots, n\}$. Since the measurement $y$ is a linear combination of $k$ columns of $A$, the reconstruction of $x$ can be recast as the problem of identifying the locations of these $k$ columns. OMP solves this problem with a greedy approach. During each iteration, OMP selects the column of $A$ that is most strongly correlated with the residual of the measurement $y$, and then removes the contribution of this column to compute a new residual. Table I contains a description of OMP.

B. The Approximation Error Bounds of OMP

Most research regarding OMP relies on the coherence statistic of the matrix $A$. The coherence statistic measures the correlation between different columns of $A$ using the absolute value of the inner product:

$$\mu \triangleq \max_{i \neq j} |\langle a_i, a_j \rangle|. \quad (1)$$

OMP generally requires $\mu$ to be small in order to avoid the relatively small non-zero components being masked by large
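The greedy loop described in Section II-A can be sketched as follows. This is a minimal NumPy implementation mirroring the description referenced in Table I (select the column most correlated with the residual, re-solve least squares on the selected support, update the residual); the Gaussian measurement matrix in the usage example is an assumption for illustration only, as the paper does not fix $A$ at this point.

```python
import numpy as np

def omp(A, y, k):
    """Reconstruct a k-sparse x from y = A @ x via Orthogonal Matching Pursuit."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Select the column of A most correlated with the current residual.
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0          # do not reselect chosen columns
        support.append(int(np.argmax(correlations)))
        # Re-solve least squares on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Usage sketch with a random Gaussian measurement matrix (illustrative choice).
rng = np.random.default_rng(1)
n, m, k = 128, 64, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)     # roughly unit-norm columns
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
x_hat = omp(A, A @ x, k)
```

Each iteration costs one matrix-vector product plus a small least-squares solve, which is why OMP is attractive when implementation simplicity and speed matter.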