NON-NEGATIVE MATRIX FACTORISATION OF COMPRESSIVELY SAMPLED NON-NEGATIVE SIGNALS

Paul D. O'Grady    Scott T. Rickard
Complex & Adaptive Systems Laboratory, University College Dublin, Belfield, Dublin 4, Ireland.

ABSTRACT

The emerging theory of Compressive Sampling has demonstrated that, by exploiting the structure of a signal, it is possible to sample a signal below the Nyquist rate and achieve perfect reconstruction. In this short note, we employ Non-negative Matrix Factorisation in the context of Compressive Sampling and propose two NMF algorithms for signal recovery, one of which utilises Iteratively Reweighted Least Squares. The algorithms are applied to compressively sampled non-negative data, where a sparse non-negative basis and corresponding non-negative coefficients for the original uncompressed data are discovered directly in the compressively sampled domain.

1. INTRODUCTION

The Nyquist-Shannon sampling theorem states that, in order for a continuous-time signal to be represented without error from its samples, the signal must be sampled at a rate that is at least twice its bandwidth. In practice, signals are often compressed soon after sampling, trading off perfect recovery for some acceptable level of error. Clearly, this is a waste of valuable sampling resources. In recent years, a new and exciting theory of Compressive Sampling (CS) [1, 2] (also known as compressed sensing, among other related terms) has emerged, in which a signal is sampled and compressed simultaneously using sparse representations, at a greatly reduced sampling rate. The central idea is that the number of samples needed to recover a signal perfectly depends on the structural content of the signal, as captured by a sparse representation that parsimoniously represents the signal, rather than on its bandwidth.
More formally, CS is concerned with the solution, x ∈ ℝ^N, of an under-determined system of linear equations of the form ΦAx = Φy, where the sensing matrix Φ ∈ ℝ^{M×N} has fewer rows than columns, i.e., M < N. Critical to the theory of CS is the assumption that the solution x is sparse, i.e., y has a parsimonious representation in a known fixed basis A ∈ ℝ^{N×N}. The most natural norm constraint for this assumption is the ℓ₀ (pseudo-)norm, as it counts the number of non-zero coefficients. However, minimisation of the ℓ₀ norm is a non-convex optimisation problem, which is NP-complete and cannot be computed in polynomial time. For these reasons the ℓ₁ norm is usually specified, as it is computationally tractable and also recovers sparse solutions,

    min_{x ∈ ℝ^N} ‖x‖₁,  subject to  ΦAx = Φy,    (1)

where the recovered signal, x, is such a solution. In order to specify the minimal number of measurements, M, required to achieve perfect recovery, Φ needs to be maximally incoherent with A, i.e., have a non-parsimonious representation in that basis, a notion which is contrary to sparseness. Typically, the entries of Φ are drawn from a random Gaussian distribution, as it is universally incoherent with sparse transformations and performs exact recovery with the minimal number of measurements with high probability. Furthermore, Candès and Tao [3] present an important result that gives a lower bound on M that reliably achieves perfect recovery for a K-sparse signal (‖x‖₀ = K): M ≥ CK log(N), where C depends on the desired probability of success, which tends to one as N → ∞. In this short note, we outline two Non-negative Matrix Factorisation algorithms that discover factors for uncompressed non-negative data in the compressively sampled domain.

This material is based upon works supported by the Science Foundation Ireland under Grant No. 05/YI2/I677.
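The equality-constrained ℓ₁ minimisation in (1) can be posed as a linear programme by splitting x into its positive and negative parts. The sketch below illustrates this for the simplest setting, a signal that is sparse in the canonical basis (A = I), using SciPy's `linprog`; the problem sizes (N = 64, M = 32, K = 4) and the Gaussian sensing matrix are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 64, 32, 4                      # ambient dim, measurements, sparsity

# K-sparse ground-truth signal (sparse in the canonical basis, so A = I)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# random Gaussian sensing matrix and compressed measurements
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
b = Phi @ x_true

# l1 minimisation as an LP: write x = u - v with u, v >= 0, then
# min sum(u + v)  subject to  [Phi, -Phi] [u; v] = b
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([Phi, -Phi]), b_eq=b, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
```

Here M = 32 comfortably exceeds the CK log(N) bound for K = 4, so exact recovery is expected with high probability.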
The first algorithm utilises Iteratively Reweighted Least Squares as an approximation to the ℓ₁-norm objective, while the second algorithm is a modification of the standard least squares NMF algorithm of Lee and Seung [4]. This note is organised as follows: We discuss Iteratively Reweighted Least Squares in Section 2 and Non-negative Matrix Factorisation in Section 3. We overview Non-negative Underdetermined Iteratively Reweighted Least Squares and demonstrate signal recovery in Section 4. Finally, we propose two algorithms for Non-negative Matrix Factorisation in the CS domain and present an image recovery example in Section 5; followed by a conclusion in Section 6.

2. ITERATIVELY REWEIGHTED LEAST SQUARES

We desire the minimum ℓ₁-norm solution for systems of linear equations, and require an objective function that recovers such solutions. However, the ℓ₁-norm objective is non-differentiable at the origin, and therefore cannot be minimised using standard gradient methods. Typically, the ℓ₁-norm objective is approximated by Iteratively Reweighted Least
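To make the IRLS idea concrete, the sketch below shows one common variant (the FOCUSS-style iteration): each step solves a weighted minimum-norm problem whose weights are taken from the current estimate, so that small coefficients are progressively driven towards zero. This is an illustrative implementation under assumed problem sizes, not the specific algorithm proposed later in the note.

```python
import numpy as np

def irls_l1(A, b, n_iter=100, eps=1e-8):
    """Approximate min ||x||_1 s.t. Ax = b by iteratively reweighted
    least squares. eps is a small floor that keeps the weights positive."""
    # initialise with the minimum l2-norm solution
    x = A.T @ np.linalg.solve(A @ A.T, b)
    for _ in range(n_iter):
        w = np.abs(x) + eps              # weights from the current estimate
        AW = A * w                       # equivalent to A @ diag(w)
        # weighted minimum-norm solution: x = diag(w) A^T (A diag(w) A^T)^-1 b
        x = w * (A.T @ np.linalg.solve(AW @ A.T, b))
    return x

# recover a K-sparse signal from M < N random Gaussian measurements
rng = np.random.default_rng(1)
N, M, K = 64, 32, 4
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = irls_l1(A, A @ x_true)
```

Because each iteration is an ordinary (weighted) least squares solve, the scheme sidesteps the non-differentiability of the ℓ₁ norm at the origin while still converging to a sparse solution.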