k-bit Hamming Compressed Sensing

Tianyi Zhou
Centre for Quantum Computation & Intelligent Systems
FEIT, University of Technology Sydney, Australia
Email: tianyi.david.zhou@gmail.com

Dacheng Tao
Centre for Quantum Computation & Intelligent Systems
FEIT, University of Technology Sydney, Australia
Email: dacheng.tao@uts.edu.au

Abstract—We consider recovering a d-level quantization of a signal from a k-level quantization of its linear measurements. This problem has great potential in practical systems, but has not been fully addressed in compressed sensing (CS). We tackle it by proposing k-bit Hamming compressed sensing (HCS), which reduces decoding to a series of hypothesis tests of the bin in which the signal lies. Each test amounts to an independent nearest neighbor search for a histogram estimated from the quantized measurements. The method builds on the fact that the distribution of the ratio between two random projections is determined by their intersection angle. Compared to CS and 1-bit CS, k-bit HCS incurs lower cost in both hardware and computation. It admits a trade-off between recovery/measurement resolution and measurement amount, and thus is more flexible than 1-bit HCS. A rigorous analysis establishes its error bound. An extensive empirical study further justifies its appealing accuracy, robustness and efficiency.

I. INTRODUCTION

Recently, a flourishing body of work in compressed sensing (CS) [1][2] has shown that accurate recovery can be achieved by sampling a signal at a rate proportional to its underlying "information content" rather than its bandwidth. The key improvement of CS is that the sampling rate can be reduced significantly below the Nyquist rate by replacing uniform sampling with linear measurements, provided the signals are sparse or compressible in some dictionary. In particular, CS aims to rebuild a sparse signal x ∈ R^n from its linear measurements by solving an underdetermined system:

    min_{x ∈ R^n} ‖x‖_p  s.t.  y = Φx,  (0 ≤ p < 2)

where Φ ∈ R^{m×n} is the sensing matrix, allowing m ≪ n and fulfilling the restricted isometry property (RIP) or an incoherence condition, and the ℓ_p norm encourages sparsity. It has been proved that a number of random matrix ensembles satisfy the RIP.

However, in practical digital systems the measurements have to be discretized to a finite number of bits by quantization, which yields only an interval in which each measurement lies. If we use the centroid of the given interval to approximate the linear measurement in traditional CS methods, the distortion caused by quantization can be ignored only when the quantization is of high resolution. But this requires expensive ADCs. Thus several recent studies develop CS methods that treat each measurement as an uncertain value distributed within the given interval:

    min_{x ∈ R^n} ‖x‖_p  s.t.  u ≤ Φx ≤ v,  (0 ≤ p < 2)

They observe that, by this means, CS can succeed even with very coarsely quantized measurements. An extreme case is y = sign(Φx): 1-bit CS [3][4][5] ensures consistent reconstruction of signals on the unit ℓ_2 sphere [6]. The 1-bit measurements lead to a low-cost and robust hardware implementation.

Nevertheless, the signal rebuilt by 1-bit CS and other quantized compressed sensing methods [7][8] is still continuous, so it has to be discretized when stored or transmitted in practical systems. Although additional ADCs can be employed for this task, they incur extra cost. Moreover, the time-consuming iterative optimization in CS and 1-bit CS limits their efficiency. Furthermore, a system could trade accuracy for efficiency more flexibly if we had the freedom to adjust the number of bits used for the measurements and the recovery. The recent 1-bit HCS [9] therefore directly recovers a d-level quantization of the signal rather than the signal itself. In 1-bit HCS, the 1-bit measurements generate i.i.d.
samples of a random variable, and the nearest neighbor of this variable's distribution among certain reference distributions indicates the quantized signal. Since quantization is irreversible and loses information, 1-bit HCS relies only weakly on the sparsity assumption.

This paper extends 1-bit HCS to the k-bit measurement case in both algorithm and theory. Unlike 1-bit HCS, which bridges the signal and the 1-bit measurements [4] via the distribution of the sign of the product of two random projections, we investigate the distribution of the ratio between two random projections. This ratio follows a Cauchy distribution uniquely parameterized by one dimension of the signal, and its histogram can be estimated from a k-level quantization of the linear measurements. Interestingly, the Bernoulli distribution in 1-bit HCS is a special case: a 2-bin histogram in k-bit HCS. In recovery, for each dimension of the signal, k-bit HCS searches for the nearest neighbor of the estimated histogram among d predefined reference histograms, which correspond to the d bins used for signal quantization. This can be seen as a hypothesis test, theoretically supported by a concentration inequality for random functions. Compared to 1-bit HCS, the signal quantization in k-bit HCS is more flexible because the d bins can be chosen independently of the d reference histograms.

The primary contributions of k-bit HCS are: 1) its direct recovery of the quantized signal from quantized measurements largely saves hardware cost in practical systems; 2) its recovery of each dimension is an independent nearest neighbor search among d histograms, and thus it is considerably more efficient than CS and 1-bit CS. Moreover, it is straightforward to further accelerate it by parallel computing and fast nearest neighbor
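The decoding primitive described above, estimating the histogram of ratios of random projections and matching it against reference Cauchy histograms, can be sketched numerically. The following is a minimal illustrative sketch, not the authors' implementation: for a unit-norm signal x and Gaussian Φ, the ratio y_j / Φ_{j,i} is a ratio of unit-variance normals with correlation x_i, hence Cauchy with location x_i and scale sqrt(1 - x_i^2). For simplicity it histograms unquantized ratios (the paper estimates the histogram from k-level quantized measurements), and all sizes, bin edges, and quantization levels are assumed values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (sizes and values are illustrative, not from the paper).
n, m = 32, 20000               # signal dimension, number of measurements
x = np.zeros(n)
x[2], x[7] = 0.8, 0.6          # unit-norm sparse signal
Phi = rng.standard_normal((m, n))
y = Phi @ x                    # linear measurements (unquantized here)

# Histogram bins on [-3, 3]; the outermost bins absorb the heavy Cauchy tails.
edges = np.linspace(-3.0, 3.0, 13)

def cauchy_cdf(t, loc, scale):
    # CDF of the Cauchy distribution with given location and scale.
    return 0.5 + np.arctan((t - loc) / scale) / np.pi

# One reference histogram per candidate quantization level c of x_i:
# for unit-norm x, y_j / Phi[j, i] ~ Cauchy(loc=c, scale=sqrt(1 - c^2)).
levels = np.array([-0.8, -0.6, -0.3, 0.0, 0.3, 0.6, 0.8])
inner = edges[1:-1]
refs = []
for c in levels:
    cdf = cauchy_cdf(inner, c, np.sqrt(1.0 - c**2))
    # Prepend 0 and append 1 so the edge bins include the tail mass.
    refs.append(np.diff(np.concatenate(([0.0], cdf, [1.0]))))
refs = np.array(refs)

def decode(i):
    """Recover the quantization level of x[i] by nearest-neighbor search
    (in L1 distance) between the empirical ratio histogram and the
    reference histograms."""
    ratios = np.clip(y / Phi[:, i], -3.0, 3.0)
    h, _ = np.histogram(ratios, bins=edges)
    h = h / h.sum()
    return levels[np.argmin(np.abs(refs - h).sum(axis=1))]
```

With enough measurements, `decode(2)` and `decode(7)` return the levels nearest x[2] and x[7], and each dimension is decoded independently, which is what makes the per-dimension searches trivially parallelizable.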