Recursive Consistent Estimation with Bounded Noise
Sundeep Rangan, Member, IEEE, and Vivek K Goyal, Member, IEEE
Abstract—Estimation problems with bounded, uniformly distributed noise arise naturally in reconstruction problems from overcomplete linear expansions with subtractive dithered quantization. We present a simple recursive algorithm for such bounded-noise estimation problems. The mean-square error (MSE) of the algorithm is “almost” $O(1/n^2)$, where $n$ is the number of samples. This rate is faster than the $O(1/n)$ MSE obtained by standard recursive least squares estimation and is optimal to within a constant factor.
Index Terms—Consistent reconstruction, dithered quantization, frames,
overcomplete representations, overdetermined linear equations.
I. INTRODUCTION
It is common to analyze systems including quantizers by modeling
each quantizer as a source of signal-independent additive white noise.
This model is precisely correct only when one uses subtractive dithered
quantization, but for simplicity it is often assumed to hold for coarse,
undithered quantization [1]–[3]. What can easily be lost in using this
model is that the distribution of the quantization noise can be important,
especially its boundedness.
This correspondence focuses on solving an overdetermined linear system of equations from quantized data. Assuming subtractive dither, this can be abstracted as the estimation of an unknown vector $x \in \mathbb{R}^d$ from measurements

$$y_n = a_n^T x + e_n, \qquad n = 1, 2, \ldots \qquad (1)$$

where each $a_n \in \mathbb{R}^d$ is a known vector and the $e_n$'s are independent and identically distributed (i.i.d.) random variables distributed uniformly on $[-\delta, \delta]$.¹ The maximum noise magnitude $\delta$ is half of the quantization step size and is known a priori. Estimation problems of this form may arise elsewhere as well. At issue are the quality of reconstruction that is possible and the efficient computation of good estimates.
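As a concrete illustration of how model (1) can arise, the following Python sketch (our own; the dimensions, step size, and random data are illustrative assumptions, not taken from this correspondence) simulates subtractive dithered quantization of the inner products $a_n^T x$ and verifies that the resulting errors are confined to $[-\delta, \delta]$.

import numpy as np

rng = np.random.default_rng(0)

d = 4          # dimension of the unknown vector (illustrative)
n = 1000       # number of measurements (illustrative)
delta = 0.05   # half the quantization step size (illustrative)

x = rng.standard_normal(d)          # unknown vector to be estimated
A = rng.standard_normal((n, d))     # known measurement vectors a_n as rows

# Subtractive dithered quantization of a_n^T x: a dither u_n uniform on
# [-delta, delta) is added before uniform quantization with step 2*delta
# and subtracted afterwards, so the error e_n = y_n - a_n^T x is i.i.d.
# uniform on [-delta, delta], as in model (1).
step = 2 * delta
u = rng.uniform(-delta, delta, size=n)
q = step * np.round((A @ x + u) / step)   # quantizer output
y = q - u                                 # subtract the dither

e = y - A @ x
print(e.min(), e.max())                   # errors stay within [-delta, delta]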
The classical method for estimating the unknown vector $x$ is least squares estimation, which attempts to find the $\hat{x}$ such that the $\ell_2$-norm of the residual sequence $y_n - a_n^T \hat{x}$ is minimized [4], [5]. Least squares estimators have been extensively studied and admit efficient implementations. Under mild assumptions, least squares estimates are guaranteed to converge to the true value as the number of samples grows to infinity.
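For reference, here is a minimal Python sketch of the standard recursive least squares update against which the proposed method is compared; the helper name rls_update, the initialization, and all numerical values are our own illustrative choices.

import numpy as np

def rls_update(x_hat, P, a, y):
    # One recursive least squares step for the model y = a^T x + e.
    Pa = P @ a
    k = Pa / (1.0 + a @ Pa)            # gain vector
    x_hat = x_hat + k * (y - a @ x_hat)
    P = P - np.outer(k, Pa)            # rank-one update of (A^T A)^{-1}
    return x_hat, P

# Illustrative run on data generated as in model (1).
rng = np.random.default_rng(1)
d, n, delta = 4, 5000, 0.05
x = rng.standard_normal(d)
x_hat = np.zeros(d)
P = 1e6 * np.eye(d)                    # large initial value: essentially no prior
for _ in range(n):
    a = rng.standard_normal(d)
    y = a @ x + rng.uniform(-delta, delta)
    x_hat, P = rls_update(x_hat, P, a, y)
print(np.sum((x_hat - x) ** 2))        # squared error decays roughly like 1/n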
However, least squares estimation may produce an estimate which not only differs from the maximum-likelihood (ML) and minimum mean-squared error estimates, but also is inconsistent with the bounds on the quantization noise. With the bound on $e_n$, each sample in (1) places certain hard constraints on the location of the unknown vector $x$.
Manuscript received July 28, 1998; revised June 22, 2000. This work was initiated at the University of California, Berkeley.
S. Rangan is with Flarion Technologies, Bedminster, NJ 07921 USA (e-mail:
rangan@flarion.com).
V. K Goyal is with Mathematics of Communications Research, Bell Labs,
Lucent Technologies, Murray Hill, NJ 07974 USA (e-mail: v.goyal@ieee.org).
Communicated by J. A. O’Sullivan, Associate Editor for Detection and Estimation.
Publisher Item Identifier S 0018-9448(01)00470-9.
¹All vectors are real column vectors. For a vector $v$, $v^T$ denotes its transpose and $\|v\|$ denotes its Euclidean norm. Expectation and probability are denoted with $E$ and $\Pr$, respectively.
Least squares estimates are not in general consistent with these constraints. Since the constraints are convex, least squares estimates can be improved by projecting onto a set of estimates that are consistent.
Recently, it has been suggested that this improvement can result in faster order of convergence [6]–[9]. Numerical tests showed that, after applying consistency constraints, estimates can attain an $O(1/n^2)$ mean-squared error (MSE). Classical least squares estimation, which does not, in general, satisfy the hard constraints, attains only an $O(1/n)$ MSE.
The behavior and implementation of consistent estimation methods are not fully understood. While the $O(1/n^2)$ MSE for consistent estimation has been observed in a number of simulations, the decay rate has only been proven for certain sets $\{a_n\}$. The most general conditions under which $O(1/n^2)$ MSE is provably attainable are not currently known.
In addition, consistent estimation is difficult to implement recursively. Given $n$ data points, finding a consistent estimate requires the solution of a linear program with $d$ variables and $2n$ constraints. No recursive implementation of this computation is presently known. The linear program must be recomputed with each new observation, and the size of the problem grows to infinity.
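To make the computational burden concrete, here is one way such a consistent estimate could be computed as a feasibility linear program in Python using scipy.optimize.linprog; the function name consistent_estimate and all numerical values are illustrative assumptions of ours, and the full program must be re-solved whenever a new measurement arrives.

import numpy as np
from scipy.optimize import linprog

def consistent_estimate(A, y, delta):
    # Return any x_hat satisfying |y_i - a_i^T x_hat| <= delta for all i.
    # The two one-sided constraints per sample are stacked into a single
    # feasibility LP with d variables, 2n constraints, and a zero objective.
    n, d = A.shape
    A_ub = np.vstack([A, -A])
    b_ub = np.concatenate([y + delta, delta - y])
    res = linprog(c=np.zeros(d), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * d)   # x is otherwise unconstrained
    if not res.success:
        raise RuntimeError("no consistent estimate found")
    return res.x

# Illustrative use with data generated as in model (1).
rng = np.random.default_rng(2)
d, n, delta = 4, 200, 0.05
x = rng.standard_normal(d)
A = rng.standard_normal((n, d))
y = A @ x + rng.uniform(-delta, delta, size=n)
print(np.sum((consistent_estimate(A, y, delta) - x) ** 2))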
This correspondence introduces a simple, recursively implementable estimator with a provable MSE decay of “almost” $O(1/n^2)$. The proposed estimator is similar to the consistent estimation method of [7], [9], except that the estimates are only guaranteed to be consistent with the most recent data point. The estimator can be realized with an extremely simple update rule which avoids any linear programming. Our main results show that, under suitable assumptions on the vectors $a_n$, the simple estimator “almost” achieves the conjectured $O(1/n^2)$ MSE.
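The precise update rule is developed later in the correspondence; purely to illustrate the idea described above, namely keeping the estimate consistent only with the most recent data point, the following Python sketch (function name and parameter values are our own) projects the running estimate onto the slab defined by the newest sample and leaves it unchanged otherwise.

import numpy as np

def project_onto_latest(x_hat, a, y, delta):
    # Project x_hat onto {x : |y - a^T x| <= delta}, the constraint set
    # induced by the most recent measurement (y, a).
    r = y - a @ x_hat                  # residual of the new sample
    if r > delta:
        x_hat = x_hat + (r - delta) * a / (a @ a)
    elif r < -delta:
        x_hat = x_hat + (r + delta) * a / (a @ a)
    return x_hat

# Illustrative run on data generated as in model (1).
rng = np.random.default_rng(3)
d, n, delta = 4, 5000, 0.05
x = rng.standard_normal(d)
x_hat = np.zeros(d)
for _ in range(n):
    a = rng.standard_normal(d)
    y = a @ x + rng.uniform(-delta, delta)
    x_hat = project_onto_latest(x_hat, a, y, delta)
print(np.sum((x_hat - x) ** 2))        # empirically decays much faster than 1/n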
We will also show that, under mild conditions on the a priori probability density of $x$, the MSE decay rate of any reconstruction algorithm is bounded below by $O(1/n^2)$. Thus the proposed estimator is optimal to within a constant factor. An $O(1/n^2)$ lower bound has also been shown in [10] under weaker assumptions that do not require uniformly distributed white noise. However, with the uniformly distributed white-noise model considered here, we will be able to derive a simple expression for the constant in this lower bound.
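A rough scalar heuristic (ours, not the argument used in this correspondence) indicates why no algorithm can beat a $1/n^2$ rate: take $a_n = 1$ for all $n$, so that consistency with the data confines $x$ to the interval
$$\bigl[\max_n y_n - \delta, \ \min_n y_n + \delta\bigr],$$
whose expected width is $4\delta/(n+1)$. Under a locally flat prior, $x$ is conditionally uniform on this interval, so the conditional MSE of any estimator is at least of order $\delta^2/n^2$.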
A. Summary of Contribution
As noted above, $O(1/n^2)$ MSE results have already appeared in the literature. This work has two distinguishing features: First, $O(1/n^2)$ MSE is obtained with an extremely simple algorithm that works recursively, i.e., uses each observation only once, with no increase in memory usage with time. Second, the requirement on the set of measurement “directions” $\{a_n\}$ is very mild (see Theorem 2). Until recently, the only published $O(1/n^2)$ MSE upper bounds for finite-dimensional signal spaces were derived from the analogous result for oversampled analog-to-digital (A/D) conversion of periodic band-limited signals [6], [7]. Thus, they were applicable to a particular family of sets $\{a_n\}$ known as Fourier frames [9]. A new approach reported in [11]—not based on consistency—attains $O(1/n^2)$ MSE more generally when the $a_n$'s are uniform samples from a closed curve in $\mathbb{R}^d$; still, Theorem 2 given here is more general.
The previous paragraph requires a note of moderation because the estimation problem in this correspondence differs somewhat from those in [6]–[11]. These previous works used measurements from an (undithered) uniform quantizer $y_n = Q(a_n^T x)$. The bounds are for the squared error in estimating a fixed vector $x$ while increasing the number of measurements $n$; constant factors in the bounds depend on $x$. Furthermore, when each $a_n$ has equal norm—as assumed in these works—signal vectors within a small ball centered at the origin