MAXIMUM A-POSTERIORI ESTIMATION IN LINEAR MODELS WITH A RANDOM GAUSSIAN MODEL MATRIX: A BAYESIAN-EM APPROACH

Ido Nevat*, Gareth W. Peters† and Jinhong Yuan*

* School of Electrical Engineering and Telecommunications, University of NSW, Australia
† School of Mathematics and Statistics, University of NSW, Australia
email: ido@student.unsw.edu.au, peterga@maths.unsw.edu.au, J.Yuan@unsw.edu.au

ABSTRACT

This paper considers the problem of Bayesian estimation of a Gaussian vector in a linear model with random Gaussian uncertainty in the mixing matrix. The maximum a-posteriori (MAP) estimator is derived for this model using the Bayesian Expectation-Maximization (BEM) algorithm. It is demonstrated that the solution forms an elegant and simple iteration which can be easily implemented. Finally, the estimator developed is considered in the context of near-Gaussian digitally modulated signals under channel uncertainty, where it is shown that the MAP estimator outperforms the standard linear MMSE estimator in terms of mean square error (MSE) and bit error rate (BER).

Index Terms— MAP estimation, Bayesian EM

1. INTRODUCTION

A generic problem in many different fields is the estimation of a random Gaussian vector x in the linear model

y = Gx + w,  (1)

where G is a linear transformation matrix and w is a Gaussian noise vector. Three standard methods for estimating x in this Bayesian framework are the minimum mean square error (MMSE), the linear minimum mean square error (LMMSE) and the maximum a-posteriori (MAP) estimators. The first two approaches are based on a quadratic cost function, whereas the third minimizes a hit-or-miss risk function. From a detection point of view, the MAP method is also related to the minimum error probability criterion.

Most of the literature concentrates on the simplest case, in which it is assumed that the model matrix G is completely deterministic and specified. In this setting, the MMSE, LMMSE and MAP estimators coincide and have a simple closed-form solution. The novelty of this paper lies in the specification of the transformation matrix: we remove the assumption, made in much of the literature, that G is known deterministically. Instead, we treat G as a random matrix and assume only weak statistical properties of this matrix, namely that its elements are i.i.d. Gaussian distributed with known second-order statistics. A typical scenario in which G is random is estimation under uncertainty conditions. For example, in communication systems this setting is appropriate when only partial channel state information is available. In this case, the MMSE, LMMSE and MAP approaches lead to different estimators. In fact, we will show that the MMSE solution leads to an intractable integration, whereas the MAP estimator can be found efficiently. A possible application is digital communication systems employing near-Gaussian constellation sets. It is well known that in order to achieve capacity in linear Gaussian channels, powerful coding schemes must be combined with shaping methods which result in near-Gaussian symbols [1, 2]. Two practical schemes that achieve such shaping are presented in [3] and [4] ("shell mapping"). In [5], this problem was tackled and the MAP solution was derived by transforming the problem from a multi-dimensional into a one-dimensional optimization program. In this process the objective function becomes convex, and this can be exploited in the solution technique.
The drawback of that method is that one must determine the eigenvalues of a potentially large-rank matrix, which can lead to computational issues as the complexity of the system under consideration grows, due to the curse of dimensionality. The technique proposed in this paper bypasses these issues and therefore scales more effectively with system complexity. Using the BEM procedure, we derive a solution in the form of an iterative procedure that can be easily implemented and does not involve any matrix inversion.

This paper is organized as follows: In Section 2, we formulate the problem and introduce the MMSE, LMMSE and MAP estimators. In Section 3 we provide a short review of the Bayesian EM algorithm. Section 4 derives the MAP estimator utilizing the BEM approach. Section 5 summarizes non-uniform constellations and near-optimal detection. Simulation results in this new setting are presented in Section 6.

2. PROBLEM FORMULATION

Consider the problem of estimating a random vector x in the linear model

y = Gx + w,  (2)

where G is an N × K Gaussian matrix with known mean H and element variance σ_g² > 0, x is a zero-mean Gaussian vector with independent elements of variance σ_x² > 0, and w is a zero-mean Gaussian vector with independent elements of variance σ_w² > 0. In addition, x, G and w are statistically independent. It is desired to find an estimator x̂(y) that is a function of the observation vector y and the given statistics, and that is optimal in some sense. Under the Bayesian framework, a typical procedure for selecting x̂(y) is to define a nonnegative cost function C(x, x̂(y)) and to minimize its expected value [6]. The most common cost function is the quadratic error (see Fig. 1), defined as

C(x, x̂(y)) = ‖x − x̂(y)‖².  (3)
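To make the setup concrete, the following minimal NumPy sketch simulates one realization of model (2) and computes two baseline estimators: the LMMSE estimator built from the second-order statistics above, and the classical closed-form estimator obtained when G is known exactly (the case in which MMSE, LMMSE and MAP coincide). The dimensions and variance values are illustrative assumptions, and the LMMSE expression is derived here from the stated statistics, using E[G A Gᵀ] = H A Hᵀ + σ_g² tr(A) I for i.i.d. entries, rather than quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model dimensions and known second-order statistics (illustrative values).
N, K = 8, 4
sigma_x2, sigma_g2, sigma_w2 = 1.0, 0.1, 0.01

# Known mean H of the random model matrix G (drawn once here for illustration).
H = rng.standard_normal((N, K))

# One realization of y = G x + w, with G = H + i.i.d. Gaussian perturbation.
x = np.sqrt(sigma_x2) * rng.standard_normal(K)
G = H + np.sqrt(sigma_g2) * rng.standard_normal((N, K))
w = np.sqrt(sigma_w2) * rng.standard_normal(N)
y = G @ x + w

# LMMSE estimator x_hat = C_xy C_yy^{-1} y. For this model,
# C_xy = sigma_x2 * H^T and
# C_yy = sigma_x2 * H H^T + (K*sigma_x2*sigma_g2 + sigma_w2) * I,
# since E[G A G^T] = H A H^T + sigma_g2 * tr(A) * I for i.i.d. entries.
C_yy = sigma_x2 * H @ H.T + (K * sigma_x2 * sigma_g2 + sigma_w2) * np.eye(N)
x_lmmse = sigma_x2 * H.T @ np.linalg.solve(C_yy, y)

# Reference: the classical estimator when G is known exactly, in which case
# the MMSE, LMMSE and MAP estimators coincide.
x_known_g = sigma_x2 * G.T @ np.linalg.solve(
    sigma_x2 * G @ G.T + sigma_w2 * np.eye(N), y)

print("true x:        ", x)
print("LMMSE (mean H):", x_lmmse)
print("known-G MMSE:  ", x_known_g)
```

The gap between the two printed estimates illustrates the cost of matrix uncertainty that the MAP-BEM estimator derived in Section 4 is designed to reduce; the BEM iteration itself is not reproduced in this sketch.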