IMPROVEMENT OF HIDDEN MARKOV MODEL EVALUATION OF THE MOBILE SATELLITE CHANNEL BY RESORTING TO A TRANSITION LOCALISATION METHOD

C. Alasseur, L. Husson
SUPELEC, Service Radioélectricité et Electronique, 3 rue Joliot Curie, Plateau de Moulon, 91192 Gif-sur-Yvette, France
phone: +33 1 69851454, fax: +33 1 69851469, email: clemence.alasseur@supelec.fr, lionel.husson@supelec.fr

ABSTRACT

The mobile satellite channel has underlying Markovian properties and can therefore be represented by a Hidden Markov Model (HMM). A challenging problem consists in estimating the model parameters from experimental data, especially when these parameters are not easily identifiable. In such cases, classification methods like k-means or scalable clustering, which are considered in this paper, show poor results when applied to the channel signal directly. We show that detecting the change-points of the signal, i.e. the transitions between the model states, in a preliminary step improves the estimation of the model parameters. We thus propose a model estimation method including change-point detection that enables a better modelling of the satellite channel.

1. INTRODUCTION

Because the mobility of the receiver and/or of the satellite constellation results in large variations of the transmission conditions, modelling the satellite channel is a difficult issue. No a priori model, such as [1], can efficiently represent the channel in every context, so a method is needed that models the channel without prior assumptions and can therefore produce a model in any situation. Markovian models are characterized by their number of states, the corresponding state distributions and the transition probabilities between states. At each time, the signal is considered to be the realization of one particular state.
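Such a Markovian channel model (a number of states, per-state distributions, and transition probabilities between states) can be illustrated with a minimal simulation sketch, assuming a two-state chain with Gaussian emissions standing in for the channel states; all names and parameter values below are our own illustrative choices, not taken from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state channel model: e.g. a "good" (line-of-sight) state and a
# "bad" (shadowed) state, with illustrative transition probabilities.
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])
means, stds = [0.0, -10.0], [1.0, 3.0]  # per-state Gaussian emission parameters

def simulate_hmm(n_samples, A, means, stds, rng):
    """At each time, the signal is the realization of one particular state."""
    states = np.empty(n_samples, dtype=int)
    obs = np.empty(n_samples)
    s = 0
    for t in range(n_samples):
        states[t] = s
        obs[t] = rng.normal(means[s], stds[s])
        s = rng.choice(2, p=A[s])  # draw the next state from row s of A
    return states, obs

states, obs = simulate_hmm(1000, A, means, stds, rng)
```

Each observation is drawn from the distribution of the current state, and the next state is drawn from the current row of the transition matrix; this is the generative assumption that the estimation methods discussed below have to invert.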
In this paper, we propose a method that makes it possible to evaluate the model of the satellite channel signal when no knowledge about the signal is available before the beginning of the estimation process, except the number of states of the model. Because one observation of the satellite channel does not generally reveal the state the model is currently in, we need to consider the signal by blocks of measurements, which we refer to as the analysis window. One histogram is evaluated on every analysis window and linked to one state distribution of the HMM. The method used to estimate the HMM is the following: the histograms corresponding to the analysis windows are projected into a reduced subspace and their projections are then classified [2, 3]. The parameters of the HMM can therefore be estimated easily, because the projection step has reduced the classification complexity by limiting the number of dimensions. The main classification difficulty is to find the correct length of the analysis window through which the signal is considered. In this paper, two classification methods are compared, for which the size of the analysis window is either fixed or variable. When the length of the analysis window is variable, the sampling of the channel signal follows the transitions between the states of the signal; otherwise the channel signal is sampled at every fixed analysis window length. The variable analysis window lengths are obtained by detecting the ruptures between HMM states with a Markov chain Monte Carlo (MCMC) method.

We resort to two classification methods, which are detailed in the second part of this paper. The third part presents the rupture estimation method. The combination of the classification with the transition estimation for the HMM estimation is explained in part 4, followed by the experimental results.

2. CLASSIFICATION METHODS

The classification methods operate on projections x of the histograms calculated over analysis windows of the signal.
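The fixed-window variant of the estimation pipeline outlined above (histograms over analysis windows, projection into a reduced subspace, classification of the projections) can be sketched as follows; plain PCA and k-means stand in for the projection and classification steps, and all function and variable names are our own, so this is an illustration rather than the paper's exact procedure:

```python
import numpy as np

def window_histograms(signal, win_len, n_bins=20):
    """One histogram per fixed-length analysis window of the signal."""
    n_win = len(signal) // win_len
    edges = np.linspace(signal.min(), signal.max(), n_bins + 1)
    hists = np.empty((n_win, n_bins))
    for i in range(n_win):
        block = signal[i * win_len:(i + 1) * win_len]
        hists[i], _ = np.histogram(block, bins=edges, density=True)
    return hists

def pca_project(X, dim=2):
    """Project the histograms into a reduced subspace (plain PCA via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dim].T

def kmeans(X, k=2, n_iter=50):
    """Classify the projections: assign each point to its nearest centre."""
    # Simple deterministic initialisation: centres spread across the data.
    centres = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        centres = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centres

# Synthetic two-state signal: each window lies entirely within one state.
rng = np.random.default_rng(1)
sig = np.concatenate([rng.normal(0, 1, 500), rng.normal(-10, 3, 500)])
proj = pca_project(window_histograms(sig, win_len=50))
labels, _ = kmeans(proj, k=2)
```

With well-separated state distributions, the window histograms form distinct clusters in the projected subspace and the classification step recovers one cluster per HMM state; the harder cases motivate the change-point detection introduced below.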
The projections corresponding to one particular state of the model are observed to cluster around a centre position y. It is these clusters that the classification methods aim to identify.

2.1 K-means with the Mahalanobis distance

In the k-means method, the centres of the clusters are evaluated by successive iterations, and the projected points are classified to the nearest cluster in terms of the Euclidean distance between them and the centres of the clusters. After several iterations, this preliminary rough classification converges and gives quite accurate centres m_i and covariance matrices C_i for each distribution. The projected points x are then classified once again to the nearest cluster, but this time in terms of the Mahalanobis distance D_i, defined as follows: D_i^2 = (x − m_i)' C_i^{-1} (x − m_i). One advantage of the Mahalanobis distance is that it takes the correlation of the projected points into account and makes curved decision boundaries possible (unlike the linear boundaries of the Euclidean distance).

2.2 Scalable Classification [4]

Considering a single cluster C of data, the contribution of each datum x to the centre y of the cluster, denoted by Py(x), is expressed by: