IVO R. DRAGANOV, ANTOANETA A. POPOVA, NIKOLAY N. NESHOV
Radiocommunications and Videotechnologies Department
Technical University of Sofia
8 Kliment Ohridski Blvd., 1000 Sofia
BULGARIA
idraganov@tu-sofia.bg, antoaneta.popova@tu-sofia.bg, nicknesh@abv.bg

Abstract: In this paper an analysis is presented concerning the asymptotic state of the one-dimensional self-organizing map (SOM) with a finite grid for the case of normally distributed input points first passed through a non-linear channel. The SOM distortion measure is analyzed and its value is found approximately. The results obtained are considered useful in a wide variety of practical cases where fine tuning of the SOM is needed.

Key-Words: Self-Organizing Map, Normal Point Distribution, Non-Linear Channel, Distortion Measure

1 Introduction
It is a well-known fact that the area allocated for storing the most important feature set inside a self-organizing map (SOM) is proportional to the frequency of occurrence of that very same feature in the observations [1]. As the SOM structure tends to become very complex in most of its real-world applications, the magnification factor is often used to describe the heaping of feature vectors. It is simply the inverse of the point density around each neuron representing a cluster. So far, the point density of the linear map has been investigated in the presence of a very large number of codebook vectors over a finite area [2], [3]. It was revealed that the asymptotic point density is proportional to the probability of occurrence of a given feature vector raised to an exponent that depends on the number of neighbors, including the winning neuron, and on a scalar factor. In any case, the width of the neighborhood function may vary widely during the training process, starting with large values and ending with the zero-order topology case, in which no neighbors except the winner are present. This boundary case is undesirable since the learning process no longer maintains the order of the codebook vectors. The two aspects to be balanced are the approximation accuracy of the probability of occurrence of a feature and the minimum stability of ordering, which demands more neighbor interactions. If there are no neighbors around the winner, the map reduces to simple scalar quantization. Then the exponent of the asymptotic point-density function decreases, according to [3], below 1/3. Raising this exponent incrementally by a trial-and-error approach seems a good solution, but the following tendencies should be considered. If we try to approach the Bayesian classifier, i.e. to find the optimal classification border, and the density functions of adjacent clusters are close to each other, the latter can be replaced with any other pair of monotonic functions of the densities. In such a case the practical SOM application lends itself to simplification. Another important property is that when the feature dimensionality is increased to the order of hundreds of components per vector, the exponent is close to 1 [3]. Similar research on the change of this exponent is presented in [4], where the neighborhood function is a Gaussian kernel and its normalized second moment is the independent variable. The resulting range for the exponent in that case is from 1/3 to 2/3. Analogous results are presented in [5]. A recent study [6] investigates the influence of normally distributed input on the asymptotic state of a finite one-dimensional SOM and its distortion measure.
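As a rough illustration of the setting discussed above, the following minimal sketch (not taken from this paper; the grid size, learning schedules, and the cubic channel coefficient are assumptions introduced here) trains a one-dimensional SOM on normally distributed samples passed through a hypothetical third-order non-linear channel, then empirically evaluates the SOM distortion measure and the point-density exponent in q(x) ~ p(x)^alpha that the cited works analyze.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 50                       # codebook size (finite grid) -- assumption
T = 100_000                  # number of training steps -- assumption
a = 0.1                      # assumed third-order channel coefficient

def channel(s, a=a):
    """Hypothetical memoryless non-linear channel: cubic distortion of the source."""
    return s + a * s**3

w = np.sort(rng.uniform(-1.0, 1.0, M))        # ordered initial codebook
grid = np.arange(M)

for t in range(T):
    x = channel(rng.normal())                 # normally distributed source sample
    c = np.argmin((w - x) ** 2)               # winner (best matching unit)
    lr = 0.5 * (0.01 / 0.5) ** (t / T)        # decaying learning rate
    sigma = max(M / 4.0 * (1.0 - t / T), 0.5) # shrinking neighbourhood width
    h = np.exp(-0.5 * ((grid - c) / sigma) ** 2)  # Gaussian neighbourhood function
    w += lr * h * (x - w)                     # SOM update rule

# Empirical SOM distortion measure E[ sum_j h(c(x), j) * (x - w_j)^2 ],
# evaluated with the final (frozen) neighbourhood width.
X = channel(rng.normal(size=20_000))
D = 0.0
for x in X:
    c = np.argmin((w - x) ** 2)
    h = np.exp(-0.5 * ((grid - c) / 0.5) ** 2)
    D += np.sum(h * (x - w) ** 2)
D /= len(X)
print("approximate distortion measure:", D)

# Rough estimate of the exponent alpha in q(x) ~ p(x)**alpha:
# codebook point density q from the local spacing of the ordered weights,
# input density p from a histogram of channel-output samples.
w_sorted = np.sort(w)
q = 1.0 / np.gradient(w_sorted)
hist, edges = np.histogram(X, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
p = np.interp(w_sorted, centers, hist)
mask = p > 1e-3                               # skip the sparse tails
alpha = np.polyfit(np.log(p[mask]), np.log(q[mask]), 1)[0]
print("estimated point-density exponent:", alpha)
```

Under such a setup the fitted exponent is expected to fall roughly between the scalar-quantization value of 1/3 and the 2/3 reported for Gaussian neighborhood kernels; the exact figure depends on the assumed schedules and grid size.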
In [7] a typical practical challenge is given, concerning the location of the representative vectors of a 256-QAM system operating in the presence of additive white Gaussian noise (AWGN) and third-order nonlinearity, which can be solved by applying the approach presented here. In part 2 such an analysis is presented and in part 3 some computational results are given. In part 4 a conclusion is made.

2 Analysis
Let a one-dimensional feature space of x be considered. For our analysis to be correct, the following assumptions should hold: the number of points (feature vectors) must be large enough (e.g. by the criteria given in [1]) and they must be stochastic variables, so that their differential probability for each cluster they fall into, i.e. the