Journal of ELECTRICAL ENGINEERING, VOL. 53, NO. 9-10, 2002, 261–266

FINITE STATE VECTOR QUANTIZATION OF IMAGE BY NEURAL NETWORKS

Rastislav Labovský * — Ján Mihalík **

The presented paper deals with finite state vector quantization of images using neural networks. The finite state vector quantization system is based on the idea of dynamically selecting a sub-codebook from a super-codebook for each input vector. As a result, the high performance of the super-codebook (low mean square quantization error) is exploited while only the low bit rate needed to code the sub-codebook is used. The super-codebook was designed by a neural network clustering algorithm. We have implemented the selection of the sub-codebook from the super-codebook by a non-linear neural network vector predictor. The vector predictor was realised as a three-layer perceptron with a hidden layer, sigmoid activations and bias units, and its optimization is based on the error back-propagation learning algorithm. We have designed two finite state vector quantizer systems, the first with a fixed length of codewords and the second with a variable length of codewords. Finally, we applied the systems to coding the image Lena of size 512 × 512 pels at different bit rates, using one-dimensional and two-dimensional neural network vector prediction of states and a vector quantizer based on neural networks.

Keywords: segmentation, vector quantization, vector prediction, neural networks

1 INTRODUCTION

The operation of the finite state vector quantizer using neural networks (FSVQNN) is based on the idea of selecting a small codebook (sub-codebook) from a larger codebook (super-codebook) continuously for each input vector [1]. The selection is made according to the actual state of the FSVQNN, where the actual state is determined from one or more of the last quantized outputs. The term state corresponds to a part of the vector space in which the quantization vectors are situated.
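To illustrate the idea, the following minimal sketch (hypothetical codebook values and state, not the authors' trained codebooks) quantizes an input vector by nearest-neighbour search restricted to a state-selected sub-codebook, so that only the shorter sub-codebook index needs to be transmitted:

```python
import math

def nearest(vec, codebook):
    """Return (index, codeword) of the codeword with minimum squared error."""
    best = min(range(len(codebook)),
               key=lambda i: sum((v - c) ** 2 for v, c in zip(vec, codebook[i])))
    return best, codebook[best]

def fsvq_quantize(vec, super_codebook, state_indices):
    """Quantize `vec` using only the sub-codebook selected by the current state.

    `state_indices` lists which super-codebook entries form the sub-codebook.
    Returns (super-codebook index, reconstruction, sub-codebook index to code).
    """
    sub_codebook = [super_codebook[i] for i in state_indices]
    j, codeword = nearest(vec, sub_codebook)
    return state_indices[j], codeword, j

# Illustrative 2-D super-codebook of 8 codewords (hypothetical values).
super_cb = [(0, 0), (0, 4), (4, 0), (4, 4), (8, 0), (8, 4), (12, 0), (12, 4)]
state = [2, 3, 4, 5]  # sub-codebook of 4 entries chosen by the state predictor
idx, cw, sub_idx = fsvq_quantize((5.0, 3.0), super_cb, state)

# Bit-rate gain: only the sub-codebook index is sent.
bits_full = math.log2(len(super_cb))  # 3 bits for the full super-codebook
bits_sub = math.log2(len(state))      # 2 bits for the sub-codebook
```

Here the quality of the large super-codebook is kept (the reconstruction is a genuine super-codebook entry), while the transmitted index costs only log2 of the sub-codebook size.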
So the FSVQNN is able to reduce the bit rate (the number of bits needed to code the sub-codebook) while still exploiting the quality of the super-codebook.

The structure of the FSVQNN consists of two main blocks, as seen in Fig. 1. The first is the vector quantizer based on neural networks. Its super-codebook is known as the Kohonen self-organizing feature map and is designed by the neural network clustering (NNC) algorithm [6]. The neural network vector quantizer (NNVQ) uses the sub-codebook defined by the actual state to quantize the input vector at the present time. The actual state of the FSVQNN is determined from one or more previously quantized vectors in the vector predictor block, which is realized by a non-linear three-layer neural network. The prediction of the state is possible owing to the correlation between neighbouring blocks of pels. In practice, this means predicting a probable position of the input vector in the vector space from the recent history.

Fig. 1. Block scheme of FSVQNN (super-codebook, NN vector quantizer, NN vector predictor, sub-codebook).

The FSVQNN design consists of designing the neural network vector predictor (NNVP) based on a multilayer perceptron, designing the super-codebook on the basis of the Kohonen neural network, and finally designing the selection technique that determines the sub-codebook inside the super-codebook.

We have used two modifications of vector prediction in our experiments: one-dimensional and two-dimensional vector prediction, depending on the manner of segmentation of the image into vector sequences. The optimization algorithm of the vector predictor of states is the same as that of the non-linear neural network vector predictor [8], known as the error back-propagation learning algorithm [2], [3].

* EuroTel Bratislava, a.s., Prev. VRS Str. a Vých. Slovensko, Závodská cesta 4, 010 01 Žilina, Slovakia, labovskyr@eurotel.sk; ** Department of Electronics and Multimedia Communications, FEI TU Košice, Park Komenského 13, 041 20 Košice, Slovakia, mihalik@tuke.sk

ISSN 1335-3632 © 2002 FEI STU
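The state predictor described above is a three-layer perceptron with sigmoid hidden units and biases, trained by error back-propagation. The following is a minimal, self-contained sketch of such a network with hypothetical layer sizes and toy training pairs (previous vector → next vector); it is an illustration of the training principle, not the authors' configuration:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MLP:
    """Three-layer perceptron (input, sigmoid hidden layer, linear output)
    with bias units, trained by error back-propagation (gradient descent)."""

    def __init__(self, n_in, n_hid, n_out, seed=0):
        rnd = random.Random(seed)
        # +1 weight per row for the bias input.
        self.w1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(n_hid)]
        self.w2 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_hid + 1)] for _ in range(n_out)]

    def forward(self, x):
        xb = list(x) + [1.0]  # append bias input
        h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in self.w1]
        hb = h + [1.0]
        y = [sum(w * v for w, v in zip(row, hb)) for row in self.w2]
        return h, y

    def train_step(self, x, target, lr=0.2):
        h, y = self.forward(x)
        xb, hb = list(x) + [1.0], h + [1.0]
        dy = [yi - ti for yi, ti in zip(y, target)]  # output-layer error
        # Back-propagate through the sigmoid: derivative is h * (1 - h).
        dh = [h[j] * (1 - h[j]) * sum(dy[k] * self.w2[k][j] for k in range(len(dy)))
              for j in range(len(h))]
        for k, row in enumerate(self.w2):
            for j in range(len(row)):
                row[j] -= lr * dy[k] * hb[j]
        for j, row in enumerate(self.w1):
            for i in range(len(row)):
                row[i] -= lr * dh[j] * xb[i]
        return sum(d * d for d in dy)  # squared prediction error

# Toy use: predict the "next" 2-D vector from the previous one.
samples = [((0.0, 0.0), (0.1, 0.1)), ((0.5, 0.5), (0.6, 0.6)), ((1.0, 1.0), (0.9, 0.9))]
net = MLP(2, 4, 2)
errs = [sum(net.train_step(x, t) for x, t in samples) for _ in range(500)]
```

In the FSVQNN the predictor's output would not be used directly as the reconstruction; it only points to a region of the vector space, from which the sub-codebook (the nearest super-codebook entries) is selected.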