MASSIVELY PARALLEL PROCESSING APPROACH TO FRACTAL IMAGE COMPRESSION WITH NEAR-OPTIMAL COEFFICIENT QUANTIZATION

Paolo Palazzari
ENEA - HPCN Project - C.R. Casaccia, Via Anguillarese, 301 - 00060 S. Maria di Galeria (Rome)
E-mail: palazzari@casaccia.enea.it

Moreno Coli, Guglielmo Lulli
University "La Sapienza", Electronic Engineering Department, Via Eudossiana, 18 - 00184 Rome (Italy)
E-mail: coli@die.ing.uniroma1.it

ABSTRACT

In recent years fractal image compression techniques (IFS) have gained interest because of their ability to achieve high compression ratios while maintaining very good quality of the reconstructed image. Their main drawback is the very long computing time needed to determine the compressed code. In this work, after a brief description of IFS theory, we introduce the coefficient quantization problem and present two algorithms for its solution: the first is based on Simulated Annealing, while the second is a fast iterative algorithm. We discuss parallel IFS implementations at different levels of granularity and show that Massively Parallel Processing on SIMD machines is the best way to exploit all the large-grain parallelism offered by the problem. The results we present were obtained by implementing the proposed algorithms for IFS compression and coefficient quantization on the MPP APE100/Quadrics machine.

Keywords: IFS coding, Simulated Annealing, Quantization, Iterative optimization algorithm, Massively Parallel Processing

1. INTRODUCTION

Fractal image compression techniques were introduced by Barnsley [1]. The image is represented through a piecewise linear contractive function F and is reconstructed by iteratively applying F to a randomly chosen starting image: this technique is called an Iterated Function System (IFS). Compression is achieved by exploiting the self-similarities in the image as much as possible.
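To make the decoding iteration concrete, the following toy sketch (not from the paper) shows a 1-D analogue of fractal decoding: each length-n range block is rebuilt from a decimated length-2n domain block through an affine map s*D + o, and the whole update is iterated from an arbitrary starting signal. The code format, the names s and o, and the pairwise-averaging decimation are illustrative assumptions; because |s| < 1, the overall map is contractive and the iteration converges to the same fixed point regardless of the start.

```python
def decode(code, signal_len, n, iters=30):
    """Toy 1-D fractal decoder (illustrative sketch, not the paper's code).

    code: one tuple (d_start, s, o) per range block, where d_start is the
    offset of a length-2n domain block and s, o are the affine coefficients.
    """
    x = [0.0] * signal_len                 # arbitrary starting "image"
    for _ in range(iters):
        y = [0.0] * signal_len
        for i, (d_start, s, o) in enumerate(code):
            for j in range(n):
                # decimate the 2n-sample domain block by pairwise averaging
                d = 0.5 * (x[d_start + 2 * j] + x[d_start + 2 * j + 1])
                y[i * n + j] = s * d + o   # contractive affine map, |s| < 1
        x = y
    return x
```

Running a few more iterations changes the result by at most a factor |s| per step, which is the contraction property the reconstruction relies on.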
IFS has been widely used (see, for example, [1], [7], [12], [13]) because of the high compression ratios (CR) achievable. To obtain a very high CR, the IFS code requires that the coefficients representing the contractive function be represented by very few bits; this causes a degradation of the image, which can be strongly reduced through a careful choice of quantization functions. To our knowledge, quantization of IFS coefficients through non-linear functions has never been addressed in the literature. In this work we present two algorithms to determine near-optimal non-linear quantization functions: the first is based on the time-consuming Simulated Annealing optimization algorithm [9] and is used as a reference to test the capabilities of the second, a fast iterative quantization algorithm which we developed on the basis of the vector quantization techniques presented in [11] (the LBG algorithm). IFS coding has many interesting features: high CR, fast decoding (faster than JPEG) and few blocking artifacts (fewer than JPEG at the same CR, see [3], [14]). The main drawback of IFS is the very long computing time needed in the compression phase (i.e. for the solution of the so-called IFS inverse problem): an NxN image is partitioned into nxn blocks (called range blocks R_i) and, for each block R_i, the contractive function w_i and the 2nx2n domain block D_k which minimize d(R_i, w_i(D_k)) are searched for. For example, the determination of the IFS code for a 512x512 image with 8x8 range blocks requires about 325x10^9 floating point operations (flops): such a huge number of flops clearly explains the complexity of the exact solution of the IFS inverse problem. In order to reduce the number of computations required, some sub-optimal techniques have been proposed. For example, in [12] and [5] the search is not performed over the whole search space (the domain pool, DP) but over a small subset of DP (determined through a neighbourhood basis).
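The exhaustive search behind the inverse problem can be sketched as follows, again in a 1-D analogue not taken from the paper: for every length-n range block, every admissible length-2n domain block is tried, the affine coefficients s and o are obtained by a closed-form least-squares fit, and the pair with minimum squared error is kept. The decimation scheme, the clamp of s to keep the map contractive, and the code layout are illustrative assumptions; the nested loops over range blocks and the domain pool are what makes the flop count grow so quickly.

```python
def encode(signal, n):
    """Toy exhaustive 1-D fractal encoder (illustrative sketch): for every
    length-n range block, search every length-2n domain block for the affine
    map s*D + o minimising the squared error."""
    code = []
    N = len(signal)
    for r0 in range(0, N, n):
        R = signal[r0:r0 + n]
        best = None
        for d0 in range(0, N - 2 * n + 1):
            # decimate the domain block by pairwise averaging
            D = [0.5 * (signal[d0 + 2 * j] + signal[d0 + 2 * j + 1])
                 for j in range(n)]
            # closed-form least-squares fit for s and o
            md, mr = sum(D) / n, sum(R) / n
            var = sum((d - md) ** 2 for d in D)
            s = (sum((D[j] - md) * (R[j] - mr) for j in range(n)) / var
                 if var > 0 else 0.0)
            s = max(-0.9, min(0.9, s))     # keep the map contractive
            o = mr - s * md
            err = sum((s * D[j] + o - R[j]) ** 2 for j in range(n))
            if best is None or err < best[0]:
                best = (err, d0, s, o)
        code.append(best[1:])
    return code
```

For a 2-D NxN image the outer loop runs over (N/n)^2 range blocks and the inner one over roughly (N-2n)^2 domain positions (times the isometries of each block), which is where the ~325x10^9 flops quoted above come from.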
An alternative way to reduce DP is classification [7], [8]: in this case the dimensions of DP are reduced by considering only elements belonging to a certain class. A completely different approach to speeding up the coding phase is nearest-neighbour search [10], [16], based on interpreting the nxn sub-images to be encoded as vectors of an n^2-dimensional Euclidean space and on a logarithmic search procedure defined on this space; in this case it is not possible to take into account the effect of quantization on the coefficients. Speedup procedures can also use the fast convolution method suggested in [17]. As we showed in [15], massively parallel processing is a feasible and practical way to afford the exact solution of