Abstract—The scientific community has invested a great deal of effort in the discrete wavelet transform over the last few decades. The discrete wavelet transform (DWT), combined with vector quantization, has proved to be a very useful tool for image compression. However, the DWT is a very computationally intensive process, requiring innovative and computationally efficient methods to obtain the compressed image. Concurrent transformation of the image can be an important solution to this problem. This paper proposes a model of concurrent DWT for image compression. Additionally, formal verification of the model has been performed, using the Symbolic Model Verifier (SMV) as the formal verification tool: the system has been modeled in SMV and several properties have been verified formally.

Keywords—Computation Tree Logic, Discrete Wavelet Transform, Formal Verification, Image Compression, Symbolic Model Verifier.

I. INTRODUCTION

The research in compression techniques has stemmed from the ever increasing need for efficient data transmission, storage and utilization of hardware resources. Uncompressed image data require considerable storage capacity and transmission bandwidth. Despite rapid progress in mass storage density, processor speeds and digital communication system performance, demand for data storage capacity and data transmission bandwidth continues to outstrip the capabilities of available technologies. The recent growth of data-intensive multimedia-based applications has not only sustained the need for more efficient ways to encode signals and images but has made compression of such signals central to signal storage and digital communication technology. Compressing an image is significantly different from compressing raw binary data. Of course, general-purpose compression programs can be used to compress images, but the result is less than optimal.
This is because images have certain statistical properties which can be exploited by encoders specifically designed for them. Also, some of the finer details in the image can be sacrificed for the sake of saving a little more bandwidth or storage space. Lossless compression involves compressing data which, when decompressed, is an exact replica of the original data. This is the case when binary data such as executables and documents are compressed; they need to be reproduced exactly when decompressed. On the other hand, images need not be reproduced exactly. An approximation of the original image is enough for most purposes, as long as the error between the original and the compressed image is tolerable. The neighboring pixels of most images are highly correlated and therefore, from a certain point of view, hold redundant information [1]. The foremost task then is to find a less correlated representation of the image. Image compression is actually the reduction of the amount of this redundant data (bits) without degrading the quality of the image to an unacceptable level [2] [3] [4]. There are two basic components of image compression: redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source (the image), while irrelevancy reduction omits parts of the signal that are not noticed by the signal receiver, i.e., the Human Visual System (HVS) [5], which presents some tolerance to distortion depending on the image content and viewing conditions.

(Manuscript received November 15, 2007. Kamrul Hasan Talukder is a graduate student in the Department of Information Engineering of the Graduate School of Engineering, Hiroshima University, Japan (e-mail: khtalukder@hiroshima-u.ac.jp). Koichi Harada is a Professor in the Department of Information Engineering of the Graduate School of Engineering, Hiroshima University, Japan (e-mail: hrd@hiroshima-u.ac.jp).)
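The claim that neighboring pixels are highly correlated can be checked directly. The following sketch (my own minimal illustration, not part of the paper; it assumes NumPy and uses a synthetic smooth-gradient image in place of a natural photograph) estimates the Pearson correlation between horizontally adjacent pixels:

```python
import numpy as np

def adjacent_pixel_correlation(img):
    """Pearson correlation between each pixel and its right-hand neighbor."""
    left = img[:, :-1].ravel().astype(float)
    right = img[:, 1:].ravel().astype(float)
    return np.corrcoef(left, right)[0, 1]

# Illustrative synthetic image: a smooth horizontal gradient with mild noise,
# standing in for a natural image, which behaves similarly in smooth regions.
rng = np.random.default_rng(0)
img = np.linspace(0, 255, 256).reshape(1, -1).repeat(256, axis=0)
img = img + rng.normal(0, 5, img.shape)

r = adjacent_pixel_correlation(img)
print(f"adjacent-pixel correlation: {r:.3f}")  # close to 1 for smooth images
```

A correlation close to 1 is exactly the redundancy that a decorrelating transform such as the DWT is designed to remove.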
Consequently, pixels need not always be regenerated exactly as originated, and the HVS will not detect the difference between the original and reproduced images. The current standards for compression of still images (e.g., JPEG) use the Discrete Cosine Transform (DCT), which represents an image as a superposition of cosine functions with different discrete frequencies [6]. The DCT can be regarded as a discrete-time version of the Fourier cosine series. It is a close relative of the Discrete Fourier Transform (DFT), a technique for converting a signal into elementary frequency components. Thus, the DCT can be computed with a Fast Fourier Transform (FFT)-like algorithm of complexity O(n log2 n). More recently, the wavelet transform has emerged as a cutting-edge technology within the field of image analysis. Wavelet transformations have a wide variety of applications in computer graphics, including radiosity [7], multiresolution painting [8], curve design [9], mesh optimization [10], volume visualization [11], image searching [12] and one of the first applications in computer graphics,

[A Scheme of Model Verification of the Concurrent Discrete Wavelet Transform (DWT) for Image Compression. Kamrul Hasan Talukder and Koichi Harada. World Academy of Science, Engineering and Technology, International Journal of Computer and Information Engineering, Vol. 3, No. 11, 2009. publications.waset.org/5481/pdf]
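To make the transform at the heart of this paper concrete, the following sketch (my own minimal example, not the authors' implementation) computes one level of a 2D Haar DWT, the simplest wavelet, splitting an image into a half-resolution approximation and three detail subbands:

```python
import numpy as np

def haar_dwt2_level(img):
    """One level of the 2D Haar DWT.

    Returns the four subbands (LL, LH, HL, HH): a half-resolution
    approximation plus vertical, horizontal and diagonal details.
    Assumes the image has even dimensions.
    """
    x = img.astype(float)
    # Transform rows: pairwise sums (low-pass) and differences (high-pass),
    # scaled by 1/sqrt(2) to keep the transform orthonormal.
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Transform the columns of each intermediate result the same way.
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH

# For a constant image every detail subband vanishes, which is why
# smooth regions compress so well after the DWT.
img = np.full((8, 8), 100.0)
LL, LH, HL, HH = haar_dwt2_level(img)
print(round(LL[0, 0], 6))        # 200.0: the approximation carries the energy
print(np.abs(HH).max())          # 0.0
```

Because the detail subbands of smooth regions are (near) zero, they quantize to very few bits, which is the link to the vector quantization stage mentioned in the abstract.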