An Efficient Denoising Model based on Wavelet and Bilateral Filters

Sudipta Roy
Department of IT, Assam University, Silchar - 788011

Nidul Sinha
Electrical Engineering Department, National Institute of Technology, Silchar - 788010

Asoke K. Sen
Department of Physics, Assam University, Silchar - 788011

ABSTRACT
This paper investigates different models developed through hybridization of wavelet and bilateral filters for denoising a variety of noisy images. Hybridization between wavelet thresholding and the bilateral filter is carried out in different configurations. The models are tested on standard images such as Lena, Barbara and Einstein, as well as on satellite and astronomical telescopic images, and their performance is evaluated in terms of peak signal-to-noise ratio (PSNR) and image quality index (IQI). Of the many trial models developed, only 25 are reported, as the performance of the rest is too poor to merit inclusion. Results demonstrate that applying bilateral filters to the decomposed subbands in combination with wavelet thresholding deteriorates the performance, whereas applying the bilateral filter before decomposition, after reconstruction, or both enhances it. Specifically, the model that applies the bilateral filter before decomposition of the image is found to give uniform and consistent results on all the images.

General Terms
Image Denoising

Keywords
Image denoising, wavelet transform, wavelet thresholding, bilateral filter.

1. INTRODUCTION
The success of modern applications such as video broadcasting, medical imaging, and technological research in telescopic imaging, satellite imaging and geographic information systems depends heavily on the quality of the digital images [1]. Different sources of noise may contaminate a digital image and degrade its quality. The overall noise characteristics of an image depend on many factors, namely the type of sensor, pixel dimensions, temperature, exposure time, and ISO speed of the sensor [2]. Among these, dark current noise arises from thermally generated electrons at the sensor sites; it is proportional to the exposure time and highly dependent on the sensor temperature. Shot noise follows a Poisson distribution and is generated by the quantum uncertainty in the generation of photoelectrons. Amplifier noise and quantization noise occur during the conversion of the number of electrons to pixel intensities. Imperfect instruments, problems with the data acquisition process, and interfering natural phenomena can also cause degradation, and noise can further be introduced by transmission errors and compression [1]. Noise is also colour or channel dependent: typically, the green channel is the least noisy whereas the blue channel is the noisiest. In single-chip digital cameras, demosaicking algorithms are used to interpolate the missing colour components, so in general the noise is not white. Noise in a digital image has low-frequency as well as high-frequency components. Although the high-frequency components can be removed easily, it is very challenging to eliminate low-frequency noise, as it is difficult to distinguish from the real signal. Most natural images are assumed to contain additive random noise, which is modeled as Gaussian. Speckle noise [3] is observed in ultrasound images, whereas Rician noise [4] affects MRI images.
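As a concrete illustration of the additive Gaussian noise model assumed above, the following Python sketch corrupts a grayscale test image (e.g. Lena) with zero-mean Gaussian noise. The noise level sigma and the function name are illustrative assumptions for this sketch, not values or routines prescribed by this paper.

import numpy as np

def add_gaussian_noise(image, sigma=20.0, seed=None):
    """Corrupt a grayscale image with additive white Gaussian noise.

    `sigma` is the noise standard deviation in intensity units (0-255);
    the default of 20.0 is an illustrative choice, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)
    # Clip back to the valid 8-bit intensity range.
    return np.clip(noisy, 0, 255).astype(np.uint8)

Noisy images generated in this way can then be passed to any denoising model and compared against the clean original to compute PSNR or IQI.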
Thus, denoising is often a necessary first step before the image data is analyzed, and an efficient denoising technique must be applied to compensate for such data corruption. The goal of denoising is to remove the noise while preserving the important image features as much as possible. Linear filtering techniques, such as the Wiener filter or the matched filter, have been used for this purpose for many years. However, linear filters may blur sharp edges, destroy lines and other fine image details, and fail to remove heavy-tailed noise effectively. This calls for alternatives such as nonlinear filtering, and a large body of literature [5-7] has emerged on image denoising using nonlinear filters.

A thresholding algorithm in an orthogonal transform domain, such as a subband or wavelet transform, is one such nonlinear filter. A subband transform with orthogonal perfect-reconstruction filter banks is an orthogonal transform. The subband filters act as a set of discrete-time basis functions in a vector space, and decomposing a signal amounts to projecting it onto these basis functions. For a noisy signal, the coefficients of the original signal and of the noise differ because of their different characteristics. In general, if an orthogonal transform with high energy compaction and de-correlation properties is used, most of the energy of the original signal will be compacted into a few high-magnitude coefficients [8, 9]. If the image data are corrupted by additive white noise, the components corresponding to the noise will be distributed among low-magnitude, high-frequency coefficients, and most of the noise coefficients have small amplitudes. It is therefore reasonable to suppress the noise by comparing all the coefficients with a threshold and discarding those whose magnitudes fall below it [10, 11]; a minimal sketch of this rule is given at the end of this section.

In recent years, much work has been reported on the use of the wavelet transform not only in image processing but also in various other fields of signal processing. It has the advantage of using variable-size time windows for different frequency bands, which results in high frequency resolution in the low bands and low frequency resolution in the high bands. Consequently, the wavelet transform is a powerful tool for modeling non-stationary signals that exhibit slow temporal
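The sketch below illustrates the wavelet-thresholding principle described above, using the PyWavelets library: the detail coefficients of a 2-D wavelet decomposition are soft-thresholded and the image is reconstructed. The wavelet ('db4'), the decomposition level, and the VisuShrink-style universal threshold are illustrative assumptions for this sketch; they are not the specific settings or the hybrid wavelet-bilateral models proposed in this paper.

import numpy as np
import pywt

def wavelet_threshold_denoise(noisy, wavelet='db4', level=2):
    """Minimal sketch of wavelet-domain soft thresholding (VisuShrink style)."""
    coeffs = pywt.wavedec2(noisy.astype(np.float64), wavelet, level=level)
    # Estimate the noise sigma from the finest diagonal subband (robust MAD rule).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    # Universal threshold: sigma * sqrt(2 * log(N)).
    thr = sigma * np.sqrt(2.0 * np.log(noisy.size))
    # Keep the approximation subband, soft-threshold every detail subband.
    denoised_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode='soft') for d in detail)
        for detail in coeffs[1:]
    ]
    rec = pywt.waverec2(denoised_coeffs, wavelet)
    # Crop (reconstruction can be one pixel larger for odd sizes) and clip.
    rec = rec[:noisy.shape[0], :noisy.shape[1]]
    return np.clip(rec, 0, 255)

In the hybrid configurations studied in this paper, a bilateral filter would additionally be applied, for example before the decomposition step; a standard implementation such as cv2.bilateralFilter could serve that role in an experiment of this kind.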