International Journal of Computer Applications (0975 – 8887), Volume 34, No. 5, November 2011

Image Retrieval using Contourlet Transform

Swapna Borde, Vidyavardhini's College of Engineering and Technology, Vasai (W)
Dr. Udhav Bhosle, Rajiv Gandhi Institute of Technology, Mumbai

ABSTRACT
The image retrieval problem has become increasingly important with the rapid growth of multimedia databases and digital libraries. Different search engines use different features to retrieve images from a database. In this paper, the Contourlet Transform is applied to retrieve similar images from an image database. By combining the Laplacian Pyramid and the Directional Filter Bank (DFB), a new image representation is obtained, and the directional subband coefficients are used to form a feature vector for classification. The performance of the Contourlet Transform is evaluated using standard benchmarks such as Precision and Recall. Experiments show that Contourlet Transform (CT) features provide the best results in image retrieval.

Keywords
Content Based Image Retrieval (CBIR), Contourlet Transform (CT), Laplacian Pyramid (LP), Directional Filter Bank (DFB)

1. INTRODUCTION
The goal of an image retrieval system is to retrieve, from a collection of images, a set of images that meets the user's requirements. These requirements can be specified in terms of similarity to some other image or a sketch, or in terms of keywords. An image retrieval system provides the user with a way to access, browse and retrieve images efficiently, possibly in real time, from such databases [6], [7], [9]. Image retrieval systems can be divided into two main types: Text Based Image Retrieval and Content Based Image Retrieval. Text Based Image Retrieval was popular in the early years, but Content Based Image Retrieval has since become a topic of intensive research [10].
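The Precision and Recall benchmarks mentioned above can be computed directly from a retrieved set. A minimal sketch follows; the function and variable names are illustrative, not from the paper:

```python
def precision_recall(retrieved, relevant):
    """Precision = fraction of retrieved images that are relevant;
    Recall = fraction of all relevant images that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# e.g. 4 images retrieved, 2 of them among the 3 relevant ones
p, r = precision_recall(["a", "b", "c", "d"], ["b", "d", "e"])
# p = 0.5, r = 2/3
```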
Manual annotation, or text based image retrieval, is the traditional approach. In such systems, features are added by attaching text strings that describe the content of each image, and the annotation of each image with its corresponding keywords is performed manually. Annotation is a tedious process and, in the case of very large databases, it is not feasible for a person to annotate all the images; it is also slow and time consuming. This problem can be solved by using CBIR [12].

Content Based Image Retrieval (CBIR) has attracted many researchers from various fields in the effort to automate data analysis and indexing. CBIR works like an information filtering process: it is used to return a high percentage of relevant images in response to a query image. In a CBIR system, features are used to represent the image content. The features are extracted automatically, with no manual intervention, thus eliminating the dependency on humans in the feature extraction stage [4], [5].

We consider the simple architecture of a typical Content Based Image Retrieval (CBIR) system (Fig. 1), which comprises two major tasks. The first is feature extraction (FE), where a set of features, called the image signature, is generated to accurately represent the content of each image in the database. A signature is much smaller in size than the original image, typically of the order of hundreds of elements rather than millions. The second is similarity measurement (SM), where a distance between the query image and each image in the database is computed from their signatures, so that the top "closest" images can be retrieved [1], [2], [3].

Recent CBIR systems retrieve images based on visual properties such as color, shape and texture. One particularly promising approach to image database indexing and retrieval is the query by image content (QBIC) method.
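The two-task FE/SM architecture described above can be sketched end to end. The grayscale-histogram signature below is only a stand-in for the contourlet subband features used in the paper, and all names are our own:

```python
import math

def extract_signature(pixels, bins=8):
    """FE: toy signature — a normalized grayscale histogram
    (a stand-in for the paper's contourlet subband coefficients)."""
    hist = [0] * bins
    for p in pixels:                      # pixels: flat list of 0..255 values
        hist[min(p * bins // 256, bins - 1)] += 1
    n = len(pixels) or 1
    return [h / n for h in hist]          # much smaller than the image itself

def distance(sig_a, sig_b):
    """SM: Euclidean distance between two signatures."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

def retrieve(query_pixels, database, top_k=3):
    """Rank database images by signature distance to the query."""
    q = extract_signature(query_pixels)
    ranked = sorted(database.items(),
                    key=lambda kv: distance(q, extract_signature(kv[1])))
    return [name for name, _ in ranked[:top_k]]
```

In a real system the database signatures would be precomputed once at insertion time rather than recomputed per query, as discussed in the QBIC approach below.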
In this approach, the visual contents of the images, such as the color distribution (color histogram) [10], texture attributes and other image features, are extracted using computer vision/image processing techniques and used as indexing keys. In an image database, these visual keys are stored along with the actual imagery data, and retrieval from the database is based on matching the models' visual keys with those of the query images. Because extra information has to be stored with the images, the traditional approach to QBIC is not efficient in terms of data storage. Not only is it inefficient, it is also inflexible, in the sense that image matching/retrieval can only be based on the pre-computed set of image features [1].
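The pre-computed-key scheme, and the inflexibility just noted, can be illustrated with a toy index in which the feature function is frozen when the database is built. All names here are hypothetical:

```python
class QbicIndex:
    """Toy QBIC-style index: a visual key is computed once per image at
    insertion time and stored alongside the image id."""

    def __init__(self, feature_fn):
        # The feature function is fixed at build time: later queries can
        # only match on this pre-computed set of features — the
        # inflexibility discussed in the text.
        self.feature_fn = feature_fn
        self.keys = {}

    def add(self, image_id, pixels):
        # Extra storage per image: the key lives beside the imagery data.
        self.keys[image_id] = self.feature_fn(pixels)

    def query(self, pixels, top_k=2):
        q = self.feature_fn(pixels)
        dist = lambda k: sum((a - b) ** 2 for a, b in zip(q, self.keys[k]))
        return sorted(self.keys, key=dist)[:top_k]

# usage with a trivial feature: (mean, max) intensity
mean_max = lambda px: (sum(px) / len(px), max(px))
idx = QbicIndex(mean_max)
idx.add("sky", [200, 210, 220])
idx.add("cave", [5, 10, 15])
# idx.query([190, 205, 215], top_k=1) ranks "sky" first
```

Swapping in a different feature would require rebuilding every stored key, which is exactly the limitation the text attributes to the traditional QBIC approach.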