www.ijcait.com International Journal of Computer Applications & Information Technology Vol. 2, Issue II Feb-March 2013 (ISSN: 2278-7720)

A Novel Technique to Image Annotation using Neural Network

Pankaj Savita, TIT Bhopal; Deepshikha Patel, Professor, TIT Bhopal; Amit Sinhal, Professor, TIT Bhopal

Abstract: Automatic annotation of digital pictures is a key technology for managing and retrieving images from large image collections. Traditional image semantics extraction and representation schemes are commonly divided into two categories: visual features and text annotations. However, visual features are difficult to extract and are often semantically inconsistent. Image semantics, on the other hand, can be well represented by text annotations, and it is also easier to retrieve images according to their annotations. Traditional image annotation techniques are time-consuming and require substantial human effort. In this paper we propose a novel neural-network-based approach to the problem of image annotation and apply it to an image data set. Our work focuses on image annotation using a multilayer perceptron (MLP), giving a clear-cut view of the application of the multilayer perceptron with special features. The MLP algorithm helps us discover the concealed relations between image data and annotation data, and annotate images according to such relations. The algorithm also saves memory space and, in web applications, speeds up the transfer and download of images. This paper also reviews 50 image annotation systems that use supervised machine learning techniques to annotate images for image retrieval. The results obtained show that the multilayer perceptron neural network classifier outperforms the conventional DST technique.

General Terms: Pattern Recognition

Keywords: Image Annotation, Neural Network, MLP, DST

1. INTRODUCTION

Nowadays, the number of digital images is growing at an incredible speed, which makes image management very challenging for researchers. Automatic image annotation aims to develop methods that can predict the relevant keywords from an annotation vocabulary for a new image. Its final goal is to assist image retrieval by supplying semantic keywords for search, a capability that makes managing large image databases easy. Image annotation has been extensively researched for more than a decade. There are two main approaches to automatic image annotation: statistical models and classification approaches. Statistical models annotate images by computing the joint probability between words and image features. Image annotation can also be regarded as a type of multi-class image classification with a very large number of classes, as large as the vocabulary size. Automatic image annotation can therefore be considered a multi-class object recognition problem, which is an extremely challenging task and still remains an open problem in computer vision. In spite of the many algorithms proposed with different motivations, the underlying questions are still not well solved:

1) Most automatic image annotation systems use a single set of features to train a single learning classifier. The problem is that a single feature set which represents one image category well may fail to represent other categories. For example, the semantic classes "flower" and "tree" may differ in color, so color features may work best for them, while "tree" and "grass" are similar in color and are better distinguished by texture features. This kind of problem degrades the performance of an automatic image annotation system as the number of categories increases.

2) For each image, keywords are often assigned to the whole image rather than to individual regions.
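Framed as above, keyword prediction can be treated as multi-label classification from visual features. The following is a minimal multilayer perceptron sketch of that framing; the features, labels, network size, and vocabulary are all invented toy data for illustration, not the system evaluated in this paper.

```python
import numpy as np

# Minimal multilayer perceptron for multi-label keyword prediction.
# Everything here (features, labels, vocabulary, network size) is an
# invented toy setup, not the paper's data set or implementation.
rng = np.random.default_rng(0)
X = rng.random((60, 4))                      # 60 images, 4 visual features each
Y = (X[:, :3] > 0.5).astype(float)           # synthetic keyword labels

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)   # hidden layer (8 units)
W2 = rng.normal(0, 0.5, (8, 3)); b2 = np.zeros(3)   # one output per keyword
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                        # plain batch gradient descent
    H = np.tanh(X @ W1 + b1)                 # hidden activations
    P = sigmoid(H @ W2 + b2)                 # per-keyword probabilities
    G2 = (P - Y) / len(X)                    # cross-entropy output gradient
    G1 = (G2 @ W2.T) * (1 - H ** 2)          # back-propagated hidden gradient
    W2 -= 0.5 * H.T @ G2; b2 -= 0.5 * G2.sum(0)
    W1 -= 0.5 * X.T @ G1; b1 -= 0.5 * G1.sum(0)

vocab = ["flower", "tree", "grass"]          # hypothetical annotation vocabulary
probs = sigmoid(np.tanh(X[:1] @ W1 + b1) @ W2 + b2)[0]
keywords = [w for w, p in zip(vocab, probs) if p > 0.5]
```

Thresholding each output unit at 0.5 turns the per-keyword probabilities into the predicted annotation set for an image.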
With keywords assigned only at the whole-image level, it is not known which regions of the image correspond to these keywords. In this paper, we propose a novel automatic image annotation system which tackles the problems mentioned above: 1) our algorithm combines different kinds of feature descriptors to boost annotation performance, and 2) it segments each image into several regions and establishes a one-to-one correspondence between image regions and annotation keywords.

2. LITERATURE SURVEY

Recently, a number of models have been proposed for image annotation. One of the first attempts at image annotation was reported by Mori et al. [1], who tiled images into grids of rectangular regions and applied a co-occurrence model to words and low-level features of the tiled image regions. Duygulu et al. [11] described images using a vocabulary of blobs. First, regions are created using a segmentation algorithm such as normalized cuts [6]. For each region, features are computed, and blobs are then generated by clustering these region features across images. Each image is generated from a certain number of these blobs. Their translation model applies one of the classical statistical machine translation models to translate from the set of blobs forming an image to the set of keywords of that image. Tsai and Hung [2] reviewed 50 image annotation systems that use supervised machine learning techniques to annotate images by mapping low-level or visual features to high-level concepts or semantics. Vailaya et al. [3] proposed a hierarchical classification scheme that first classifies images into indoor or outdoor categories; outdoor images are then further classified as city or landscape, and landscape images are finally classified into sunset, forest, and mountain classes. In other words, three Bayes classifiers are used for the three-stage classification. Correlation LDA, proposed by Blei and Jordan [4], extends the Latent Dirichlet Allocation (LDA) model to words and images.
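The blob-based co-occurrence idea of Mori et al. [1] can be sketched as follows: quantize each region's features to its nearest cluster ("blob") and estimate P(word | blob) from word-blob co-occurrence counts. The features, cluster centers, and two-word vocabulary below are invented purely for illustration.

```python
import numpy as np

# Sketch of a co-occurrence annotation model: region features are
# quantized to the nearest cluster ("blob") and word-blob counts give
# P(word | blob). Centers, features, and vocabulary are made up.
rng = np.random.default_rng(1)
vocab = ["sky", "grass"]                     # hypothetical keywords
centers = np.array([[0.9, 0.1],              # assumed blob cluster centers,
                    [0.1, 0.9]])             # e.g. obtained by k-means

# Training pairs: (region feature vector, keyword index) from annotated images
regions = np.vstack([centers[0] + 0.05 * rng.standard_normal((20, 2)),
                     centers[1] + 0.05 * rng.standard_normal((20, 2))])
words = [0] * 20 + [1] * 20

counts = np.zeros((len(centers), len(vocab)))
for r, w in zip(regions, words):
    blob = int(((centers - r) ** 2).sum(axis=1).argmin())  # nearest blob
    counts[blob, w] += 1
p_word_given_blob = counts / counts.sum(axis=1, keepdims=True)

def annotate(region):
    """Annotate one region with the most probable keyword for its blob."""
    blob = int(((centers - region) ** 2).sum(axis=1).argmin())
    return vocab[int(p_word_given_blob[blob].argmax())]
```

The translation model of Duygulu et al. [11] refines this by learning an explicit alignment between blobs and keywords rather than relying on raw co-occurrence counts.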
The Correlation LDA model assumes that a Dirichlet distribution can be used to generate a mixture of latent factors, which is then used to generate both words and regions; Expectation-Maximization is used to estimate the model. In Datta et al. [5], the authors surveyed almost 300 key theoretical and empirical contributions related to image retrieval and automatic image annotation and their subfields. They also