International Journal of Emerging Engineering Science and Technology, Volume 1, Issue 2, 2015, www.ijeest.com

IMAGE SEGMENTATION APPROACH FOR NATURAL IMAGES USING DT CWT

V. Krishna Naik 1, Dr. G. Manoj Someswar 2
1 Research Scholar (PhD), Department of ECE, Mewar University, Chittorgarh, Rajasthan, India
2 Principal & Professor, Department of CSE, Anwar-ul-uloom College of Engineering & Technology, India

Abstract: This paper focuses on image segmentation; the goal of any image segmentation process is to assign to each pixel in an observed image a label indicating the region or class to which that pixel belongs. Fully automated, or unsupervised, segmentation is an ill-posed problem, so in order to constrain the solution an example image set whose content is similar to the image to be segmented is given as an input. This example image set has been segmented a priori and can therefore be used to guide the segmentation process. This type of semi-automated segmentation can be viewed as the interleaving of segmentation and object recognition. As part of this work on example-based processing, a new segmentation algorithm has been developed. This example-based segmentation algorithm is based on the same implicit technique as the synthesis process. However, in order to regularize the solution, implicit modeling of the observed image is combined with an explicit modeling of the label field. The Bayesian framework provides a natural expression for such parallel modeling techniques. The new algorithm is presented under this framework and some sample segmentation results are given.

Index Terms: Segmentation, Wavelets, DWT, neighborhoods, Mignotte

I. AN OVERVIEW OF IMAGE AND IMAGE SEGMENTATION

Once the task of the skilled professional alone, even the most complicated image processing operation is now within a mouse click of the creative consumer, thanks to the popularity of the digital camera and the multitude of photo-editing suites that exist.
Fuelled by the commercial success of image processing software, and coupled with the ever-increasing demands of the professional image editor, image processing research is one of the most vibrant sectors of information technology. Image processing is a blanket term for any operation that acts to improve, correct, analyse, manipulate or render an image. In this paper, the mechanism by which an image is manipulated or analysed is influenced directly by a set of example images. The driving force behind example-based processing is that many complicated image processing tasks can be simplified considerably if some information on the desired effect or outcome is given as an input. This paper demonstrates the strength of example-based image processing by focusing on image segmentation alongside traditional image processing operations. Moving on from the work on texture synthesis, the focus of this paper then turns toward the problem of image segmentation. As a formal description, the aim of a segmentation process is to assign to each pixel in an image a label indicating the region or class to which that pixel belongs. In automated segmentation no prior information, only pixel information, is input into the segmentation process. By nature, fully automated segmentation is an ill-posed problem which ultimately offers no means of judging the outcome. As a compromise, much recent segmentation research has focused on semi-automated segmentation, where some clue as to the image content is given as an input. Semi-automated segmentation can be considered as the interleaving of object recognition and segmentation, and the task becomes: given an example object, does this object exist in the image, and if so, isolate and label it. The adroitness of the human visual system (HVS) means that humans can ascertain the similarity between two objects within fractions of a second.
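The formal description above, segmentation as the assignment of a class label to every pixel, can be illustrated with a deliberately minimal sketch. The example below clusters raw intensities with a simple k-means loop; this is only a toy illustration of per-pixel labelling, not the example-based, texture-driven method developed in this paper, and the function name and percentile-based initialization are choices made here for the sketch.

```python
import numpy as np

def kmeans_segment(image, k=2, iters=20):
    """Assign each pixel one of k class labels by k-means on intensity.

    A toy illustration of segmentation as per-pixel labelling; the
    example-based method in this paper uses far richer texture features.
    """
    pixels = image.reshape(-1, 1).astype(float)
    # Spread the initial class centres across the intensity range.
    centres = np.percentile(pixels, np.linspace(0, 100, k)).reshape(k, 1)
    for _ in range(iters):
        dists = np.abs(pixels - centres.T)   # (num_pixels, k) distances
        labels = dists.argmin(axis=1)        # nearest-centre label per pixel
        for c in range(k):                   # recompute each class centre
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean()
    return labels.reshape(image.shape)

# Synthetic two-region image: dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 200.0
labels = kmeans_segment(img, k=2)
```

On this synthetic input the two halves receive two distinct labels, exactly the label-field output a segmentation process is expected to produce; the difficulty the paper addresses is obtaining such a labelling for natural images, where intensity alone is not discriminative.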
However, as is often the case in image processing, devising an algorithm that can replicate artificially on a computer what the HVS does subconsciously is by no means a trivial task. In order to make the problem more manageable, it is broken down into smaller components, the first of which is how to characterize or describe each object. Many different object descriptors have been proposed. For example, in Video Google and OBJ CUT, objects are described in terms of their shape and intensity, while the Magic Wand tool from Adobe Photoshop 7 uses colour intensity. It was found that characterizing and identifying objects by their texture component is the most robust approach, since by definition texture is composed of both intensity and spatial information. This is the approach taken by Mignotte, and the approach taken here follows a similar vein. Identifying and modeling objects in terms of their texture component allows the problem of object recognition to be re-formulated as one of texture discrimination and distinction. Under this formalization