A Neural Approach for Color-Textured Images Segmentation

Khalid Salhi, El Miloud Jaara, Mohammed Talibi Alaoui

Abstract—In this paper, we present a neural approach for unsupervised segmentation of natural color-texture images, based on both Kohonen maps and mathematical morphology, using a combination of the texture and color information of the image. Fractal features based on the fractal dimension are selected to represent the texture information, while the color features are represented in the RGB color space. These features are then used to train a Kohonen network, which is represented by the underlying probability density function; the segmentation of this map is performed by the morphological watershed transformation. The performance of our color-texture segmentation approach is compared first to methods based on color or texture alone, and then to the k-means method.

Keywords—Segmentation, color-texture, neural networks, fractal, watershed.

I. INTRODUCTION

Segmentation is a fundamental and important step in any attempt to interpret or analyze an image automatically. It aims to divide an image into homogeneous regions according to certain criteria (intensity, color, texture), and it lies at the core of any application involving the recognition and detection of objects in images. Segmentation generally involves two steps: the first is to extract features for each pixel of the image, and the second is to use these features to determine the uniform regions of the image.

In this paper, we present an unsupervised segmentation approach combining texture and color features. The first step is to extract from each pixel a local fractal feature vector using the differential box-counting method. To obtain a vector that characterizes the color-texture information, the fractal feature vectors are concatenated with the color vectors represented in the RGB color space.
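The concatenation step above can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the helper name `build_feature_vectors` and the shape of the fractal feature map are assumptions:

```python
import numpy as np

def build_feature_vectors(rgb_image, fractal_map):
    """Concatenate per-pixel RGB values with local fractal features.

    rgb_image:   (H, W, 3) array of color values
    fractal_map: (H, W, F) array of per-pixel fractal features
    Returns an (H*W, 3+F) array: one color-texture vector per pixel.
    """
    h, w, _ = rgb_image.shape
    color = rgb_image.reshape(h * w, 3).astype(float)
    texture = fractal_map.reshape(h * w, -1).astype(float)
    return np.concatenate([color, texture], axis=1)
```

The resulting vectors form the cloud of observations in feature space that is later projected onto the self-organizing map.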
After calculating the color-texture features, we first place the feature vector of each pixel into the feature space, which forms a cloud of observations, and we project these observations onto a self-organizing map. To help extract the homogeneous regions of this map, we first represent the information in each cell of the map by the value of the probability density function (PDF), estimated by a nonparametric procedure; in a second stage, we automatically extract the modal regions using the watershed transformation. The classification stage takes the weight vectors corresponding to the detected modal regions as prototypes of the homogeneous regions of the image. The weights of each of these prototypes are the basis for assigning any pixel of the image to one of the extracted classes.

Khalid Salhi, Ph.D. Student, is with the Department of Computer Sciences, Faculty of Sciences, University of Mohammed First, Oujda, Morocco (e-mail: salhi.0.khalid@gmail.com).

In the last section, we present a comparison of the results obtained using texture or color alone with those obtained by the color-texture combination; finally, we test the efficiency of our segmentation approach against the k-means method.

II. FEATURE EXTRACTION

In order to select the pertinent attributes that best characterize the objects, every classification process starts with an observation acquisition step. In this study, we use a combination of texture and color features: first, we extract the texture information of each pixel using the fractal dimension computed by the differential box-counting method, and then we combine these fractal features with the color features represented in the RGB color space.

A. Fractal Features

In the 1970s, fractal geometry came into existence, offering new concepts for understanding complex phenomena that we had not previously been able to comprehend. The application fields of the fractal concept are numerous, including image analysis.
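The classification stage described above (assigning each pixel to the class of its nearest modal-region prototype) can be sketched as a nearest-prototype rule; the function name and the use of Euclidean distance are illustrative assumptions, not details stated by the paper:

```python
import numpy as np

def classify_pixels(features, prototypes):
    """Assign each pixel's color-texture vector to the nearest
    prototype (a modal-region weight vector of the Kohonen map).

    features:   (N, D) pixel feature vectors
    prototypes: (K, D) prototype weight vectors
    Returns an (N,) array of class labels in 0..K-1.
    """
    # squared Euclidean distance from every pixel to every prototype
    d = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```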
In image analysis applications, fractal geometry is mostly used through the concept of the fractal dimension (FD). In this study, we have chosen to work with the differential box-counting method [1], [2], as it can be computed and applied to patterns with or without self-similarity.

The differential box-counting method begins by partitioning the image space into boxes of different sizes r; next, the quantity N(r) is calculated from the difference between the maximum and minimum gray levels within each box; finally, the fractal dimension is estimated using the following equation:

FD = lim_{r -> 0} ln[N(r)] / ln(1/r)    (1)

To compute the fractal dimension at a pixel I(i, j) of image I, we use a local m x m pixel window W(i, j):

1) For various r in (0, 1]:
   - Divide W(i, j) into (1/r)^2 boxes.
   - Divide the range of intensities [0..255] into 1/r levels numbered 1..1/r.
   - For each box b(p, q) in W(i, j) do:
     a) l <- minimum(b(p, q))
     b) k <- maximum(b(p, q))
     c) n_{p,q}(r) <- k - l + 1

World Academy of Science, Engineering and Technology, International Journal of Computer and Information Engineering, Vol. 10, No. 10, 2016.
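The box-counting loop above can be sketched as follows. This is a minimal illustration under stated assumptions: the window is square with a side divisible by each box count, l and k are taken as the lowest and highest gray-level slice indices touched by a box (consistent with steps a-c), and, since (1) is a limit that cannot be evaluated exactly on a finite window, FD is estimated as the slope of ln N(r) versus ln(1/r) over several scales, as is standard for this method:

```python
import numpy as np

def fractal_dimension(window, box_counts=(2, 4, 8)):
    """Differential box counting over a square grayscale window.

    For each scale, the window is split into (1/r)^2 boxes and the
    gray-level range [0..255] into 1/r slices; each box contributes
    n = k - l + 1, where l and k are the lowest and highest slice
    indices it touches.  FD is the slope of ln N(r) vs ln(1/r).
    """
    m = window.shape[0]
    log_inv_r, log_n = [], []
    for g in box_counts:           # g = 1/r: boxes per side, gray slices
        s = m // g                 # box side in pixels
        level_h = 256.0 / g        # height of one gray-level slice
        total = 0
        for p in range(g):
            for q in range(g):
                box = window[p * s:(p + 1) * s, q * s:(q + 1) * s]
                l = int(box.min() // level_h)   # lowest slice index
                k = int(box.max() // level_h)   # highest slice index
                total += k - l + 1              # n_{p,q}(r)
        log_inv_r.append(np.log(g))             # ln(1/r)
        log_n.append(np.log(total))             # ln N(r)
    slope, _ = np.polyfit(log_inv_r, log_n, 1)  # FD estimate
    return slope
```

For a perfectly flat window every box touches a single slice, so N(r) = (1/r)^2 and the estimated FD is 2, the expected value for a smooth intensity surface.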