International Journal of Computer Science Trends and Technology (IJCST) Volume 2 Issue 4, July-Aug 2014 ISSN: 2347-8578 www.ijcstjournal.org Page 89

Image Clustering Using Evolutionary Computation Techniques

I. Ravi Kumar 1, V. Durga Prasad Rao 2
M-Tech Research Scholar 1, Associate Professor 2
Department of Computer Science and Engineering
Kaushik College of Engineering
Gambheeram, Vishakhapatnam
Andhra Pradesh, India

ABSTRACT
Most existing clustering techniques accept the number of clusters K as an input and determine that many clusters for a given data set. The proposed technique attempts to discover the true number of clusters automatically on the run. It not only determines the true number of cluster centers but also extracts the real cluster centers and produces a good classification. The goal of feature selection for unsupervised learning is to find the smallest feature subset that best uncovers "interesting natural" groupings (clusters) in the data according to the chosen criterion. There may exist multiple redundant feature subset solutions, and finding any one of them is sufficient. Unlike supervised learning, which has class labels to guide the feature search, in clustering (unsupervised learning) we need to define what "interesting" and "natural" mean. These notions are usually expressed in the form of criterion functions.

Keywords:- Differential Evolution (DE), Evolutionary Computation Techniques (ECT), K-Means Algorithm (KA), Particle Swarm Optimization (PSO), Partitional Clustering (PC)

I. INTRODUCTION
CLUSTERING is the act of partitioning an unlabeled data set into groups of similar objects. Each group, called a "cluster", consists of objects that are similar among themselves and dissimilar to objects of other groups. In the past few decades, cluster analysis has played a central role in a variety of fields, ranging from engineering to social science and economics.
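As context for the fixed-K limitation discussed in the abstract, a minimal K-means sketch (an illustration, not code from this paper) makes the problem concrete: the caller must supply K up front, which is exactly the requirement the proposed technique removes.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means. Note that k must be supplied by the caller,
    illustrating the fixed-K limitation discussed in the text."""
    rng = random.Random(seed)
    centers = list(rng.sample(points, k))
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centers[i])))
            clusters[idx].append(p)
        # update step: each center moves to the mean of its members
        for i, members in enumerate(clusters):
            if members:  # empty clusters keep their old center
                dim = len(members[0])
                centers[i] = tuple(sum(m[d] for m in members) / len(members)
                                   for d in range(dim))
    return centers, clusters
```

On well-separated data this converges in a few iterations, but the quality of the result still hinges on choosing K correctly beforehand.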
Although an exhaustive list is impracticable, it is worthwhile to mention that clustering has found applications in machine learning, artificial intelligence, pattern recognition, mechanical and electrical engineering, web mining, spatial database exploration, textual document collection and image segmentation, genetics, biology, microbiology, paleontology, psychiatry and pathology, geography, geology and remote sensing, sociology, psychology, archeology, education, advertising and business [1]-[8].

Most existing clustering techniques accept the number of clusters K as an input instead of determining it on the run. Moreover, if the data set is described by high-dimensional feature vectors, it may be virtually impossible to visualize the data in order to track its number of clusters. In image pixel clustering especially, knowing the cluster number beforehand is a challenging task. A recent paper [9] presented a new Differential Evolution (DE) based strategy called ACDE (Automatic Clustering Using an Improved Differential Evolution), an evolutionary computation algorithm for crisp clustering of real-world data sets. The important feature of this technique is that it is able to automatically find the optimal number of clusters (i.e., the number of clusters does not have to be known in advance) even for very high-dimensional data sets, where tracking the number of clusters may be difficult. Various evolutionary computation techniques, such as genetic algorithms, particle swarm optimization, and evolution strategies, can also be applied to the problem of automatic clustering. In our proposed work we envision realizing a few of these techniques and developing interesting hybridizations of these approaches for effective image pixel clustering.

II. BRIEF REVIEW OF EXISTING WORK
Data clustering algorithms can be hierarchical or partitional [10], [11].
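The ACDE scheme mentioned in the Introduction encodes, in each DE vector, a set of candidate cluster centers together with activation thresholds, so the number of active centers (and hence the cluster count) evolves with the population. The sketch below is our illustration of that encoding idea, not the algorithm from [9]: in particular, the fitness here is a plain sum-of-squared-errors stand-in, whereas ACDE uses a cluster validity index (such as the CS measure) that can trade off cluster count against compactness.

```python
import numpy as np

def decode(vector, k_max, dim):
    """Split a DE vector into activation thresholds and candidate centers.
    A center is 'active' when its threshold exceeds 0.5; at least two
    centers are forced active so a valid partition always exists."""
    thresholds = vector[:k_max]
    centers = vector[k_max:].reshape(k_max, dim)
    active = thresholds > 0.5
    if active.sum() < 2:
        active[np.argsort(thresholds)[-2:]] = True
    return centers[active]

def fitness(vector, data, k_max, dim):
    """Negative SSE to the nearest active center (a simplified stand-in
    for the validity index used by the actual ACDE algorithm)."""
    centers = decode(vector, k_max, dim)
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return -(d.min(axis=1) ** 2).sum()

def de_step(pop, data, k_max, dim, f=0.8, cr=0.9, rng=None):
    """One generation of classic DE/rand/1/bin with greedy selection."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, length = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        # mutation: perturb one random vector by a scaled difference
        a, b, c = pop[rng.choice(n, 3, replace=False)]
        mutant = a + f * (b - c)
        # binomial crossover between target and mutant
        mask = rng.random(length) < cr
        trial = np.where(mask, mutant, pop[i])
        # greedy selection: keep the trial only if it improves fitness
        if fitness(trial, data, k_max, dim) > fitness(pop[i], data, k_max, dim):
            new_pop[i] = trial
    return new_pop
```

Because selection only ever replaces an individual with a better trial, the best fitness in the population is non-decreasing across generations.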
Within each type there exist a large number of subtypes and different algorithms for finding the clusters. In hierarchical clustering, the output is a tree showing a sequence of clusterings, with each clustering being a partition of the data set [11]. Hierarchical algorithms can be agglomerative (bottom-up) or divisive (top-down). Agglomerative algorithms begin with each element as a separate cluster and merge them into successively larger clusters. Divisive algorithms begin with the whole set and proceed to divide it into successively smaller clusters. Hierarchical algorithms have two basic advantages [10]. First, the number of classes need not be specified a priori, and second, they are independent of the initial conditions. However, the main drawback of hierarchical clustering techniques is that they are static; that is, data points assigned to a cluster cannot move to another cluster. In addition to that, they fail to
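The agglomerative (bottom-up) scheme described above can be sketched as a simple single-linkage procedure. This is an illustrative sketch, not an optimized implementation: it recomputes pairwise distances at every merge, whereas production code would use a linkage matrix (e.g. scipy.cluster.hierarchy) and return the full merge tree rather than stopping at k clusters.

```python
import numpy as np

def agglomerative(points, k):
    """Bottom-up single-linkage clustering: start with every point in its
    own cluster and repeatedly merge the closest pair until k remain."""
    clusters = [[i] for i in range(len(points))]
    pts = np.asarray(points, dtype=float)
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between the closest members
                d = min(np.linalg.norm(pts[i] - pts[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)  # merge the closest pair
    return clusters
```

The sketch also makes the "static" drawback visible: once two points are merged into the same cluster, no later step can ever separate them.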