Performance Enhancement of Distributed Quasi Steady-State Genetic Algorithm

Rahila Patel
M.Tech. IV Sem, CSE, GHRCE, Nagpur (M.S.), India
rahila.patel@gmail.com

M. M. Raghuwanshi
NYSS College of Engineering and Research, Nagpur (M.S.), India
m_raghuwanshi@rediffmail.com

Anil N. Jaiswal
PG Dept. CSE, GHRCE, Nagpur
jaiswal_an@yahoo.com

Urmila Shrawankar
PG Dept. CSE, GHRCE, Nagpur
urmilas@rediffmail.com

Abstract

This paper proposes a new scheme for enhancing the performance of a distributed genetic algorithm (DGA). The initial population is divided into two classes, female and male. Simple distance-based clustering is used to form clusters around the females. For re-clustering, a self-adaptive K-means is used, which produces well-distributed and well-separated clusters; it automatically locates the initial positions of the centroids and determines the number of clusters. Four plans of co-evolution are applied to these clusters independently, and the clusters evolve separately. Clusters are merged depending on their performance. Unimodal and multimodal test functions have been used for experimentation. Test results show that the new scheme for distributing the population gives better performance.

1. Introduction

The K-means algorithm, proposed by MacQueen in 1967 [1], is a widely used unsupervised, partitioning-based clustering algorithm. It uses heuristic information to make the search more objective and thereby improve search efficiency. Simply put, it clusters objects based on their attributes/features into K groups, where K is a positive integer. The grouping is done by minimizing the sum of squared distances between the data points and the corresponding cluster centroids. The traditional K-means clustering algorithm has inherent limitations:

1. Random initialization can lead to different clustering results and, in the worst case, to no result at all.

2. The algorithm is based on an objective function and usually uses a gradient method to solve the problem. Because the gradient method searches along the direction of decreasing energy (minimization of the squared error), the algorithm can get trapped in a local optimum and is sensitive to isolated points [2].

Improvements to K-means therefore aim at two aspects: optimization of the initialization and improvement of the global search capability.

A distributed GA (DGA) is one of the most important representatives of methods based on spatial separation. The basic idea of a DGA lies in partitioning the population into several subpopulations, each of which is processed by a GA independently of the others. In addition, an operator called migration exchanges chromosomes between the subpopulations. Its principal role is to promote genetic diversity and, in essence, to allow the sharing of possible solutions. DGAs show two decisive advantages: 1) the preservation of diversity due to the semi-isolation of the subpopulations, which may counter the premature-convergence problem, and 2) they can easily be implemented on parallel hardware, obtaining in this way substantial improvements in computational time [3].

Since a GA implements the idea of evolution, it is natural to expect adaptation to be used not only for finding solutions to a problem but also for tuning the algorithm to the particular problem. A traditional real-coded genetic algorithm (RCGA) has parameters that must be specified before the RCGA is run on a problem, and setting parameter values correctly is a hard task. In general, there are two major forms of setting parameter values: parameter tuning and parameter control [4]. Parameter tuning is the usual approach of finding good parameter values before the run of the GA; these static values are then used during the whole run. Parameter control is different in that it changes the initial parameter values during the run.
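As a concrete illustration of the clustering procedure described above, the following is a minimal Python sketch of standard K-means (not the paper's self-adaptive variant): K centroids are initialized at random, then assignment and update steps alternate so as to reduce the sum of squared distances between points and their cluster centroids. The function name and the toy data are illustrative only; note that the random initialization on the first line of the function is exactly the source of limitation 1 above.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    # Pick k distinct data points as the initial centroids. This random
    # choice is the source of limitation 1: different seeds can yield
    # different final clusterings.
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # Update step: move each centroid to its cluster's mean, which
        # minimizes the within-cluster sum of squared distances.
        updated = [tuple(sum(coord) / len(cl) for coord in zip(*cl))
                   if cl else centroids[i]
                   for i, cl in enumerate(clusters)]
        if updated == centroids:  # no centroid moved: converged
            break
        centroids = updated
    return centroids, clusters

# Two well-separated groups of 2-D points (toy data).
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(data, k=2)
```

Because the update step follows the gradient of the squared-error objective, this procedure only converges to a local optimum, which motivates the improved initialization discussed in this paper.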
Parameter control itself has three variants: deterministic, adaptive and self-adaptive. Deterministic means that parameters