International Journal of Knowledge-based and Intelligent Engineering Systems 9 (2005) 13–20
IOS Press

An evolutionary machine learning: An adaptability perspective at fine granularity

Mohamed Ben Ali Yamina (a,*) and M.T. Laskri (b)
a Laboratoire de Recherche en Intelligence Artificielle (LRI), Université Badji Mokhtar, Institut Informatique, BP 12, Annaba, Algérie. E-mail: Benaliyam2@Yahoo.fr
b Université Badji Mokhtar, Institut Informatique, BP 12, Annaba, 23000, Algérie. E-mail: Laskri@yahoo.com

Abstract. We propose a new perspective on integrating machine learning into genetic algorithms. The conceptualization of this G-reasoning relies on the semantics of adaptability in order to tackle a large range of optimization problems efficiently. This paper aims to improve genetic learning through a β-nearest-neighbors selection and a micro-learning schedule. Based on an adaptation function, the learning behavior emphasizes the adjustment of mutation rates through the generations. To this end, two learning strategies are suggested; their common aim is to regulate the convergence velocity along the evolution. All of these requirements closely influence the performance of the algorithm. In addition to reporting the best performance reached, comparisons are made with other evolutionary methods.

Keywords: Adaptability, learning, convergence velocity, genetic algorithm, optimization

1. Introduction

Inspired by human genetics and biological evolution, the scientific community applied genetic principles to sketch the skeleton of Evolutionary Algorithms (EA) [4,5]. In this sense, EA essentially covers robust theories such as Evolution Strategies (ES) [7,9] and Evolutionary Programming [14,18]. However, our investigation is oriented towards the pioneering theory of EA: Genetic Algorithms (GA). Broadly, a GA models a parental pool Ω(t) = {y_t^1, y_t^2, ..., y_t^μ} of μ individuals by using a coding scheme.
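As a minimal illustration of this notation (a hypothetical sketch, not the authors' implementation), the parental pool Ω(t) can be represented under a real coding scheme as a list of μ real-valued vectors; the bounds, dimension, and helper name below are assumptions for the example only:

```python
import random

def init_pool(mu, n, low=-5.0, high=5.0, seed=0):
    """Build the generation-0 parental pool Omega(0) = {y^1, ..., y^mu}:
    mu individuals, each a real-coded vector of n genes in [low, high]."""
    rng = random.Random(seed)
    return [[rng.uniform(low, high) for _ in range(n)] for _ in range(mu)]

pool = init_pool(mu=10, n=3)
```

A binary coding scheme would instead store each individual as a bit string; the real coding shown here matches the real-coded setting the paper compares against ES and SBX-based GAs.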
In order to be more suitable for any search space (discrete or continuous), and able to tackle a wider range of optimization problems, adaptive GAs were introduced [3,8,17]. The adaptation feature has been exploited at different levels of the algorithm [13,15,19,23,24], closely influencing convergence performance.

The main goal of this article is both to induce a semantic level in the current GA evolution and to improve GA convergence performance by hybridizing all adaptation levels (population, chromosome and segment). The paper is organized to highlight the semantic viewpoint of the proposed GA architecture. The main evolution steps, such as selection and recombination, are parameterized to enable efficient genetic learning. The evolution then relies on a critical learning function embodied in the genetic learning process. To guide the convergence steps, this learning function derives two learning strategies whose primary role is to adapt the mutation rates. The semantic description of the evolution not only leads towards a new genetic learning perspective, but also offers a useful evaluation tool to weigh the convergence performance of a GA during execution.

To estimate the behavior of the Machine Learning based Genetic Algorithm (MLGA), we consider two real-coded approaches: ES and an adaptive GA based on the simulated binary crossover (SBX). This choice follows from studies undertaken by Beyer and Deb [12] that point out similarities in the convergence order of real-coded GAs and ES.

∗ Corresponding author.

ISSN 1327-2314/05/$17.00 © 2005 – IOS Press and the authors. All rights reserved.
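To make the idea of adapting mutation rates through the generations concrete, the following is one hypothetical adaptation rule (an illustrative sketch only; the paper's actual learning function and strategies are defined later, and the rule, factor, and bounds here are assumptions):

```python
def adapt_mutation_rate(rate, improved, factor=1.5, r_min=1e-4, r_max=0.5):
    """Toy adaptation rule: shrink the mutation rate while the best fitness
    keeps improving (favoring exploitation and faster convergence), and
    enlarge it on stagnation (restoring exploration). The result is clamped
    to [r_min, r_max] to keep the rate usable as a per-gene probability."""
    rate = rate / factor if improved else rate * factor
    return max(r_min, min(r_max, rate))

r = adapt_mutation_rate(0.3, improved=True)    # rate decreases to 0.2
r = adapt_mutation_rate(0.4, improved=False)   # 0.6, clamped to r_max = 0.5
```

Rules of this general shape regulate the convergence velocity by trading exploration against exploitation across the generations, which is the role the learning strategies described above assign to mutation-rate adjustment.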