Multi-objective Combinatorial Optimisation with Coincidence Algorithm

Warin Wattanapornprom, Panuwat Olanviwitchai, Parames Chutima, and Prabhas Chongstitvatana

Abstract — Most optimization algorithms that use probabilistic models focus on extracting information from the good solutions found in the population. A selection method discards the below-average solutions, so they contribute no information for updating the models. This work proposes a new algorithm, Combinatorial Optimization with Coincidence (COIN), that makes use of both good and not-good solutions. A generator, which represents a probabilistic model of the required solution, is used to sample candidate solutions. Reward and punishment schemes are incorporated in updating the generator; the updating values are determined by the selected good and not-good solutions. It has been observed that the not-good solutions help the algorithm avoid producing bad solutions. The multi-objective version of COIN is also introduced. Several benchmarks of multi-objective problems from real-world industrial applications are reported.

I. INTRODUCTION

"God does not play dice; coincidence is God's way of remaining anonymous." With this remark, Albert Einstein left us the challenge of solving the mysteries of the coincidences in the universe. Estimation of Distribution Algorithms (EDAs) try to extract the knowledge found in the solutions in order to reproduce better solutions. According to Minsky [1], negative knowledge hidden in seemingly positive knowledge plays a controlling role in diverse areas including expert systems, emotion, and search. The Combinatorial Optimization with Coincidence (COIN) algorithm adopts this negative knowledge to enhance the search by avoiding the reproduction of undesired solutions. In this paper, we introduce the COIN algorithm, which was invented to solve single-objective problems, and then present the adaptation of COIN to multi-objective problems.

The structure of the paper is as follows. The related works are discussed in Section II.
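Before turning to the related works, the reward-and-punishment idea described above can be illustrated with a minimal sketch. It assumes a simple first-order model in which H[i][j] estimates the probability that item j immediately follows item i in a permutation; the function names, the update step k, and the renormalisation are illustrative choices, not the paper's exact formulation.

```python
def make_generator(n):
    # H[i][j]: estimated probability that item j immediately follows
    # item i; initialised uniformly over the n - 1 possible successors.
    return [[0.0 if i == j else 1.0 / (n - 1) for j in range(n)]
            for i in range(n)]

def update(H, good, bad, k=0.1):
    # Reward the transitions observed in a good solution and punish
    # those observed in a not-good one, then renormalise each row so
    # it remains a probability distribution.
    n = len(H)
    step = k / (n - 1)
    for seq, sign in ((good, +1), (bad, -1)):
        for a, b in zip(seq, seq[1:]):
            H[a][b] = max(0.0, H[a][b] + sign * step)
    for i in range(n):
        total = sum(H[i])
        if total > 0:
            H[i] = [v / total for v in H[i]]
    return H
```

After one update with a good tour (0, 1, 2, 3) and a not-good tour (0, 2, 1, 3), the entry H[0][1] exceeds H[0][2], so sampling from the generator becomes less likely to reproduce the punished transition.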
The proposed algorithm, Combinatorial Optimization with Coincidence, is explained in Section III. Section IV introduces the multi-objective version of COIN. The experiments are reported and the results are discussed in Section V. Finally, Section VI concludes the work.

W. Wattanapornprom and P. Chongstitvatana are with the Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Thailand (e-mail: yongkrub@gmail.com and prabhas@chula.ac.th). P. Olanviwitchai and P. Chutima are with the Department of Industrial Engineering, Faculty of Engineering, Chulalongkorn University, Thailand (e-mail: househomeme_1234@hotmail.com and parames.c@chula.ac.th).

II. RELATED WORKS

Many algorithms use second-order statistics and are considered to belong to the bivariate dependency class of Estimation of Distribution Algorithms. These algorithms take dependencies between pairs of variables into account. The algorithms in this class include MIMIC [2], COMIT [3] and BMDA [4].

A. MIMIC

One of the best-known algorithms in the bivariate dependency class is MIMIC (Mutual Information Maximizing Input Clustering), proposed by De Bonet et al. in 1997 [2]. It is a greedy algorithm that searches, in each generation, for the permutation of the variables whose chain distribution

p_π(x) = p(x_{i_1} | x_{i_2}) p(x_{i_2} | x_{i_3}) ... p(x_{i_{n-1}} | x_{i_n}) p(x_{i_n})    (1)

is closest, under the Kullback-Leibler distance, to the empirical distribution of the set of selected points, where π = (i_1, i_2, ..., i_n) denotes a permutation of the indexes 1, 2, ..., n. The algorithm avoids searching through all n! permutations by selecting X_{i_n} as the variable with the smallest estimated entropy and then, at every following step, picking from the variables not chosen so far the one whose average conditional entropy with respect to the variable selected in the previous step is the smallest.

B. COMIT

The dependency tree version of PBIL [5] was later called COMIT (Combining Optimizers with Mutual Information Trees).
The algorithm was proposed by Baluja and Davies [3][6]. It constructs dependency trees and incrementally learns from the good solutions seen so far, using the algorithm proposed by Chow and Liu [7].

C. BMDA

Pelikan and Mühlenbein proposed an algorithm called BMDA (Bivariate Marginal Distribution Algorithm) [4], which uses a factorization of the joint probability distribution. It is based on the construction of a dependency graph, which is always acyclic but does not have to be a connected graph. BMDA adds dependencies to the graph using the greatest dependency between any of the previously incorporated variables and the set of not-yet-added variables.

978-1-4244-2959-2/09/$25.00 ©2009 IEEE
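The greedy graph construction described for BMDA can be sketched as follows. This is an illustrative reconstruction, assuming binary variables and Pearson's chi-square statistic as the pairwise dependency measure; it omits the independence threshold that lets the real algorithm leave the graph disconnected, and the starting variable is chosen arbitrarily.

```python
def chi_square(x, y):
    # Pearson chi-square statistic between two binary columns,
    # used here as the measure of pairwise dependency.
    n = len(x)
    stat = 0.0
    for a in (0, 1):
        for b in (0, 1):
            observed = sum(1 for xi, yi in zip(x, y) if xi == a and yi == b)
            expected = sum(1 for xi in x if xi == a) * \
                       sum(1 for yi in y if yi == b) / n
            if expected > 0:
                stat += (observed - expected) ** 2 / expected
    return stat

def bmda_graph(samples):
    # Greedily build an acyclic dependency graph: repeatedly connect
    # the not-yet-added variable with the greatest dependency on any
    # already-added variable. Each new edge joins the added set to the
    # remaining set, so the result is always a forest (acyclic).
    cols = list(zip(*samples))        # one tuple per variable
    added = {0}                       # arbitrary starting variable
    remaining = set(range(1, len(cols)))
    edges = []
    while remaining:
        u, v, best = None, None, -1.0
        for i in added:
            for j in remaining:
                s = chi_square(cols[i], cols[j])
                if s > best:
                    u, v, best = i, j, s
        edges.append((u, v))
        added.add(v)
        remaining.discard(v)
    return edges
```

On a sample where the second variable copies the first while the third varies independently, the first edge connects variables 0 and 1, reflecting their strong dependency.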