IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 14, NO. 5, OCTOBER 2010

The r-Dominance: A New Dominance Relation for Interactive Evolutionary Multicriteria Decision Making

Lamjed Ben Said, Slim Bechikh, and Khaled Ghédira

Abstract—Evolutionary multiobjective optimization (EMO) methodologies have gained popularity in finding a representative set of Pareto optimal solutions over the past decade and beyond. Several techniques have been proposed in the specialized literature to ensure good convergence and diversity of the obtained solutions. However, in real-world applications, the decision maker is not interested in the overall Pareto optimal front, since the final decision is a unique solution. Recently, there has been an increased emphasis on addressing the decision-making task when searching for the most preferred alternatives. In this paper, we introduce a new variant of the Pareto dominance relation, called r-dominance, which has the ability to create a strict partial order among Pareto-equivalent solutions. This makes such a relation able to guide the search toward the interesting parts of the Pareto optimal region based on the decision maker's preferences, expressed as a set of aspiration levels. After integrating the new dominance relation into the NSGA-II methodology, the efficacy and usefulness of the modified procedure are assessed on two- to ten-objective test problems, both a priori and interactively. Moreover, the proposed approach provides competitive and even better results when compared to other recently proposed preference-based EMO approaches.

Index Terms—Decision maker's preferences, evolutionary algorithms, interactive multiobjective optimization, Pareto dominance, reference point method.

I. Introduction

MOST real-world problems usually involve several incommensurable and conflicting objectives to optimize under certain constraints.
Consequently, there is no single solution that simultaneously optimizes each objective to its fullest; instead, we look for a set of trade-off solutions. Such problems are termed multicriteria, multiattribute, or multiobjective optimization problems (MOPs) [1]. This kind of problem has received considerable attention in Operations Research [2]. Over the last two decades and beyond, evolutionary algorithms (EAs) have gained popularity in solving MOPs for two main reasons: 1) EAs are able to provide a set of compromise solutions as output in a single run; and 2) EAs are insensitive to the shape of the objective functions, i.e., they cope with non-convexity, discontinuity, multimodality, non-uniformity of the search space, and so on [3]. Among the best-known multiobjective evolutionary algorithms (MOEAs), we cite the non-dominated sorting genetic algorithm NSGA-II [4], the strength Pareto approach SPEA2 [5], and the Pareto archived evolution strategy [6], which have shown very good results in approximating the whole Pareto set for different continuous problems (e.g., the bi-objective Zitzler, Deb, and Thiele (ZDT) suite [7] and the scalable Deb, Thiele, Laumanns, and Zitzler (DTLZ) suite [8]) and different combinatorial problems (e.g., the multiobjective knapsack problem [9]).

Manuscript received June 3, 2009; revised October 4, 2009 and November 27, 2009. Date of publication April 22, 2010; date of current version October 1, 2010. This work was supported by the Intelligent Information Engineering Laboratory (LI3), High Institute of Management of Tunis, University of Tunis, Tunis, Tunisia. The authors are with the Intelligent Information Engineering Laboratory (LI3), High Institute of Management of Tunis, University of Tunis, Tunis 2000, Tunisia (e-mail: lamjed.bensaid@isg.rnu.tn; slim.bechikh@gmail.com; khaled.ghedira@isg.rnu.tn). Digital Object Identifier 10.1109/TEVC.2010.2041060.
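The notion of trade-off (Pareto optimal) solutions discussed above can be made concrete with a short sketch. The following Python fragment, which is illustrative and not taken from the paper, implements the standard Pareto dominance test for minimization problems and a naive O(n²) filter that extracts the non-dominated set from a list of objective vectors:

```python
# Illustrative sketch (not from the paper): classic Pareto dominance for
# minimization, plus a naive filter extracting the non-dominated set.
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Keep only the points not dominated by any other point (O(n^2) scan)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_front(pts))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

Note that `dominates` induces only a partial order: (1.0, 4.0) and (4.0, 1.0) are mutually non-dominated, which is exactly why a MOEA returns a set of compromise solutions rather than a single optimum.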
The final goal of MOEAs is to assist the decision maker (DM) in selecting the final solution that best matches his/her preferences. Since MOEAs supply the DM with a huge number of solutions, choosing the final preferred alternative is a difficult task. In order to facilitate the decision-making task, the DM would like to incorporate his/her preferences into the search process. These preferences are used to guide the search toward the preferred parts of the Pareto region. Preferences can be integrated in three ways: 1) a priori, where the preferences are injected before the beginning of the search; 2) a posteriori, where the preferences are used after the end of the search to choose the final solution from the supplied set of compromise solutions; and 3) interactively, where the preferences are injected during the search in an interactive manner. Most evolutionary multiobjective optimization (EMO) methodologies belong to the a posteriori family. Supplying the DM with a very large number of alternatives makes the decision-making task very difficult. In practice, however, the DM is not searching for the whole Pareto optimal region, but rather wishes to explore only the subset of the Pareto set which is relevant to him/her. This preferred part of the Pareto region is called the region of interest (ROI) [10] and is illustrated in Fig. 1. The ROI is defined as the preferred part of the Pareto optimal region from the DM's perspective. When the DM's preferences are integrated a priori, they guide the MOEA toward the ROI. However, it is often difficult for the DM to express his/her preferences a priori. Hence, it is interesting to articulate these preferences in an interactive manner. In this way, the DM can learn progressively about the MOP and express his/her preferences interactively. Consequently, the DM can drive the search toward the preferred parts of the Pareto optimal region by exploiting his/her experience and

1089-778X/$26.00 © 2010 IEEE
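To illustrate how a DM's aspiration levels (a reference point) can induce a strict order among Pareto-equivalent solutions, the following Python sketch compares two mutually non-dominated solutions by their weighted Euclidean distance to a reference point, normalized by the distance spread over the current population. This is only a hedged illustration of the general idea behind preference-biased dominance: the weight vector `w`, the threshold `delta`, and the normalization scheme here are assumptions made for the example, not the paper's exact definition of r-dominance.

```python
# Hedged sketch: biasing comparisons among Pareto-equivalent solutions
# toward a DM-supplied reference point g (minimization). The weighting,
# threshold, and normalization are illustrative assumptions.
import math
from typing import Sequence, List

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """Standard Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def weighted_dist(x: Sequence[float], g: Sequence[float],
                  w: Sequence[float]) -> float:
    """Weighted Euclidean distance from objective vector x to reference point g."""
    return math.sqrt(sum(wi * (xi - gi) ** 2 for xi, gi, wi in zip(x, g, w)))

def pref_dominates(x, y, population: List[Sequence[float]],
                   g, w, delta: float) -> bool:
    """x beats y if it Pareto-dominates y, or if the two are mutually
    non-dominated and x is sufficiently closer to g than y, relative to
    the distance spread over the current population."""
    if dominates(x, y):
        return True
    if dominates(y, x):
        return False
    dists = [weighted_dist(p, g, w) for p in population]
    spread = (max(dists) - min(dists)) or 1.0  # avoid division by zero
    return (weighted_dist(x, g, w) - weighted_dist(y, g, w)) / spread < -delta

pop = [(0.0, 1.0), (1.0, 0.0), (0.5, 0.5)]
g, w = (0.0, 0.0), (1.0, 1.0)
# (0.5, 0.5) and (0.0, 1.0) are Pareto-equivalent, but the former is
# closer to the reference point, so it wins under the biased relation:
print(pref_dominates((0.5, 0.5), (0.0, 1.0), pop, g, w, 0.1))  # True
```

The key property this sketch demonstrates is the one the abstract claims for r-dominance: among solutions the plain Pareto relation cannot distinguish, the reference point creates a strict preference, which is what lets a modified NSGA-II converge to the ROI instead of the whole front.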