Playing in the Objective Space: Coupled Approximators for Multi-Objective Optimization

Harold Soh*, Ong Yew Soon†, Mohamed Salahuddin*, and Terence Hung*

* Institute of High Performance Computing, 1 Science Park Road, 01-01 The Capricorn, Singapore Science Park II, Singapore 117528. Email: sohsh@ihpc.a-star.edu.sg, mohdsh@ihpc.a-star.edu.sg, terence@ihpc.a-star.edu.sg

† School of Computer Engineering, Nanyang Technological University, Blk N4, 2b-39, Nanyang Avenue, Singapore 639798. Email: asysong@ntu.edu.sg

Abstract— This paper presents a method of integrating computational intelligence and the operators used in evolutionary algorithms. We investigate approximation models of the objective function and its inverse and propose two simple algorithms that use these coupled approximators to optimize multi-objective functions. This method breaks from the traditional approach of standard crossover and mutation operators, which explore the objective space only through "near-blind" manipulation of solutions in the parameter space. Fundamentally, our proposed intelligent operators use learned models of the coupling between the objective space and the parameter space to generate successively better solutions by extrapolating (or interpolating) from known solutions directly in the objective space. We term our implementation of the developed techniques the Coupled Approximators Evolutionary Algorithm (CAEA). Empirical results of tests on scalable benchmark functions from the DTLZ test suite suggest that CAEA is an effective method for generating good solutions within a tight computational budget. These promising results prompt us to suggest several avenues for future research, including combination with local search methods, incorporation of domain knowledge, and more efficient search algorithms.

I.
INTRODUCTION

The Evolutionary Algorithms (EA) arena has shown tremendous activity in the past few years, with advancements being made on both the theoretical and applied fronts. One particular area of interest has been the development of multi-objective evolutionary methods that are capable of optimizing expensive and difficult problems quickly and reliably. Previous work in this area has focused mainly on using inexpensive models of the exact evaluation function. This work explores an alternative approach by merging computational intelligence methods with evolutionary algorithms to derive intelligent operators that generate candidate solutions based on models of the objective function and the function's inverse. We term the models developed coupled approximators and present two algorithms that use these models to explore the objective space.

This paper is organized as follows: the section immediately following this introduction reviews relevant work on multi-objective optimization with evolutionary algorithms, fitness approximation, and competent operators. Section III presents details of our proposed method, i.e., the Coupled Approximators Evolutionary Algorithm (CAEA). In Section IV, we present results and an analysis of simulations with scalable benchmark problems from the DTLZ [1] test suite. Section V concludes this paper with possible future work and conclusions derived from this study.

II. BACKGROUND

In this section, we cover related work in the fields of multi-objective optimization, evolutionary algorithms, and fitness approximation.

A. Multi-objective Optimization with Evolutionary Algorithms

Intuitively, the problem of multi-objective optimization can be viewed as a search through an m-dimensional space for all minimum (or maximum) objective vectors that satisfy the posed constraints. The search is usually performed "indirectly" by varying parameter vectors, also called decision vectors, in a d-dimensional space.
From this point forward, we assume, without loss of generality, minimization problems. We begin by defining the $m$-dimensional fitness space of all feasible solutions. Given an objective function $f(\hat{x}): \mathbb{R}^d \rightarrow \mathbb{R}^m$ with finite range and constraint functions $g(\hat{x}): \mathbb{R}^d \rightarrow \mathbb{R}^k$ and $h(\hat{x}): \mathbb{R}^d \rightarrow \mathbb{R}^l$, the feasible objective space is defined as the set of all vectors in the function's range that satisfy the given constraints,

$$F = \{\hat{f}_i \in \mathbb{R}^m \mid \hat{f}_i = f(\hat{x}_i) \wedge (g(\hat{x}_i) > 0) \wedge (h(\hat{x}_i) = 0)\}.$$

Since our goal is to find the best solutions in a given $F$, it is helpful to consider what it means for one solution to be better than, or to dominate, another. Consider $\hat{f} = (f_1, f_2, \ldots, f_m)$, $\hat{g} = (g_1, g_2, \ldots, g_m) \in F$. $\hat{f}$ is said to dominate $\hat{g}$, denoted $\hat{f} \succ \hat{g}$, iff $\forall i \in \{1, 2, \ldots, m\}: f_i \leq g_i$ and $\exists j \in \{1, 2, \ldots, m\}: f_j < g_j$.
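To make the dominance relation concrete, the following is a minimal Python sketch (an illustration added here, not code from the paper) of the minimization-case check: one objective vector dominates another iff it is no worse in every objective and strictly better in at least one. The function name `dominates` is our own choice for illustration.

```python
from typing import Sequence


def dominates(f: Sequence[float], g: Sequence[float]) -> bool:
    """Return True iff objective vector f Pareto-dominates g
    under minimization: f is no worse than g in every objective
    and strictly better in at least one."""
    assert len(f) == len(g), "objective vectors must have equal dimension"
    no_worse_everywhere = all(fi <= gi for fi, gi in zip(f, g))
    strictly_better_somewhere = any(fi < gi for fi, gi in zip(f, g))
    return no_worse_everywhere and strictly_better_somewhere
```

For example, `dominates([1.0, 2.0], [2.0, 2.0])` holds (equal in the second objective, strictly better in the first), while `dominates([1.0, 3.0], [2.0, 2.0])` does not, since each vector is better in a different objective; such mutually non-dominating vectors are exactly what makes the Pareto front a set rather than a single optimum.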