A Flexible Classifier Design Framework Based on Multi-Objective Programming

Sibel Yaman, Student Member, IEEE, and Chin-Hui Lee, Fellow, IEEE

Abstract— We propose a multi-objective programming (MOP) framework for finding compromise solutions that are satisfactory for each of multiple competing performance criteria in a pattern classification task. The fundamental idea of our formulation of classifier learning, which we refer to as iterative constrained optimization (ICO), revolves around improving one objective while allowing the rest to degrade. This is achieved by optimizing individual objectives with proper constraints on the remaining competing objectives. The constraint bounds are adjusted based on the objective function values obtained in the most recent iteration. An aggregated utility function is used to evaluate the acceptability of local changes in the competing criteria, i.e., changes from one iteration to the next. Although many MOP approaches developed so far are formal and extensible to a large number of competing objectives, their capabilities have been examined with only two or three objectives. This is mainly because practical problems become significantly harder to manage as the number of objectives grows. We, however, illustrate the proposed framework in the context of automatic language identification (LID) of 12 languages and 3 dialects. This LID task requires the simultaneous minimization of the false-acceptance and false-rejection rates for each of the 15 languages/dialects, and is hence an MOP problem with a total of 30 competing objectives. In our experiments, we observed that the ICO-trained classifiers yield not only reduced error rates but also a good balance among the many competing objectives when compared to classifiers that minimize a single overall objective. We interpret our experimental findings as evidence that ICO offers a greater degree of freedom in classifier design.
Index Terms— pattern recognition, applications of multi-objective programming, automatic language identification.

I. INTRODUCTION

IT has been increasingly recognized that realistic problems often involve a tradeoff among many conflicting goals. Traditional machine learning algorithms aim to satisfy multiple objectives by combining them into a global cost function, which in most cases overlooks the underlying tradeoffs between the conflicting objectives. Such single-objective programming (SOP) approaches guarantee only that the chosen overall objective function is optimized over the training samples. In the first place, it is often not easy to combine all the competing criteria into a single overall objective function. Furthermore, there is no guarantee on the performance of the individual metrics because they are not considered separately. For these reasons, methods of traditional single-objective optimization are often insufficient.

Multi-objective programming (MOP) offers new horizons for solving problems with competing objectives [1]. The mathematical foundations of MOP were laid by Pareto in economics more than a hundred years ago, and numerous methods have been developed since [2], [3]. However, MOP algorithms were not extensively studied in machine learning until the mid-1990s. Three criteria for designing neural networks for system identification were studied in [4] to obtain a proper balance between accuracy and model complexity. For similar purposes, the squared error and the norm of the weight vectors of a neural network were minimized simultaneously in [5]. Evolutionary MOP of support vector machines (SVMs) was considered in [6] to minimize the false-acceptance rate, the false-rejection rate, and the number of support vectors, the latter to reduce model complexity.

S. Yaman and C.-H. Lee are with the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0250 (e-mail: syaman@ece.gatech.edu; chl@ece.gatech.edu).
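The weighted-sum aggregation criticized above can be made concrete with a small sketch of our own (not from the paper): two hypothetical competing objectives are collapsed into one global cost, so the tradeoff between them is fixed in advance by the chosen weight, with no separate guarantee on either objective.

```python
# Illustrative SOP aggregation (our own toy example, not the authors' code):
# two competing quadratic objectives collapsed into a single weighted cost.
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1.0) ** 2   # stand-in for one error criterion
f2 = lambda x: (x[0] + 1.0) ** 2   # stand-in for a competing criterion

w = 0.9  # the weight silently decides the balance between f1 and f2
res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=[0.0])
# For this quadratic cost the optimum is x = 2*w - 1 = 0.8, heavily
# favoring f1; f2 receives no individual performance guarantee.
```

The point of the sketch is that once `w` is fixed, the balance among the criteria is fixed too, which is precisely the rigidity that motivates MOP approaches.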
These approaches are illustrated with only two or three competing objectives, and what happens with more objectives remains to be explored. In [7], the authors developed a family of SVMs using goal programming (GP). GP is a branch of MOP in which deviations of the objectives from pre-selected target levels are minimized. As a shortcoming, the selection of target levels for the objectives requires prior knowledge about the problem. In [8], the generation of neural networks based on receiver operating characteristic (ROC) analysis was investigated using an evolutionary algorithm. Evolutionary algorithms are meta-heuristic optimization algorithms, which often lack mathematical analysis.

In this paper, we develop an analytical MOP framework, called Iterative Constrained Optimization (ICO), for finding the best compromise solutions among as many as 30 competing performance criteria. Our approach is inspired by the following observations:

- We require each objective function to attain a satisfactory level.
- We want the flexibility to achieve different levels of tradeoff.
- It is hard to determine a realistic overall objective function a priori.
- One objective tends to dominate in an SOP problem even when an overall objective function is realistically determined.

MOP methods fall mainly into two major categories, in which the original MOP problem is converted into an SOP problem either by aggregating the objective functions into an overall objective function or by reformulating the problem with proper constraints [1]. In ICO, each of the objectives is iteratively optimized, one after another, with constraints on the others, and the constraint bounds are adjusted using the objective function values attained in the most recent iteration. It then becomes possible to trade off performance on the already-good objectives to improve the remaining not-so-good objectives. By doing so, a better balance among many competing objectives can be achieved.
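The iterative scheme just described can be sketched as follows. This is a minimal illustration of our own, not the paper's implementation: the objective functions, the slack parameter `eps`, and the use of SLSQP are all assumptions made for the example; the paper's aggregated utility test for accepting local changes is omitted.

```python
# Hypothetical sketch of an ICO-style loop: at each step, minimize one
# objective f_k subject to every other objective f_j staying below a
# bound derived from its value at the most recent iterate (plus slack).
import numpy as np
from scipy.optimize import minimize

def ico(objectives, x0, eps=0.05, n_sweeps=3):
    """objectives: list of callables f_j(x); eps: allowed degradation."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_sweeps):
        for k, f_k in enumerate(objectives):
            # Constraint bounds come from the most recent iterate.
            bounds = [f(x) + eps for f in objectives]
            cons = [
                # SLSQP "ineq" means fun(z) >= 0, i.e. f(z) <= b.
                {"type": "ineq", "fun": (lambda z, f=f, b=b: b - f(z))}
                for j, (f, b) in enumerate(zip(objectives, bounds))
                if j != k
            ]
            res = minimize(f_k, x, method="SLSQP", constraints=cons)
            if res.success:
                x = res.x
    return x

# Two toy competing objectives pulling toward x = 1 and x = -1:
f1 = lambda x: (x[0] - 1.0) ** 2
f2 = lambda x: (x[0] + 1.0) ** 2
x_star = ico([f1, f2], x0=[0.0])
```

For these two symmetric objectives the loop settles near x = 0, where neither criterion is sacrificed for the other, illustrating the "better balance" the text refers to.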