Multi-Objective Evolutionary Optimization of Probabilistic Neural Network

Talitha Rubio, Tiantian Zhang, Michael Georgiopoulos and Assem Kaylani
talitharubio@earthlink.net, zhangtt@knights.ucf.edu, michaelg@ucf.edu and akaylani@gmail.com

Abstract

In this paper the major principles needed to effectively design a parameter-less, multi-objective evolutionary algorithm that optimizes a population of probabilistic neural network (PNN) classifier models are articulated; the PNN is an example of an exemplar-based classifier. These design principles are extracted from the experiences, discussed in this paper, that guided the creation of the parameter-less, multi-objective evolutionary algorithm named MO-EPNN (multi-objective evolutionary probabilistic neural network). Furthermore, these design principles are corroborated by the similar principles used in an earlier design of a parameter-less, multi-objective genetic algorithm that optimizes a population of ART (adaptive resonance theory) models, named MO-GART (multi-objective genetically optimized ART); the ART classifier is another example of an exemplar-based classifier. MO-EPNN's performance is compared with that of other popular classifier models, such as SVM (Support Vector Machines) and CART (Classification and Regression Trees), as well as with an alternate, competitive method for genetically optimizing the PNN. These comparisons indicate that MO-EPNN compares favorably to the aforementioned classifier models, and to the alternate genetically optimized PNN approach, in terms of both generalization on unseen data and network size. The good performance of MO-EPNN, together with the earlier reported good performance of MO-GART, both of whose designs rely on the same principles, lends credence to the design principles delineated in this paper.
Keywords: Exemplar-Based Classifiers, Neural Networks, Multi-Objective Optimization, Multi-Objective Evolutionary Algorithms, Probabilistic Neural Network

1 Introduction

Exemplar-based classifiers (EBCs) are pattern recognizers that encode their accumulated evidence with the use of exemplars. The exemplars in EBCs are formed by clustering training patterns associated with the same class label. In essence, these classifiers use exemplars to summarize the training data belonging to each class and then apply a similarity or proximity measure to classify a previously unseen test pattern. Examples of EBCs, in the domain of neural network classifiers, include the ART classifiers (Fuzzy ARTMAP (FAM) [4], Ellipsoidal ARTMAP (EAM) [1], Gaussian ARTMAP (GAM) [29]), Radial Basis Function Neural Networks (RBFNNs) [17], and PNNs [23]. Other examples of EBCs are the K-Nearest Neighbor classifier [8] and the Parzen Window Classifier [19].

The Probabilistic Neural Network (PNN) [23] is a well-known EBC introduced by Specht in 1990; it is in essence a Bayesian classifier that estimates the class-conditional probability densities using Parzen's approach. The distinctive advantages of PNNs are [25]: 1) training is trivial and nearly instantaneous, much faster than backpropagation; 2) the inherently parallel network structure is flexible; 3) the decision surfaces are guaranteed to approach the Bayes-optimal boundaries; 4) the complexity of the decision boundary's shape varies with the choice of smoothing parameters; and 5) the network works even with sparse samples. Consequently, PNNs have been widely used to solve pattern recognition and classification problems (e.g., [28], [22], [11], [5], [9]).

There is an outstanding issue related to PNN network construction: the selection of representative nodes and the corresponding smoothing parameters. Typically, all the training points are chosen to be
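To make the PNN decision rule concrete, the following is a minimal sketch (not the paper's MO-EPNN method) of Specht-style PNN classification: each class-conditional density is estimated with a Parzen window using Gaussian kernels centered on the training exemplars, and the test pattern is assigned to the class with the largest estimate. The function name `pnn_predict`, the shared smoothing parameter `sigma`, and the equal-priors assumption are illustrative choices, not taken from the source.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Classify pattern x via Parzen-window (Gaussian kernel) estimates
    of each class-conditional density; sigma is the smoothing parameter
    shared by all pattern nodes (an illustrative simplification)."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]                 # exemplars of class c
        d2 = np.sum((Xc - x) ** 2, axis=1)         # squared distances to x
        # Average of Gaussian kernel responses = Parzen density estimate
        # (up to a normalizing constant common to all classes).
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    # Bayes decision under equal class priors: pick the largest estimate.
    return max(scores, key=scores.get)

# Toy data: two well-separated classes in the plane.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(pnn_predict(X, y, np.array([0.05, 0.1])))  # -> 0
print(pnn_predict(X, y, np.array([0.95, 1.05])))  # -> 1
```

Note that "training" here is just storing the exemplars, which is what makes PNN training trivial and instantaneous; all the work happens at prediction time, and the choice of `sigma` controls how smooth the resulting decision boundary is.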