Engineering Applications of Artificial Intelligence
journal homepage: www.elsevier.com/locate/engappai

Prototype selection to improve monotonic nearest neighbor

José-Ramón Cano a,*, Naif R. Aljohani b, Rabeeh Ayaz Abbasi b, Jalal S. Alowidbi c, Salvador García d

a Department of Computer Science, EPS of Linares, University of Jaén, Campus Científico Tecnológico de Linares, Cinturón Sur S/N, Linares 23700, Jaén, Spain
b Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
c Faculty of Computing and Information Technology, University of Jeddah, Jeddah, Saudi Arabia
d Department of Computer Science and Artificial Intelligence, CITIC-UGR (Research Center on Information and Communications Technology), University of Granada, ETSII, Calle Periodista Daniel Saucedo Aranda S/N, Granada 18071, Spain

ARTICLE INFO

Keywords: Monotonic classification; Prototype selection; Monotone nearest neighbor; Data reduction; Opinion surveys

ABSTRACT

Student surveys occupy a central place in the evaluation of courses at teaching institutions. At the end of each course, students are asked to evaluate various aspects such as the activities, methodology, coordination or resources used. In addition, a final qualification is given to summarize the quality of the course. The prediction of this final qualification can be accomplished by using monotonic classification techniques. The outcome offered by these surveys is particularly significant for the faculty and teaching staff associated with the course. The monotonic nearest neighbor classifier is one of the most relevant algorithms in monotonic classification. However, it suffers from two drawbacks: (a) inefficient execution time in classification and (b) sensitivity to non-monotonic examples. Prototype selection is a data reduction process for classification based on the nearest neighbor rule that can be used to alleviate these problems.
This paper proposes a prototype selection algorithm called the Monotonic Iterative Prototype Selection (MONIPS) algorithm. Our objective is two-fold: first, to introduce MONIPS as a method for obtaining monotonic solutions, showing that it is competitive with classical prototype selection solutions adapted to the monotonic domain; and second, to demonstrate the good performance of MONIPS in the context of a student survey about taught courses.

1. Introduction

Classification refers to the problem of predicting the value of a target variable by building a model based on relevant independent input variables (Witten et al., 2011). In monotonic classification, the data come from ordered domains (Potharst et al., 2009; Gutiérrez et al., 2016); thus, the variable domain is ordered, and the target variable is assumed to be a monotone function of the describing independent input variables. In addition, the predictions must satisfy monotonicity, as indicated in Kotlowski and Slowinski (2013), Gutiérrez et al. (2013), Sánchez-Monedero et al. (2014) and Gutiérrez and García (2016).

The evaluation of teaching courses based on surveys gathering students' opinions can be categorized as a monotonic classification problem if it intends to predict a final qualification that summarizes the general quality of the course. The students are asked to evaluate each course according to several aspects related to its interest, the achievement of appropriate class participation, the teaching resources, the capabilities of the teacher, etc.

The Monotonic Nearest Neighbor (MNN) classifier is one of the most relevant algorithms for solving monotonic classification (Duivesteijn and Feelders, 2008). MNN is a nonparametric classifier which uses the entire input data set to establish the monotonic classification rule.
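To illustrate the kind of constraint MNN enforces, the following is a minimal sketch (our own illustration, not code from the paper) assuming numeric, componentwise-ordered features and integer class labels: the plain nearest-neighbour label is clipped into the interval of labels that the monotonicity constraint allows.

```python
import numpy as np

def dominates(a, b):
    """True if a >= b in every feature (componentwise order)."""
    return np.all(a >= b)

def monotonic_nn_predict(X_train, y_train, x):
    """Sketch of a monotone nearest-neighbour prediction.

    Monotonicity requires the prediction to be at least the label of
    every training example the query dominates, and at most the label
    of every training example that dominates the query.
    """
    lower = max((y for xi, y in zip(X_train, y_train) if dominates(x, xi)),
                default=min(y_train))
    upper = min((y for xi, y in zip(X_train, y_train) if dominates(xi, x)),
                default=max(y_train))
    if lower > upper:  # can happen when the training data is not monotone
        lower, upper = upper, lower
    # Plain (unconstrained) nearest neighbour by Euclidean distance.
    dists = np.linalg.norm(X_train - x, axis=1)
    nn_label = y_train[int(np.argmin(dists))]
    # Clip the NN label into the monotone interval.
    return int(np.clip(nn_label, lower, upper))
```

For instance, with training points (1,1) labelled 0, (3,3) labelled 1 and (5,5) labelled 2, a query at (4,4) receives label 1: the prediction is forced to lie between the labels of the examples the query dominates and of those that dominate it. Since every query is compared against the entire training set, the sketch also makes the two drawbacks discussed below visible: the prediction cost grows with the data set size, and non-monotone examples distort the interval [lower, upper].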
Thus, the effectiveness of the classification process performed by MNN depends strongly on the quality of the training data, as in the case of the classical nearest neighbor classification algorithm (Derrac et al., 2014). The main drawbacks of MNN are its inefficient execution time when making a prediction and its low noise tolerance (García et al., 2012). Amongst the most effective techniques for addressing these problems are those that work by preprocessing the data (Cano et al., 2003; García et al., 2015), instead of modifying the computation of the NN rule (the MNN rule in this case). Within data preprocessing, data reduction is widely used. By removing irrelevant data, data reduction avoids excessive storage, reduces the execution time of the algorithms, and eases and enables classification techniques to deal with noisy data sets (Cano et al., 2008; García et al., 2008). One of the data reduction techniques

http://dx.doi.org/10.1016/j.engappai.2017.02.006
Received 16 May 2016; Received in revised form 30 December 2016; Accepted 9 February 2017
* Corresponding author.
E-mail addresses: jrcano@ujaen.es (J.-R. Cano), nraljohanig@kau.edu.sa (N.R. Aljohani), frabbasi@kau.edu.sa (R.A. Abbasi), jalowibdi@uj.edu.sa (J.S. Alowidbi), salvagl@decsai.ugr.es (S. García).
Engineering Applications of Artificial Intelligence 60 (2017) 128–135
0952-1976/ © 2017 Elsevier Ltd. All rights reserved.