International Journal of Computer Trends and Technology (IJCTT) Volume 21 Number 3 Mar 2015 ISSN: 2231-2803 http://www.ijcttjournal.org

Empirical Evaluation of Classifiers' Performance Using Data Mining Algorithm

Sanjay Kumar Sen 1, Dr. Sujata Dash 2
1 Asst. Professor, Orissa Engineering College, Bhubaneswar, Odisha, India.
2 Reader, PG Dept. of Computer Application, North Orissa University, Baripada, Odisha, India.

Abstract
The field of data mining and knowledge discovery in databases (KDD) has been growing in leaps and bounds, and has shown great potential for the future [10]. Data classification is an important task in the KDD process, with several potential applications. The performance of a classifier depends strongly on the learning algorithm. In this paper, we describe our experiments on data classification using several classification models. We tabulate the experimental results and present a comparative analysis thereof.

Keywords- Knowledge discovery in databases, classifier, data classification.

Introduction

WEKA Tool
We use WEKA (www.cs.waikato.ac.nz/ml/weka/), an open-source data mining tool, for our experiments. WEKA was developed at the University of Waikato in New Zealand and implements data mining algorithms in the Java language. It is a state-of-the-art workbench for developing machine learning (ML) techniques and applying them to real-world data mining problems. It is a collection of machine learning algorithms for data mining tasks, and the algorithms can be applied directly to a dataset. WEKA implements algorithms for data pre-processing, feature reduction, classification, regression, clustering, and association rules, and it also includes visualization tools. New machine learning algorithms can be added to the tool, and existing algorithms can be extended within it.
Classifier Selection
We select five commonly used classifiers for the classification task in our work, based on their qualitative performance. These classifiers are described in this section, and their WEKA names are given in Table-3.1.

K-Nearest Neighbour: This classifier is considered a statistical learning algorithm; it is extremely simple to implement and lends itself to a wide variety of variations. In brief, the training phase of a nearest-neighbour classifier does little more than store the data points presented to it. When asked to make a prediction about an unknown point, the classifier finds the training point closest to the unknown point, according to some distance metric, and predicts the category of that training point. For numerical attributes, the distance metric used in nearest-neighbour methods can be simple Euclidean distance.

Decision Tree: A decision tree partitions the input space of a dataset into mutually exclusive regions, each of which is assigned a label, a value, or an action to characterize its data points. The decision tree mechanism is transparent, and we can easily follow the tree structure to see how a decision is made. A decision tree is a tree structure consisting of internal and external nodes connected by branches. An internal node is a decision-making unit that evaluates a decision function to determine which child node to visit next. An external node, on the other hand, has no child nodes and is associated with a label or value that
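The nearest-neighbour procedure described above can be sketched in a few lines of plain Python. This is a minimal illustration only; the class name, the toy data points, and their labels are hypothetical and are not taken from our experimental setup or from WEKA's implementation (WEKA's nearest-neighbour classifier is IBk).

```python
import math

class NearestNeighbour:
    """Minimal 1-nearest-neighbour classifier using Euclidean distance."""

    def fit(self, points, labels):
        # "Training" does little more than store the data points presented to it.
        self.points = list(points)
        self.labels = list(labels)
        return self

    def predict(self, query):
        # Find the closest stored training point by Euclidean distance
        # and predict the category of that training point.
        def dist(p):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, query)))

        i = min(range(len(self.points)), key=lambda k: dist(self.points[k]))
        return self.labels[i]

# Toy usage: (4, 4) is closest to the stored point (5, 5), labelled "B".
clf = NearestNeighbour().fit([(0, 0), (0, 1), (5, 5)], ["A", "A", "B"])
print(clf.predict((4, 4)))  # prints "B"
```

Variations of the method differ mainly in the distance metric chosen and in how many neighbours (k) vote on the predicted category.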