International Journal of Engineering and Technical Research (IJETR) ISSN: 2321-0869, Volume-2, Issue-9, September 2014, p. 207, www.erpublication.org

Abstract - Many researchers have investigated the technique of combining the predictions of multiple classifiers to produce a single classifier. The resulting classifier is often more accurate than any individual classifier. This paper investigates the ability of ensemble methods to improve the performance of the basic J48 machine learning algorithm. Ensemble algorithms such as Bagging, Boosting and Blending improved the discrimination between sonar signals bounced off a roughly cylindrical rock and those bounced off a metal cylinder in the SONAR dataset. The ranking and standard deviation functionalities provided by the WEKA Experimenter help to determine the effectiveness of a classifier model.

Index Terms - WEKA, SONAR, Bagging, Boosting, Blending.

I. INTRODUCTION

The decision tree is one of the classifying and predicting data mining techniques, belonging to inductive learning and supervised knowledge mining. Because it generates easy-to-interpret If-Then decision rules, it has become one of the most widely applied techniques among the numerous classification methods [1]. A decision tree is a tree-diagram-based method: the node at the top of the tree structure is the root node, and the nodes at the bottom are leaf nodes. A target class attribute is assigned to each leaf node. From the root node to every leaf node there is a path made up of multiple internal nodes, each carrying an attribute; this path generates the rule required for classifying unknown data. Moreover, most decision tree algorithms perform a two-stage task, i.e., tree building and tree pruning. In the tree-building stage, a decision tree algorithm uses its own criterion (function) to select the best attribute on which to split the training data set. This stage ends when the data contained in each split training subset belong to only one target class.
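The attribute-selection step described above can be sketched numerically. The snippet below is an illustrative Python sketch (the toy dataset and function names are invented for this example, and are not from the paper): it scores candidate split attributes by information gain, the entropy-reduction criterion that ID3 builds on and that C4.5 refines into gain ratio.

```python
# Illustrative sketch of information-gain-based attribute selection.
# The toy data and helper names are invented for the example.
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Entropy reduction obtained by splitting on the attribute at attr_index."""
    total = entropy(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in partitions.values())
    return total - remainder

# Toy weather-style data: attribute 0 = outlook, attribute 1 = windy.
rows = [("sunny", "no"), ("sunny", "yes"), ("sunny", "no"),
        ("rain", "no"), ("rain", "yes"), ("rain", "yes")]
labels = ["play", "play", "play", "stay", "play", "stay"]

gain_outlook = information_gain(rows, labels, 0)
gain_windy = information_gain(rows, labels, 1)
# The tree builder would split on whichever attribute yields the larger gain;
# here "outlook" is informative while "windy" is not.
```

On this toy data, splitting on the uninformative attribute leaves the class mixture unchanged (zero gain), so the learner would choose the other attribute as the split point.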
Recursion and repetition of this attribute selection and set splitting complete the construction of the decision tree's root node and internal nodes. On the other hand, some special data in the training set may lead to improper branches in the decision tree structure, a phenomenon called over-fitting. Therefore, after a decision tree has been built, it has to be pruned to remove improper branches, so as to enhance the model's accuracy in predicting new data.

Manuscript received September 18, 2014.
Aakash Tiwari, Computer Engineering, FRCRCE, Bandra, Mumbai University, India.
Aditya Prakash, I.T. Engineering, FRCRCE, Bandra, Mumbai University, India.

Among the decision tree algorithms developed so far, the commonly used ones include ID3 [2], C4.5 [3], CART [4] and CHAID [5]. C4.5 was developed from the ID3 (Iterative Dichotomiser 3) algorithm; it uses information theory and inductive learning to construct the decision tree. C4.5 improves on ID3, which cannot process continuous numeric attributes. J48 is an open-source Java implementation of the C4.5 algorithm in the WEKA data mining tool.

II. ENSEMBLE METHODS

1. BOOSTING - Boosting is an ensemble method that starts with a base classifier prepared on the training data. A second classifier is then created behind it to focus on the instances in the training data that the first classifier got wrong. The process continues to add classifiers until a limit on the number of models, or on accuracy, is reached.

2. BAGGING - Bagging (Bootstrap Aggregating) is an ensemble method that creates separate samples of the training dataset and trains a classifier on each sample. The results of these multiple classifiers are then combined (e.g., averaged or by majority voting). The trick is that each sample of the training dataset is different, giving each trained classifier a subtly different focus and perspective on the problem.
3. BLENDING - Blending is an ensemble method in which multiple different algorithms are prepared on the training data, and a meta-classifier is trained to take the predictions of each base classifier and make accurate predictions on unseen data.

III. WEKA INBUILT ENSEMBLES

A. Boosting

AdaBoostM1 is the class for boosting a nominal-class classifier using the AdaBoost M1 [6] method. Only nominal class problems can be tackled. It often dramatically improves performance, but sometimes overfits. AdaBoost M1 is adaptive in the sense that subsequent weak learners are tweaked in favor of the instances misclassified by previous classifiers.

Path - weka.classifiers.meta.AdaBoostM1

STEPS:
1. Click "Add new…" in the "Algorithms" section.
2. Click the "Choose" button.
3. Click "AdaBoostM1" under the "meta" selection.

Improving classification of J48 algorithm using bagging, boosting and blending ensemble methods on SONAR dataset using WEKA
Aakash Tiwari, Aditya Prakash
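The adaptive re-weighting that AdaBoost M1 performs can be illustrated with a minimal sketch. The following is a simplified pure-Python illustration of the AdaBoost idea, not WEKA's implementation: threshold "stumps" serve as weak learners on invented 1-D data, and instances misclassified by the previous stump have their weights increased before the next stump is fitted.

```python
# Simplified AdaBoost sketch (illustrative; not WEKA's AdaBoostM1 code).
# Weak learner: a threshold stump on 1-D data with labels in {+1, -1}.
from math import log, exp

def best_stump(xs, ys, w):
    """Pick the (threshold, sign) pair with the lowest weighted error."""
    best = None
    for t in xs:
        for sign in (1, -1):
            pred = [sign if x < t else -sign for x in xs]
            err = sum(wi for wi, p, y in zip(w, pred, ys) if p != y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best

def adaboost(xs, ys, rounds=5):
    n = len(xs)
    w = [1.0 / n] * n                       # start with uniform weights
    ensemble = []
    for _ in range(rounds):
        err, t, sign = best_stump(xs, ys, w)
        err = max(err, 1e-10)
        alpha = 0.5 * log((1 - err) / err)  # this stump's vote strength
        pred = [sign if x < t else -sign for x in xs]
        # Re-weight: misclassified instances gain weight, correct ones lose it.
        w = [wi * exp(-alpha * p * y) for wi, p, y in zip(w, pred, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
        ensemble.append((alpha, t, sign))
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps."""
    score = sum(a * (s if x < t else -s) for a, t, s in ensemble)
    return 1 if score >= 0 else -1

# Toy 1-D problem that no single stump can solve perfectly.
xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, -1, -1, 1, 1]
model = adaboost(xs, ys, rounds=5)
```

On this toy problem the best single stump still misclassifies two points, while the weighted committee of five stumps separates the interval pattern correctly, which is the "often dramatically improves performance" behaviour noted above.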
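Bagging, described in Section II, can likewise be sketched in a few lines. This is an invented bare-bones illustration, not WEKA's Bagging meta-classifier: each base model is a 1-nearest-neighbour rule trained on its own bootstrap resample, and the ensemble predicts by majority vote.

```python
# Bare-bones bagging sketch (illustrative; not WEKA's Bagging class).
import random

def bootstrap(data, rng):
    """Sample len(data) items with replacement: some repeat, some are left out."""
    return [rng.choice(data) for _ in data]

def one_nn(train):
    """Return a classifier predicting the label of the nearest training point."""
    def classify(x):
        return min(train, key=lambda pair: abs(pair[0] - x))[1]
    return classify

def bag(data, n_models=11, seed=0):
    """Train one base model per bootstrap sample; combine by majority vote."""
    rng = random.Random(seed)
    models = [one_nn(bootstrap(data, rng)) for _ in range(n_models)]
    def classify(x):
        votes = [m(x) for m in models]
        return max(set(votes), key=votes.count)
    return classify

# Invented toy data loosely echoing the rock/mine labels of the SONAR task.
data = [(0.9, "rock"), (1.0, "rock"), (1.2, "rock"),
        (2.9, "mine"), (3.0, "mine"), (3.1, "mine")]
ensemble = bag(data)
```

Because each bootstrap sample draws with replacement, each base model sees roughly 63% of the distinct training instances, which is what gives every classifier its "subtly different focus and perspective" on the problem.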
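Finally, the blending idea of a meta-classifier learning from base-classifier predictions can be shown with a toy stacking sketch. Everything here is invented for illustration (WEKA exposes this idea through its Stacking meta-classifier): two deliberately imperfect one-rule base models predict on a held-out set, and the meta-learner is a simple lookup table mapping each tuple of base predictions to the majority true label observed for it.

```python
# Toy stacking/blending sketch (illustrative; not WEKA's Stacking code).
from collections import Counter, defaultdict

def stump(threshold, below, above):
    """A fixed one-rule base classifier."""
    return lambda x: below if x < threshold else above

def train_meta(base_models, holdout):
    """Learn a label for each tuple of base predictions (lookup-table meta-learner)."""
    table = defaultdict(Counter)
    for x, y in holdout:
        key = tuple(m(x) for m in base_models)
        table[key][y] += 1
    return {key: counts.most_common(1)[0][0] for key, counts in table.items()}

def blended_predict(base_models, meta, x, default="rock"):
    """Run the base models, then let the learned meta-table decide."""
    key = tuple(m(x) for m in base_models)
    return meta.get(key, default)

# Two imperfect base models and a small invented held-out set.
bases = [stump(2.0, "rock", "mine"), stump(4.0, "rock", "mine")]
holdout = [(1.0, "rock"), (3.0, "mine"), (5.0, "mine")]
meta = train_meta(bases, holdout)
```

The meta-learner effectively discovers which base classifier to trust in which region: where the two stumps disagree, the holdout evidence tells it that the first stump is the reliable one.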