ISSN (Online): 2320-9801 | ISSN (Print): 2320-9798
International Journal of Innovative Research in Computer and Communication Engineering
(An ISO 3297: 2007 Certified Organization)
Vol. 3, Issue 8, August 2015
Copyright to IJIRCCE. DOI: 10.15680/IJIRCCE.2015.0308022

Knowledge Fusion Technique Using Classifier Ensemble by Different Classification Rules

J. B. Patil, Prof. V. S. Nandedkar
ME Student, Dept. of Computer Engineering, Savitribai Phule Pune University, Pune, Maharashtra, India
Assistant Professor, Dept. of Computer Engineering, Savitribai Phule Pune University, Pune, Maharashtra, India

ABSTRACT: Classification rules extracted from sample data constitute knowledge. When this knowledge is extracted in a distributed way, the resulting rules must be combined, or fused. The task of data fusion is to identify the true values of data items among multiple observed values drawn from sources of varying reliability. In data mining applications, knowledge extraction is often split into subtasks because of memory or run-time limitations, and the locally extracted knowledge must later be consolidated while keeping communication overhead low. Extracting information from multiple data sources and reconciling the values so that the true values can be stored in a central data repository is therefore a problem of vital importance to the database and knowledge management communities. Conventionally, knowledge fusion is done either by combining the classifiers' outputs or by combining their sets of classification rules; in this paper, we introduce a new way of fusing classifiers at the level of the parameters of the classification rules. The focus is on probabilistic generative classifiers that use multinomial distributions for categorical attributes and multivariate normal distributions for continuous ones. These distributions are equipped with hyper-distributions, or second-order distributions, over their parameters.
Such classifiers can then be fused by multiplying the hyper-distributions of their parameters.

KEYWORDS: Knowledge Engineering, Training, Classifier Fusion, Probabilistic Classifier, Knowledge Fusion, Generative Classifier, Bayesian Techniques.

I. INTRODUCTION

Classification is a data mining function that assigns items in a collection to target classes; the goal is to accurately predict the target class for each data case. A classification task begins with a data set in which the class assignments are known. The simplest type of classification problem is binary classification, in which the target attribute has only two possible values. During model building, a classification algorithm finds relationships between the values of the target and the values of the predictors, and different classification algorithms use different techniques for finding these relationships. The relationships are summarized in a model, which is then applied to a different data set in which the class assignments are unknown.

A more detailed analysis of current applied results, however, reveals some puzzling aspects of unlabeled data. Researchers have reported cases where the addition of unlabeled data degraded the performance of the classifiers compared to the case in which unlabeled data is not used. These cases were not specific to one type of data but occurred across several kinds, such as sensory data, computer vision, and text classification. To explain this phenomenon, we began by performing extensive experiments that provide empirical evidence that the degradation of performance is directly related to incorrect modelling assumptions. Here we estimate the parameters of a Naive Bayes classifier with 10 features using the Expectation-Maximization (EM) algorithm with varying numbers of labelled and unlabelled data.

Section 2 briefly reviews the related work. Section 3 introduces the proposed system and framework. Module-wise experimental results are shown in Section 4.
Finally, the conclusion is presented in Section 5.
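The parameter-level fusion rule sketched in the abstract can be illustrated for the multinomial case. The following is a minimal sketch, assuming Dirichlet hyper-distributions over the multinomial parameters (the conjugate choice); since a Dirichlet density is proportional to the product of the parameters raised to alpha minus one, multiplying two Dirichlet densities elementwise yields another Dirichlet with concentration alpha_a + alpha_b - 1. The function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_dirichlet(alpha_a, alpha_b):
    """Multiply two Dirichlet hyper-distributions over the same
    multinomial parameter vector. The (renormalized) product is again
    Dirichlet, with concentration alpha_a + alpha_b - 1 elementwise."""
    return np.asarray(alpha_a, dtype=float) + np.asarray(alpha_b, dtype=float) - 1.0

# Two classifiers trained on disjoint partitions of the data, each
# holding a Dirichlet over the same class-conditional categorical attribute.
alpha_1 = np.array([3.0, 5.0, 2.0])   # prior plus counts from partition 1
alpha_2 = np.array([4.0, 2.0, 6.0])   # prior plus counts from partition 2

fused = fuse_dirichlet(alpha_1, alpha_2)

# Posterior-mean point estimate of the fused multinomial parameters.
theta = fused / fused.sum()
```

In this toy setup, the fused concentration vector simply accumulates the evidence from both partitions, which is why the fusion step needs no access to the original training data and keeps communication overhead low.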