Abstract: Rough set theory (RST) is a relatively new mathematical theory used in the discovery of data dependencies, evaluation of the significance of attributes and objects, reduction of data, and generation of meaningful rules from large databases. In this paper, a rough set approach is used for the generation of reducts and classification rules. Attribute reduction is an important process in knowledge discovery. This paper proposes a hybridized attribute reduction algorithm that handles inconsistent data, based on the concept of attribute frequency in the binary discernibility matrix. The information system is first checked for inconsistencies and then simplified using an Inconsistency Removal algorithm that finds equivalence classes. The simplified decision table is used to compute an approximate reduct, from which rules are extracted from the database. The results are explained with the help of an example. MATLAB-based simulation results are shown for various databases of the UCI Machine Learning Repository. In addition, the accuracy of rough set reduct generation is verified with the RSES software. The study shows that rough set theory is a useful tool for inductive learning and a valuable aid for building expert systems that mimic human reasoning.

Keywords: rough set theory; binary discernibility matrix; reduct; inconsistent decision table; rules; classification

INTRODUCTION

Automated knowledge discovery is the need of the hour, as the volume of data is growing exponentially, making manual analysis an extremely difficult task. Rough set theory (RST), developed by Z. Pawlak [1-4], is a powerful soft computing tool for extracting meaningful patterns from vague, imprecise, inconsistent and large collections of data. It has been successfully applied in the fields of expert systems, machine learning, pattern classification, artificial intelligence, and knowledge discovery in databases. RST is a relatively new area of mathematics for handling uncertainty, with a close relationship to fuzzy set theory.
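The inconsistency check mentioned in the abstract can be illustrated with a minimal sketch (in Python rather than the paper's MATLAB implementation; the toy decision table and function name are hypothetical, not taken from the paper): two objects are inconsistent when they agree on all condition attributes but carry different decisions.

```python
from collections import defaultdict

# Toy decision table: each row is (condition attribute values, decision).
# The data here is purely illustrative.
table = [
    ((1, 0, 1), 'yes'),
    ((1, 0, 1), 'no'),   # same conditions as row 0, different decision
    ((0, 1, 0), 'no'),
    ((0, 1, 1), 'yes'),
]

def inconsistent_objects(table):
    """Return indices of objects whose equivalence class (same condition
    values) maps to more than one decision, i.e. inconsistent objects."""
    decisions = defaultdict(set)   # condition tuple -> set of decisions seen
    members = defaultdict(list)    # condition tuple -> object indices
    for i, (cond, dec) in enumerate(table):
        decisions[cond].add(dec)
        members[cond].append(i)
    bad = []
    for cond, decs in decisions.items():
        if len(decs) > 1:          # class is not decision-consistent
            bad.extend(members[cond])
    return sorted(bad)

print(inconsistent_objects(table))  # -> [0, 1]
```

An Inconsistency Removal step, as described in the paper, would then resolve or discard such clashing objects before reduct computation; the exact resolution policy is the paper's own and is not reproduced here.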
Rough sets and fuzzy sets are complementary generalizations of classical sets. However, rough set analysis requires no external parameters and uses only the information present in the given data, in contrast to other intelligent methods such as fuzzy set theory, the Dempster-Shafer theory, or statistical methods. It classifies the given knowledge base approximately into suitable decision classes by removing irrelevant and redundant data using an attribute reduction algorithm. A database is typically characterized by many superfluous attributes that are not required for rule discovery. These redundant attributes have to be removed so as to improve both the time complexity and the quality of the rules generated. Knowledge reduction plays an important role in decision support, fault diagnosis and classification problems. It is an important pre-processing step in data mining which aims at finding the core attributes, so that the search space is reduced and efficiency is enhanced while the same classification ability is still achieved. Attribute reduction remains an active research topic because the problem of finding a minimal attribute reduct is NP-hard. The literature [6-10] describes a variety of attribute reduction algorithms based on the discernibility matrix, positive region, entropy, genetic algorithms, and hybridizations of rough sets with fuzzy sets, neural networks, etc. These reducts are used for generating meaningful rules which aid in classification.

The discernibility matrix, proposed by the Polish mathematician Skowron [5], is a simple and intuitive tool for attribute reduction. Objects belonging to different classes are discernible when their feature values differ, and a matrix is created to store these interclass feature differences. This paper presents the current status of work on attribute reduction using the discernibility matrix, and addresses the major drawback of discernibility-based algorithms, namely their time and space complexity.
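The binary discernibility matrix described above can be sketched as follows (an illustrative Python sketch, not the paper's MATLAB code; the toy table is hypothetical). Each row corresponds to a pair of objects with different decisions, and bit j is 1 when attribute j distinguishes the pair; summing each column gives the attribute frequency that the paper's hybrid algorithm uses to guide reduct construction.

```python
from itertools import combinations

# Hypothetical consistent decision table: condition values and decisions.
conds = [(1, 0, 1), (0, 1, 1), (0, 0, 0)]
decs  = ['yes', 'no', 'no']

def binary_discernibility(conds, decs):
    """Build one binary row per pair of objects with different decisions;
    bit j = 1 when attribute j takes different values for the pair."""
    rows = []
    for i, j in combinations(range(len(conds)), 2):
        if decs[i] != decs[j]:   # only interclass pairs need discerning
            rows.append(tuple(int(a != b) for a, b in zip(conds[i], conds[j])))
    return rows

matrix = binary_discernibility(conds, decs)
# Attribute frequency: how often each attribute discerns an interclass pair.
freq = [sum(col) for col in zip(*matrix)]
print(matrix)  # -> [(1, 1, 0), (1, 0, 1)]
print(freq)    # -> [2, 1, 1]
```

In this toy run, attribute 0 discerns both interclass pairs, so a frequency-guided heuristic would pick it first; how frequencies are combined with other criteria is specific to the paper's algorithm and not reproduced here.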
This paper focuses on an attribute reduction and rule generation algorithm based on Skowron's discernibility matrix concept. Section 2 presents rough set preliminaries, and section 3 describes the reduct and rule generation algorithm with its time complexity. In section 4, the proposed algorithm is worked out on a simple example. In section 5, MATLAB simulation results are shown, and section 6 concludes the paper.

ROUGH SET PRELIMINARIES

Rough set theory was proposed by Z. Pawlak (1982). It is an effective tool for mining deterministic rules from a database. The rough set philosophy [6-7] is founded on the assumption that some information (knowledge) is associated with every object of the universe of discourse. The motto of rough set theory is "Let the data speak for themselves". Objects characterized by the same information are indiscernible (similar) in view of the available information about them. The indiscernibility relation generated in this way is the mathematical basis of rough set theory. Any set of all indiscernible (similar) objects is called an elementary set (neighbourhood) and forms a basic granule (atom) of knowledge about the universe. Any union of elementary sets is referred to as a crisp (precise) set; otherwise, the set is rough (imprecise, vague). Some rough set related terms are presented below.

ATTRIBUTE REDUCTION ALGORITHM FOR INCONSISTENT INFORMATION SYSTEM USING ROUGH SET THEORY
Kanchan Shailendra Tiwari 1, E & TC Dept., MESCOE, Pune, India, Kanchan.s.tiwari@gmail.com
Ashwin G. Kothari (Guide) 2, Electronics Dept., V.N.I.T., Nagpur, India, agkothari72@redffmail.com
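The preliminaries above (indiscernibility, elementary sets, crisp versus rough sets) can be illustrated with a minimal sketch; the information system below is hypothetical and the lower/upper approximations follow Pawlak's standard definitions rather than anything specific to this paper.

```python
from collections import defaultdict

# Hypothetical information system: object -> values of the chosen attributes.
objects = {
    'x1': (1, 0), 'x2': (1, 0), 'x3': (0, 1), 'x4': (0, 1), 'x5': (1, 1),
}

def elementary_sets(objects):
    """Partition the universe into elementary sets (equivalence classes of
    the indiscernibility relation: objects with identical attribute values)."""
    classes = defaultdict(set)
    for name, vals in objects.items():
        classes[vals].add(name)
    return list(classes.values())

def approximations(objects, target):
    """Lower approximation: union of classes fully inside the target set.
    Upper approximation: union of classes that intersect the target set."""
    lower, upper = set(), set()
    for cls in elementary_sets(objects):
        if cls <= target:
            lower |= cls
        if cls & target:
            upper |= cls
    return lower, upper

target = {'x1', 'x2', 'x3'}  # the concept we try to describe
low, up = approximations(objects, target)
print(sorted(low))  # -> ['x1', 'x2']
print(sorted(up))   # -> ['x1', 'x2', 'x3', 'x4']
```

Here the target set is rough: its lower and upper approximations differ because the elementary set {x3, x4} straddles the concept boundary. A target that is a union of whole elementary sets would be crisp, with equal approximations.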