Natarajan Meghanathan, et al. (Eds): SIPM, FCST, ITCA, WSE, ACSIT, CS & IT 06, pp. 349–356, 2012.
© CS & IT-CSCP 2012 DOI: 10.5121/csit.2012.2334
Sachin Bhandari¹ and Dr. Aruna Tiwari²

¹Department of Computer Engineering, SGSITS, Indore, India
er.bhandari04@gmail.com
²Department of Computer Engineering, SGSITS, Indore, India
atiwari@sgsits.ac.in
ABSTRACT
In this paper, Design and Implementation of Binary Neural Network Learning with Fuzzy
Clustering (DIBNNFC) is proposed to classify semi-supervised data. It is based on the
concepts of binary neural networks and geometrical expansion. Parameters are updated
according to the geometrical location of the training samples in the input space, and each
sample in the training set is learned only once. The approach is semi-supervised: the
training samples are semi-labelled, i.e. labels are known for some samples and unknown
for the rest. The method starts with classification, performed using the concept of the ETL
algorithm, during which various classes are formed. The classification separates the
samples into two classes; each class is then treated as a region, and the average of each
region is computed separately. These averages are the centres of the regions and are used
for clustering with the FCM algorithm. Once clustering is complete and the semi-supervised
data have been labelled, all samples are classified by DIBNNFC. The method proposed here
is exhaustively tested on different benchmark datasets, and it is found that as the value of
the training parameter increases, both the number of hidden neurons and the training time
decrease. Results are reported on a real character-recognition dataset and compared with an
existing semi-supervised classifier; the proposed approach, learned with semi-supervised
data, leads to higher classification accuracy.
KEYWORDS
Semi-supervised classification, Geometrical Expansion, Binary Neural Network, Fuzzy C-
means algorithm, ETL algorithm.
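The abstract describes clustering the region centres with the standard fuzzy C-means (FCM) algorithm. As a point of reference, a minimal generic FCM sketch is shown below; this is not the authors' implementation, and the function name `fcm` and its parameters (fuzzifier `m`, tolerance `tol`) are illustrative.

```python
import numpy as np

def fcm(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Generic fuzzy C-means: returns (cluster centres, membership matrix U)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random fuzzy memberships, normalized so each sample's row sums to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    p = 2.0 / (m - 1.0)
    for _ in range(max_iter):
        Um = U ** m
        # Centres are fuzzily weighted means of the samples.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances from every sample to every centre, floored to avoid /0.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)
        # Membership update: u_ik = d_ik^(-p) / sum_j d_ij^(-p).
        U_new = (d ** -p) / np.sum(d ** -p, axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

Taking `U.argmax(axis=1)` gives a hard label for each sample, which is how fuzzy memberships are typically converted into the crisp labels needed for the subsequent classification step.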
1. INTRODUCTION
Recently, the back-propagation learning (BPL) algorithm has been applied to many binary-to-
binary mapping problems [6], [2]. However, since the BPL algorithm searches for a solution in
continuous space, applying it to binary-to-binary mapping problems results in long training
times and inefficient performance. Typically, the BPL algorithm requires an extremely large
number of iterations to obtain even a simple binary-to-binary mapping [3]. Also, with the BPL
algorithm, the number of hidden-layer neurons required to solve a given problem is not known
a priori. Since the numbers of neurons in the input and output layers are determined by the
dimensions of the input and output vectors, respectively, the capabilities of a three-layer
neural network depend on the number of neurons in its hidden layer. Therefore, one of the
most important problems in the application of three-layer neural networks is determining the
necessary number of neurons in the hidden layer. It has been widely recognized that the Stone–
Weierstrass theorem does not give a practical guideline for determining the required number of