David C. Wyld (Eds): ICCSEA, SPPR, CSIA, WimoA, SCAI - 2013, pp. 431–439, 2013. © CS & IT-CSCP 2013
DOI: 10.5121/csit.2013.3545

Mandar S. Kulkarni, Shankar M. Venkatesan, M. Arunkumar
Philips Research India, Manyata Tech Park, Nagavara, Bangalore 560045, India
{Mandar.Kulkarni, Shankar.Venkatesan, Arunkumar.M}@Philips.com

ABSTRACT

We propose two novel Tensor Voting (TV) based supervised binary classification algorithms for N-Dimensional (N-D) data points. (a) The first algorithm finds an approximation to the hyper-surface that best separates the two given classes in N-D: it finds a set of candidate decision-surface points (using the training data) and then models the decision surface by local planes using N-D TV; test points are then classified based on the local plane equations. (b) The second algorithm defines a class similarity measure for a given test point t: the maximum, over all training points p of a class, of the inner product of the vector from t to p with the tangent at p (computed with TV); t is then assigned the class with the best similarity measure. Our approach is fast, local in nature, and equally valid for different kinds of decision boundaries: we performed several experiments on real and synthetic data to validate it, and compared it with standard classifiers such as kNN and Decision Trees.

1. INTRODUCTION

Classification of data points based on training data is an important and highly researched problem. A number of approaches exist for both labeled (supervised) and unlabeled (unsupervised) training data [16][11]. In high-dimensional domains such as object recognition and data mining, the class decision surfaces are usually non-linear; here, methods such as Support Vector Machines (SVM) [5][6][8][9] are helpful. Yet even SVM, with its theoretical and practical advantages (e.g.
testing complexity proportional only to the number of support vectors, as opposed to the full training set used in k-Nearest Neighbors (kNN)), cannot handle very highly complex decision surfaces, and many modifications of SVM, as well as localized methods such as Local SVM (LSVM) [2], have been proposed to address this. Here we introduce two novel localized binary classification approaches based on the theory of Tensor Voting (TV) [7][1][13][14][15] that can handle complex decision surfaces (controlled by a single scale parameter), and evaluate them experimentally.

In what follows, we propose two TV-based binary classification approaches (TVBC 1 and TVBC 2), which can deal with both linear and non-linear decision boundaries. The first approach, TVBC 1 (Section 2), uses a greedy implementation of Euclidean bipartite minimum-cost matching (with the bipartition induced by the two classes) to identify potential points on the decision boundary, prunes these points, and smooths the boundary with local planes via a call to TV: then during the test
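As a rough illustration of the TVBC 2 similarity measure described in the abstract, the sketch below classifies a test point t by the maximum inner product between the unit vector from t to each training point p and the tangent at p. The tangents here are approximated by local PCA purely as a stand-in for what N-D Tensor Voting would produce; the function names and the neighborhood size k are our own assumptions, not the paper's.

```python
import numpy as np

def local_tangents(X, k=4):
    # Approximate the tangent direction at each training point using the
    # leading principal direction of its k nearest neighbors. This is a
    # simple stand-in for the tangents that Tensor Voting would compute.
    tangents = np.zeros_like(X)
    for i, p in enumerate(X):
        d = np.linalg.norm(X - p, axis=1)
        nbrs = X[np.argsort(d)[1:k + 1]]      # skip the point itself
        centered = nbrs - nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        tangents[i] = vt[0]                   # dominant local direction
    return tangents

def tvbc2_classify(t, X, y, tangents):
    # Class similarity for test point t: the maximum |inner product| of the
    # unit vector from t to a class member p with the tangent at p.
    # Assumes t does not coincide with any training point.
    sims = {}
    for c in np.unique(y):
        Xc, Tc = X[y == c], tangents[y == c]
        v = Xc - t
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        sims[c] = np.max(np.abs(np.sum(v * Tc, axis=1)))
    return max(sims, key=sims.get)            # class with best similarity
```

For a test point lying near one class manifold, the vectors toward that class's points align closely with the local tangents, yielding the larger similarity; this captures the local, tangent-driven flavor of the method without the full TV machinery.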