Majlesi Journal of Multimedia Processing Vol. 4, No. 1, March 2015

Improved One-Class Problem Using Support Vector Data Description

Hooman Kashanian 1, Fatemeh Shobeiri 2, Mohsen Parhizgar 3, Elham Dehghan Niri 4, Saeid Reyhani 5, Hamidreza Ghaffari 6
1- Department of Electronics and Computer Engineering, Islamic Azad University, Ferdows, Iran
Email: kashanian@ferdowsiau.ac.ir
2,3,4,5,6- Artificial Intelligence MA Students, Islamic Azad University, Ferdows, Iran
Email: fatima.shobeiri@gmail.com, en.parhizgar@gmail.com, dehghan.ne@gmail.com, s.reyhani52@yahoo.com

Received: June 2014. Revised: June 2014. Accepted: July 2014.

ABSTRACT: One-class classification is now used extensively to describe a specific type of data by finding a boundary around it. An important method in this regard is support vector data description (SVDD). SVDD uses only positive examples to learn a predictor that decides whether an example is positive or negative. When a fraction of negative examples is available, the performance of SVDD is expected to improve; however, the variant that incorporates negative samples, SVDD-neg, in some cases performs worse than plain SVDD. In a standard SVM, a large number of support vectors typically arises, because every training sample that falls on the wrong side of the boundary becomes a support vector through its slack variable. The proposed method significantly reduces the number of support vectors, because only a small number of training samples on the wrong side of the boundary become support vectors. In this paper, a new algorithm, SVM-SVDD, is proposed, which combines the support vector machine with support vector data description to solve the data description problem when negative samples are available. The experimental results illustrate that SVM-SVDD outperforms SVDD-neg in both training time and accuracy.

KEYWORDS: Support Vector Machine, Support Vector Data Description, One-Class Problem

1. INTRODUCTION
Binary classification is one of the most important problems in machine learning.
In the binary classification problem, two classes of examples, labeled +1 and -1, are provided in the training step. The task is to learn a decision function that predicts the label of an unseen example. Several classification algorithms have been developed for this task, such as SVM [1] and boosting [2]. However, in some applications only examples of one class are provided, together with no or only a few examples from other classes. A decision function is still required to judge whether an example comes from the given class or not: if an example is very different from the given class, we assume with high likelihood that it does not belong to it. Here, an example of the given class is called a "positive example" or "target", and an example of a non-given class is called a "negative example" or "outlier". This problem is usually called data description or one-class classification [3].

The data description problem usually arises because examples of one class can be collected conveniently, while examples from other classes are difficult to obtain. It occurs frequently in real life and cannot be solved directly by binary classification algorithms. A typical application of data description is a machine monitoring system. Suppose we want to describe measurements from a machine under normal conditions. While the machine works normally, we can easily collect many targets; outliers, on the other hand, become available only when the machine goes out of order. For this reason, the data description problem is also called outlier detection.

Scholkopf et al. [4] modified the classical two-class SVM and proposed the one-class SVM for the data description problem. The idea of the one-class SVM is to maximize the margin between the given-class examples and the origin in the feature space. The Density Level Detection (DLD) framework [5] was proposed to find a density level set that detects observations not belonging to the given class.
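As a brief illustration of the one-class setting described above, the following sketch trains Scholkopf-style one-class SVM on positive examples only and then accepts or rejects unseen points. It uses scikit-learn's `OneClassSVM`; the synthetic data, the RBF kernel, and the value nu=0.05 are illustrative assumptions, not choices made in this paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X_train = rng.normal(size=(300, 2))  # only positive (target) examples are available

# nu upper-bounds the fraction of training targets allowed outside the boundary
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
clf.fit(X_train)

# predict returns +1 for points accepted as targets, -1 for rejected outliers
preds = clf.predict([[0.0, 0.0], [8.0, 8.0]])
print(preds)
```

A point near the bulk of the training data is labeled +1, while a point far from it is labeled -1, even though no negative examples were seen during training.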
Following the DLD principle, a modified SVM, DLD-SVM, was developed to deal with the one-class classification problem. The above algorithms are discriminative. Alternatively, the data description problem can be treated as a traditional sample distribution estimation problem. So the
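The density-level view mentioned above can be sketched directly: estimate the density of the targets and flag any point whose density falls below a chosen level as an outlier. This toy example uses SciPy's `gaussian_kde`; the 5% level is an arbitrary illustrative threshold, not one taken from this paper or from the DLD-SVM method.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
targets = rng.normal(size=(2, 300))  # gaussian_kde expects shape (n_dims, n_samples)

kde = gaussian_kde(targets)
# choose the density level so that ~5% of training targets fall below it
level = np.quantile(kde(targets), 0.05)

def is_outlier(point):
    # points in low-density regions are flagged as not belonging to the given class
    return kde(np.asarray(point, dtype=float).reshape(2, 1))[0] < level

print(is_outlier([0.0, 0.0]), is_outlier([6.0, 6.0]))
```

In contrast to the discriminative algorithms above, this approach models the target distribution itself and derives the decision boundary from a density level set.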