International Journal of Computer Applications (0975 – 8887) Volume 46 – No. 11, May 2012

Performance Considerations in Implementing Offline Signature Verification System

Charu Jain, Department of CSE, Amity University Haryana
Priti Singh, Department of ECE, Amity University Haryana
Aarti Chugh, Department of CSE, Amity University Haryana

ABSTRACT
Handwritten signatures are widely accepted as a means of document authentication, authorization and personal verification. In modern society, where fraud is rampant, there is a need for an automatic Handwritten Signature Verification (HSV) system to complement visual verification. An implementation is a realization of a technical specification or algorithm as a program, software component, or other computer system through programming and deployment. Many approaches to the implementation of a signature verification system are possible [1, 2]. This paper highlights the key performance considerations when planning the implementation of a signature verification system.

Keywords
Handwritten Signature Verification (HSV), Feature Extraction, False Rejection Rate (FRR)

1. INTRODUCTION
Handwritten signature verification has been extensively studied and implemented. For legal validity, most documents, such as bank cheques, travel passports and academic certificates, need to carry authorized handwritten signatures. In general, handwritten signature verification can be categorized into two kinds: on-line verification and off-line verification. On-line verification requires a stylus and an electronic tablet connected to a computer to capture dynamic signature information. Off-line verification, on the other hand, deals with signature information in static format. In off-line signature recognition, the signature template comes from an imaging device, so only the static characteristics of the signature are available. The person need not be present at the time of verification.
Hence off-line signature verification is convenient in various situations such as document verification, banking transactions, etc. In the past decade a number of solutions have been introduced to overcome the limitations of off-line signature verification [27] and to compensate for the loss of accuracy. Most of these methods have one thing in common: they deliver acceptable results, but they have difficulty improving on them. In the off-line case no definite matching exists. These methods can only operate on static image data; therefore they often try to compare global features such as the size of the signature or similarities of the contour [6]. To obtain a tractable abstraction of the two-dimensional images, these methods often involve an image transformation, such as the Hough or Radon transform [8], or work on density models of the signatures [11]. Although these methods almost totally ignore the semantic information hidden in the signature, combined with each other they seem to give a good representation of the signature, allowing researchers to reach Equal Error Rates (EER) between 10% and 15% [3]. The drawback of this methodology is that losing the semantic information makes it almost impossible to improve the algorithm or to explain the results in detail. Jose L. Camino et al. take another approach [4]: they try to infer the pen movements during signing by starting at the left- and bottom-most line end and then following the stroke. There are also other approaches that try to reconstruct the signing process. In [15], stroke and sub-stroke properties are extracted and used as a basis for the comparison. In our experience, these latter approaches seem the most promising, because their results can be explained (and therefore improved) in a semantically meaningful way.
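The global features mentioned above can be illustrated with a minimal sketch. The following is not any of the cited methods, only an assumed, simplified example: a binarized signature image is represented as a 2-D list of 0/1 pixels, from which bounding-box size, aspect ratio and ink density are computed.

```python
# Illustrative sketch (assumed representation, not a cited method):
# extract simple global features from a binarized signature image,
# stored as a 2-D list where 1 = ink pixel and 0 = background.

def global_features(img):
    """Return (width, height, aspect_ratio, ink_density) of the signature."""
    rows = [r for r, row in enumerate(img) if any(row)]
    cols = [c for c in range(len(img[0])) if any(row[c] for row in img)]
    if not rows or not cols:
        return (0, 0, 0.0, 0.0)  # blank image: no signature present
    height = rows[-1] - rows[0] + 1
    width = cols[-1] - cols[0] + 1
    ink = sum(sum(row) for row in img)
    # Ink density is measured inside the bounding box only.
    return (width, height, width / height, ink / (width * height))

# A tiny synthetic "signature" for demonstration.
sig = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
print(global_features(sig))  # (4, 2, 2.0, 0.75)
```

In practice such features would be computed on thresholded scans rather than hand-written arrays, and combined with transform-domain features as the cited works do.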
There is also a wide variety of classifiers used to compare the results: Hidden Markov Models [14], Support Vector Machines [7], multi-layer perceptrons, genetic algorithms, and neural networks [5] are the most widely used. Sabourin [19] used a new approach based on granulometric size distributions for the definition of local shape descriptors, in an attempt to characterize the amount of signal activity exciting each retina on the focus of a superimposed grid. He then used a nearest-neighbor and a threshold-based classifier to detect random forgeries. Total error rates of 0.02% and 1.0% were reported for the respective classifiers, using a database of 800 genuine signatures from 20 writers. Zhang [20] proposed a Kernel Principal Component Self-Regression (KPCSR) model for off-line signature verification and recognition. Developed from Kernel Principal Component Regression (KPCR), the self-regression model selects a subset of the principal components from the kernel space for the input variables to accurately characterize each person's signature, thus offering good verification and recognition performance. An FRR of 92% and an FAR of 0.5% were reported. Baltzakis [21] developed a neural-network-based system for the detection of random forgeries. The system uses global features, grid features (pixel densities), and texture features (co-occurrence matrices) to represent each signature. For each of these feature sets, a special two-stage perceptron one-class-one-network (OCON) classification structure is implemented. Justino [22] used a discrete-observation HMM to detect random, casual, and skilled forgeries. A grid segmentation scheme was used to extract three features: a pixel density feature, a pixel distribution feature (extended shadow code), and an axial slant feature. Two data sets are used: after optimization on the first data set, the system was used to detect random, casual, and skilled forgeries in the second data set.
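The grid-based pixel-density features and threshold classifiers described above can be sketched as follows. This is a simplified illustration under assumed parameters (grid size, distance measure, threshold), not the exact schemes of [19], [21] or [22].

```python
# Illustrative sketch (assumed details, not the cited methods): grid
# segmentation of a binary signature image into cells, a pixel-density
# feature per cell, and a simple threshold-based comparison.

def grid_density(img, n_rows=2, n_cols=3):
    """Split img into n_rows x n_cols cells; return ink density per cell."""
    h, w = len(img), len(img[0])
    feats = []
    for gr in range(n_rows):
        for gc in range(n_cols):
            r0, r1 = gr * h // n_rows, (gr + 1) * h // n_rows
            c0, c1 = gc * w // n_cols, (gc + 1) * w // n_cols
            ink = sum(sum(img[r][c0:c1]) for r in range(r0, r1))
            feats.append(ink / ((r1 - r0) * (c1 - c0)))
    return feats

def is_genuine(query, reference, threshold=0.2):
    """Accept if the mean absolute feature distance is below the threshold."""
    dist = sum(abs(q - r) for q, r in zip(query, reference)) / len(query)
    return dist < threshold

ref = grid_density([[1, 1, 0, 0, 1, 1],
                    [0, 1, 1, 1, 1, 0]])
print(is_genuine(ref, ref))  # identical feature vectors -> True
```

The cited systems replace this naive distance threshold with trained classifiers (HMMs, SVMs, neural networks), but the grid-density feature vector plays the same role as the input representation.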
The second data set contains the signatures of 60 writers with 40 training signatures, 10 genuine test signatures, 10 casual forgeries, and 10 skilled forgeries per writer. An FRR of 2.83% and an FAR
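The FRR and FAR figures quoted throughout this survey are computed from the verifier's accept/reject decisions. A minimal sketch, using made-up similarity scores (higher score = more similar to the reference; a score at or above the threshold means "accept"):

```python
# Minimal sketch with made-up scores: FRR is the fraction of genuine
# signatures rejected; FAR is the fraction of forgeries accepted.

def frr_far(genuine_scores, forgery_scores, threshold):
    """Return (FRR, FAR) for a given decision threshold."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in forgery_scores) / len(forgery_scores)
    return frr, far

genuine = [0.9, 0.8, 0.85, 0.4]   # one genuine falls below the threshold
forgery = [0.1, 0.2, 0.75, 0.3]   # one forgery slips above it
print(frr_far(genuine, forgery, threshold=0.5))  # (0.25, 0.25)
```

Raising the threshold trades FAR for FRR; the Equal Error Rate (EER) cited earlier is the operating point where the two curves cross.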