Robust Recognition against Illumination Variations Based on SIFT

Farzan Nowruzi, Mohammad Ali Balafar, and Saeid Pashazadeh

Department of Information Technology, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, East Azerbaijan, Iran
farzan87@gmail.com, {balafarila,pashazadeh}@tabrizu.ac.ir

Abstract. Feature matching is one of the basic approaches in many computer vision applications, such as object recognition. Dealing with illumination variations is an open problem in this field. In this paper we present an approach that makes recognition more robust against real-world illumination changes and variations in the direction of the light source on the object of interest, by using a set of training images to sample these variations through their SIFT keypoints. A comprehensive keypoint descriptor based on the illumination variations in the training data is acquired, yielding a high recognition rate under real 3D illumination changes. This large set of keypoints is then simplified to obtain a smaller number of robust keypoints and a significantly faster matching phase.

Keywords: Object Recognition, SIFT, Keypoint Descriptor, Sampled Training.

1 Introduction

Scale Invariant Feature Transform (SIFT) is a method that matches an image of an object in real time by extracting its distinctive invariant features and comparing them with the features of the queried image. SIFT was first introduced and then expanded by Lowe [12][3]. Using features to represent images of objects is an important objective in the fields of data compression and mobile applications. SIFT transforms an image of an object into a collection of local feature vectors (SIFT keypoint descriptors) which are invariant to scaling, rotation, and translation of the image, are partially invariant to affine transformations and orientation changes, and are tolerant of image noise. In addition, these features are unaffected by nearby clutter or partial occlusion.
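The matching step mentioned above, comparing extracted descriptors against the queried image's descriptors, can be sketched with Lowe's nearest-neighbour ratio test. This is an illustrative NumPy sketch, not the authors' implementation; the function name, the toy 4-D descriptors, and the ratio value 0.8 are assumptions made for the example (real SIFT descriptors are 128-dimensional).

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each row of desc_a to a row of desc_b using Lowe's ratio
    test: accept a match only when the nearest neighbour is clearly
    closer than the second-nearest one."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # Euclidean distances
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Toy 4-D "descriptors" standing in for 128-D SIFT descriptors.
a = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
b = np.array([[0.9, 0.1, 0.0, 0.0],   # close to a[0]
              [0.0, 0.0, 1.0, 0.0],
              [0.5, 0.5, 0.0, 0.0]])
print(match_descriptors(a, b))
```

The ratio test rejects ambiguous matches: a keypoint whose two best candidates are nearly equidistant is more likely to be a false correspondence than one with a single clear winner.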
It finds potential keypoints by smoothing and downsampling the input image and subtracting adjacent levels to create a Difference-of-Gaussian pyramid. Then, it looks for minima and maxima in scale space. After stable keypoints are found, the gradient orientation histogram is computed to find dominant orientations. Once

C.-Y. Su, S. Rakheja, H. Liu (Eds.): ICIRA 2012, Part III, LNAI 7508, pp. 503–511, 2012.
© Springer-Verlag Berlin Heidelberg 2012
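The detection steps just described, building a Difference-of-Gaussian pyramid and searching it for scale-space extrema, can be sketched as follows. This is a simplified single-octave sketch under assumed parameter values (sigma0 = 1.6, k = sqrt(2)); the function names and the 3x3x3 neighbourhood test are illustrative, not the paper's code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(img, n_levels=4, sigma0=1.6, k=2 ** 0.5):
    """Build one octave of a Difference-of-Gaussian pyramid:
    blur with geometrically increasing sigma, subtract adjacent levels."""
    blurred = [gaussian_filter(img, sigma0 * k ** i) for i in range(n_levels)]
    return [blurred[i + 1] - blurred[i] for i in range(n_levels - 1)]

def local_extrema(dog, i, j, level):
    """A pixel is a candidate keypoint if it is a minimum or a maximum
    among its 26 neighbours across the current, upper, and lower DoG
    levels (a 3x3x3 comparison cube)."""
    cube = np.stack([d[i - 1:i + 2, j - 1:j + 2]
                     for d in dog[level - 1:level + 2]])
    centre = dog[level][i, j]
    return centre == cube.max() or centre == cube.min()

# A single bright impulse: the strongest (most negative) DoG response
# sits on the impulse itself.
img = np.zeros((16, 16))
img[8, 8] = 1.0
dog = dog_pyramid(img)
print(np.unravel_index(np.argmin(dog[0]), dog[0].shape))
```

Downsampling between octaves (omitted here) is what makes the detector efficient at coarse scales; within one octave only the blur amount changes.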