International Journal of Computer Engineering and Information Technology
VOL. 8, NO. 6, June 2016, 100-105
Available online at: www.ijceit.org
E-ISSN 2412-8856 (Online)

A Comparison of FAST, SURF, Eigen, Harris, and MSER Features

Engr. Farman Ali 1, Engr. Sajid Ullah Khan 2, Engr. Muhammad Zarrar Mahmudi 3, Rahmat Ullah 4
1, 2, 3, 4 Sarhad University, Peshawar
1 farman_puhtun@yahoo.com, 2 engr.sajid10@gmail.com, 3 engr.zarrar@gmail.com, 4 rahmat9314@gmail.com

ABSTRACT

Accurate, reliable, robust, and automatic image registration is a critical task in the field of computer vision. The key steps of image alignment/registration are: feature detection, feature matching, derivation of a transformation function based on corresponding features in the images, and reconstruction of the images based on the derived transformation function. In many applications, the aim of computer vision is to obtain an optimal and accurate image, which depends on optimal feature detection and matching. This paper compares five methods for detecting robust features/interest points (or landmarks) in images: Features from Accelerated Segment Test (FAST), Speeded Up Robust Features (SURF), Eigen, Harris, and Maximally Stable Extremal Regions (MSER). The paper also focuses on extracting unique features from images that allow good matching across different views of the same images/objects/scenes.

Keywords: Feature Detection, Feature Matching, FAST, SURF, Eigen, Harris, MSER.

1. INTRODUCTION AND PRIOR WORK

A great deal of research in computer vision is based on feature detection, which is a valuable part of many vision systems. Bay and Tuytelaars (2006) proposed Speeded Up Robust Features (SURF), which uses integral images for image convolutions together with a Fast-Hessian detector; their experiments showed that it is faster and works well [2], [4]. Lowe (2004) presented SIFT for extracting distinctive invariant features from images that are invariant to image scale and rotation; it has since been widely used in image mosaicking, recognition, retrieval, and related tasks [3].

Image matching, the task of finding correspondences between two images of the same scene or object, is part of many computer vision applications; image registration, camera calibration, and object recognition are just a few. Extracting distinctive features from images is divided into two main phases. First, "key points" are extracted at distinctive locations in the images, such as edges, blobs, and corners. Key point detectors should be highly repeatable. Next, a neighbourhood region is picked around every key point and a distinctive feature descriptor is computed from each region [1]. For image matching, the aim is to extract features that provide reliable matching between different viewpoints of the same scene. During this process, feature descriptors are extracted from sample images and stored. A descriptor has to be distinctive and, at the same time, robust to noise and detection errors. Finally, the feature descriptors are matched between different images; matching can be based on distances such as the Euclidean distance (see the sketch below).

The rest of this paper is organized as follows: Section 2 gives an overview of the methods, Section 3 presents the experimental results, and Section 4 concludes the paper.
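To make this two-phase detect-and-match pipeline concrete, the following sketch runs it with OpenCV's Python bindings. This is a minimal illustration, not the paper's own code: FAST is used for detection, and since FAST provides no descriptor of its own, ORB descriptors stand in for the description phase; the image file names are placeholder assumptions.

```python
import cv2

# Load two views of the same scene as grayscale images
# (the file names are placeholders).
img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

# Phase 1: detect key points at distinctive locations (FAST corners here).
detector = cv2.FastFeatureDetector_create()
kp1 = detector.detect(img1, None)
kp2 = detector.detect(img2, None)

# Phase 2: compute a descriptor from the neighbourhood around each key
# point. FAST has no descriptor of its own, so ORB descriptors stand in.
extractor = cv2.ORB_create()
kp1, des1 = extractor.compute(img1, kp1)
kp2, des2 = extractor.compute(img2, kp2)

# Phase 3: match descriptors between the two images. ORB descriptors are
# binary strings, so Hamming distance plays the role that the Euclidean
# distance plays for real-valued descriptors such as SURF.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "putative matches found")
```

Cross-checking (keeping only matches that agree in both directions) is one simple way to discard ambiguous correspondences before estimating a transformation between the two views.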
2. OVERVIEW OF METHODS

2.1 SIFT Algorithm Overview

The SIFT (Scale Invariant Feature Transform) algorithm was proposed by Lowe in 2004 [6] to cope with image rotation, scaling, affine deformation, viewpoint change, noise, and illumination change, to all of which it is strongly robust. The SIFT algorithm has four main steps: (1) scale-space extrema detection, (2) key point localization, (3) orientation assignment, and (4) descriptor generation.

The first stage identifies the locations and scales of key points as scale-space extrema of the DoG (Difference-of-Gaussian) function computed with different values of σ. The DoG function is the difference of two Gaussians at scales separated by a constant factor k, convolved with the image, as in the following equation:

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y)    (1)

where G is the Gaussian function, I is the image, and * denotes convolution. Adjacent Gaussian images are subtracted to produce a DoG image; after that, the Gaussian image is subsampled by a factor of 2 and DoG images are produced for the subsampled image. Each pixel is then compared with its 26 neighbours, eight in its own DoG image and nine in each of the two adjacent scales, and is retained as a candidate key point only if it is a local extremum.
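As a rough illustration of this first stage, the sketch below builds one octave of DoG images following Eq. (1) using OpenCV and NumPy. This is a minimal sketch, not the paper's or Lowe's implementation; the values of σ and k, the number of scales, and the file name img1.png are illustrative assumptions.

```python
import cv2
import numpy as np

def dog_octave(image, sigma=1.6, k=np.sqrt(2), num_scales=5):
    """One octave of Difference-of-Gaussian images following Eq. (1):
    D(x, y, sigma) = (G(x, y, k*sigma) - G(x, y, sigma)) * I(x, y)."""
    img = image.astype(np.float32)
    # Gaussian-blurred copies at scales sigma, k*sigma, k^2*sigma, ...
    blurred = [cv2.GaussianBlur(img, (0, 0), sigma * k**i)
               for i in range(num_scales)]
    # Subtracting adjacent Gaussian images yields the DoG stack.
    return [blurred[i + 1] - blurred[i] for i in range(num_scales - 1)]

img = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
octave1 = dog_octave(img)
# The next octave is built from the image subsampled by a factor of 2.
octave2 = dog_octave(img[::2, ::2])
```

Candidate key points would then be found by scanning each DoG image in an octave for pixels that are larger (or smaller) than all 26 neighbours in the current and adjacent scales.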