New Error Measures to Evaluate Features on Three-Dimensional Scenes

Fabio Bellavia and Domenico Tegolo
Department of Mathematics and Computer Science, University of Palermo, 90123, Palermo, Italy
{fbellavia,domenico.tegolo}@unipa.it

Abstract. In this paper new error measures to evaluate image features in 3D scenes are proposed and reviewed. The proposed error measures are designed to take feature shapes into account, and their ground truth data can be easily estimated. Like other approaches, they are not error-free, so a quantitative evaluation is given according to the number of wrong matches and mismatches in order to assess their validity.

Keywords: Feature detector, feature descriptor, feature matching, feature comparison, overlap error, epipolar geometry.

1 Introduction

Feature-based computer vision applications have been widely used in the last decade [12]. Their spread has increased the focus on feature detectors [8] and feature descriptors [7], as well as on sparse matching algorithms [3,13]. Moreover, different evaluation strategies to assess their properties have been proposed in [8,7,10,5,4]. The repeatability index introduced in [11] and the matching score [8] are common measures used for comparison. They have been adopted in well-known extensive comparisons for detectors [8], while precision-recall curves have been used for descriptors [7]. Both of the error measures described above have been applied to the Oxford dataset [9], which has become a de facto standard. The principal drawback of these approaches is that they require a priori knowledge of all the possible correct matches between corresponding points in the images. For the planar scenes the Oxford dataset is made of, this can be obtained trivially by computing the planar homography from a small number of hand-picked correspondences [6].
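As a minimal sketch of how such planar ground truth can be obtained, the following NumPy snippet estimates a homography from point correspondences via the direct linear transform (DLT) and uses it to transfer points between images. The function names and the unnormalized DLT formulation are illustrative choices of ours, not taken from the paper or from [6].

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H * src from >= 4
    point correspondences (given as (N, 2) arrays), using the
    direct linear transform (DLT)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows to the linear system.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    # The homography is the null vector of A, i.e. the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H in homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

Once H is known, a detected point in one image is declared correctly matched when its transferred position (and, for shape-aware measures, its transferred region) lies close enough to a detection in the other image; this is the basis of the repeatability-style evaluations cited above.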
However, the use of features on 3D scenes is the most attractive and interesting setting for which new features are nowadays designed, so considerable interest has arisen in understanding how they behave, and what their properties are, in a fully 3D environment. A strategy to overcome this issue was proposed in [5], where two further image sequences containing fully 3D objects are added to extend the Oxford dataset. The trifocal tensor [6] is computed through an intermediate image and ground truth matches are recovered by using a dense matching