Graphics and Image Processing
J. D. Foley, Editor
Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography
Martin A. Fischler and Robert C. Bolles
SRI International
A new paradigm, Random Sample Consensus
(RANSAC), for fitting a model to experimental data is
introduced. RANSAC is capable of interpreting/
smoothing data containing a significant percentage of
gross errors, and is thus ideally suited for applications
in automated image analysis where interpretation is
based on the data provided by error-prone feature
detectors. A major portion of this paper describes the
application of RANSAC to the Location Determination
Problem (LDP): Given an image depicting a set of
landmarks with known locations, determine that point
in space from which the image was obtained. In
response to a RANSAC requirement, new results are
derived on the minimum number of landmarks needed
to obtain a solution, and algorithms are presented for
computing these minimum-landmark solutions in closed
form. These results provide the basis for an automatic
system that can solve the LDP under difficult viewing
and analysis conditions. Implementation details and
computational examples are also presented.

Permission to copy without fee all or part of this material is
granted provided that the copies are not made or distributed for direct
commercial advantage, the ACM copyright notice and the title of the
publication and its date appear, and notice is given that copying is by
permission of the Association for Computing Machinery. To copy
otherwise, or to republish, requires a fee and/or specific permission.

The work reported herein was supported by the Defense Advanced
Research Projects Agency under Contract Nos. DAAG29-76-C-0057
and MDA903-79-C-0588.

Authors' Present Address: Martin A. Fischler and Robert C.
Bolles, Artificial Intelligence Center, SRI International, Menlo Park,
CA 94025.

© 1981 ACM 0001-0782/81/0600-0381 $00.75
Key Words and Phrases: model fitting, scene
analysis, camera calibration, image matching, location
determination, automated cartography.
CR Categories: 3.60, 3.61, 3.71, 5.0, 8.1, 8.2
I. Introduction
We introduce a new paradigm, Random Sample
Consensus (RANSAC), for fitting a model to experimental
data, and illustrate its use in scene analysis and automated
cartography. The application discussed, the location
determination problem (LDP), is treated at a level
beyond that of a mere example of the use of the RANSAC
paradigm: we present new basic findings concerning the
conditions under which the LDP can be solved, and
describe a comprehensive approach to its solution that
we anticipate will have near-term practical applications.
To a large extent, scene analysis (and, in fact, science
in general) is concerned with the interpretation of sensed
data in terms of a set of predefined models. Conceptually,
interpretation involves two distinct activities: first, there
is the problem of finding the best match between the
data and one of the available models (the classification
problem); second, there is the problem of computing the
best values for the free parameters of the selected model
(the parameter estimation problem). In practice, these
two problems are not independent: a solution to the
parameter estimation problem is often required to solve
the classification problem.
Classical techniques for parameter estimation, such
as least squares, optimize (according to a specified ob-
jective function) the fit of a functional description
(model) to all of the presented data. These techniques
have no internal mechanisms for detecting and rejecting
gross errors. They are averaging techniques that rely on
the assumption (the smoothing assumption) that the
maximum expected deviation of any datum from the
assumed model is a direct function of the size of the data
set, and thus regardless of the size of the data set, there
will always be enough good values to smooth out any
gross deviations.
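This failure mode of averaging techniques can be sketched with a small, self-contained least-squares line fit. The data, the line y = 2x + 1, and the single gross error below are illustrative assumptions, not values from the paper:

```python
# Ordinary least-squares fit of y = a*x + b, written out from the
# normal equations. One gross error ("poisoned point") is mixed into
# otherwise exact points on the line y = 2x + 1 to show how an
# averaging technique, which has no mechanism for rejecting gross
# errors, lets a single bad datum pull the fit off the true line.

def fit_line(points):
    """Least-squares estimates of (a, b) for y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

good = [(x, 2 * x + 1) for x in range(10)]   # exact points on y = 2x + 1
poisoned = good + [(5, 100)]                 # one gross error

print(fit_line(good))      # → (2.0, 1.0), the true parameters
print(fit_line(poisoned))  # slope and intercept pulled well away from (2, 1)
```

With clean data the fit recovers the true parameters exactly; adding the single poisoned point shifts both the slope and the intercept substantially, since the squared-error objective spreads the gross deviation over all the data.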
In many practical parameter estimation problems the
smoothing assumption does not hold; i.e., the data contain
uncompensated gross errors. To deal with this situation,
several heuristics have been proposed. The technique
usually employed is some variation of the following: first
use all the data to derive the model parameters, then
locate the datum that is farthest from agreement with the
instantiated model, assume that it is a gross error, delete
it, and iterate this process until either the maximum
deviation is less than some preset threshold or there are
no longer sufficient data to proceed.
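The rejection heuristic just described can be sketched as follows. The least-squares line fit, the residual threshold, and the minimum point count are illustrative assumptions chosen for the sketch:

```python
# Sketch of the iterative rejection heuristic: fit the model to all
# the data, find the single worst-fitting datum, assume it is a gross
# error, delete it, and repeat until the maximum residual falls below
# a preset threshold or too few points remain.

def fit_line(points):
    """Least-squares estimates of (a, b) for y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def fit_with_rejection(points, threshold=0.5, min_points=3):
    pts = list(points)
    while len(pts) > min_points:
        a, b = fit_line(pts)
        residuals = [abs(y - (a * x + b)) for x, y in pts]
        worst = max(range(len(pts)), key=residuals.__getitem__)
        if residuals[worst] <= threshold:
            return a, b, pts        # maximum deviation is acceptable
        del pts[worst]              # assume the worst datum is a gross error
    a, b = fit_line(pts)
    return a, b, pts
```

With a single gross error this heuristic succeeds: the bad datum produces by far the largest residual, is deleted on the first pass, and the refit recovers the true line. The paper's point, developed next, is that the heuristic is not reliable in general, because the initial fit itself can be corrupted by the errors it is supposed to detect.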
It can easily be shown that a single gross error
("poisoned point"), mixed in with a set of good data, can
Communications of the ACM, June 1981, Volume 24, Number 6