2178 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 21, NO. 4, APRIL 2012
Fast 2-D Distance Transformations
Stelios Krinidis
Abstract—The performance of a number of image processing
methods depends on the output quality of a distance transforma-
tion (DT) process. Most of the fast DT methodologies are not ac-
curate, whereas other error-free DT algorithms are not very fast.
In this paper, a novel, fast, simple, and error-free DT algorithm
is presented. By recording the relative x- and y-coordinates of
the examined image pixels, an optimal algorithm can be devel-
oped to achieve the DT of an image correctly and efficiently in
constant time without any iteration. Furthermore, the proposed
method is general since it can be used with any kind of distance function, leading to accurate image DTs.
Index Terms—Distance transformation (DT), Euclidean dis-
tance, Euclidean DT, image processing, object representation.
I. INTRODUCTION
DISTANCE transformation (DT) [1] is an operation that converts a digital binary image, consisting of feature (object) and nonfeature (background) elements, into a map (another image) where each pixel carries a floating-point value corresponding to its minimum distance from the background, as measured by a given distance function.
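As a concrete illustration of this definition, a brute-force DT can be written directly from it: every object pixel receives the Euclidean distance to its nearest background pixel. The function name and the convention that 0 marks background are assumptions of this sketch; its quadratic cost is precisely what the fast algorithms surveyed below avoid.

```python
import math

def brute_force_dt(image):
    """Naive DT straight from the definition: each object pixel gets the
    Euclidean distance to the nearest background (0) pixel.  Quadratic in
    the number of pixels -- for illustration only."""
    h, w = len(image), len(image[0])
    background = [(y, x) for y in range(h) for x in range(w) if image[y][x] == 0]
    dt = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if image[y][x] != 0:
                dt[y][x] = min(math.hypot(y - by, x - bx) for by, bx in background)
    return dt

# One background column next to a two-pixel-wide object:
img = [[0, 1, 1],
       [0, 1, 1],
       [0, 1, 1]]
# Each row of the resulting map reads [0.0, 1.0, 2.0].
```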
The DT methodologies can be divided into two main cate-
gories, according to the achieved accuracy. In the first category,
approximation DT techniques were presented by Danielsson
[2], Borgefors [3]–[5], and Ragnemalm [6], [7]. These algorithms produce a distance map that is accurate at most of the target points but can introduce small errors for some configurations of the object pixels. While these approximations are good enough for many applications, an error-free DT is needed in most cases [8].
The second category consists of algorithms that provide
error-free DT maps. These algorithms can be further divided
into three classes, according to the order used to scan the pixels.
Parallel algorithms that were presented by Yamada [9],
Shih and Mitchell [10], Huang and Mitchell [11], and Em-
brechts and Roose [12] are efficient in a cellular array com-
puter since all the pixels at each iteration can be processed
in parallel. However, these methods cannot be efficiently
implemented on a conventional computer.
Raster scanning algorithms were proposed by Mullikin
[13], Saito and Toriwaki [14], Breu et al. [15], Guan
and Ma [16], Maurer et al. [17], Shih and Wu [18], and
Felzenszwalb and Huttenlocher [19].
Manuscript received August 23, 2010; revised March 12, 2011, June 30, 2011, and October 07, 2011; accepted November 05, 2011. Date of publication November 16, 2011; date of current version March 21, 2012. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Jenq-Neng Hwang.
The author is with the Department of Information Management, Technological Institute of Kavala, 65404 Kavala, Greece (e-mail: stelios.krinidis@mycosmos.gr).
Digital Object Identifier 10.1109/TIP.2011.2176343
Propagation or contour-processing methods were intro-
duced by Vincent [20], Ragnemalm [6], Eggers [21], and
Cuisenaire and Macq [8], [22].
In propagation algorithms, information is transmitted from each
image pixel to its neighbors, starting from the contour of the ob-
ject and using a dynamic list to store the pixels in the propaga-
tion front. For a Euclidean distance transform, the information
that is propagated is usually a vector pointing to the nearest ob-
ject pixel. As shown by Eggers [23], this can be considered as an
efficient implementation of the parallel algorithms of Yamada or
Mitchell on conventional computers.
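A much-simplified sketch of this vector-propagation idea, assuming 0 marks background: the coordinates of a candidate nearest background pixel are handed from each pixel to its 8-neighbors, and the front is expanded in order of distance. The function name is an assumption, this is not the optimized list handling of [20]–[22], and like the related schemes discussed in [8] such propagation is not guaranteed error-free for every configuration.

```python
import heapq
import math

def propagation_edt(image):
    """Illustrative vector-propagation EDT: background pixels seed a
    front; each pixel forwards the coordinates of its candidate nearest
    background pixel (the "site") to its 8-neighbors, expanded in order
    of distance (Dijkstra-like)."""
    h, w = len(image), len(image[0])
    dist = [[float('inf')] * w for _ in range(h)]
    site = [[None] * w for _ in range(h)]
    heap = []
    for y in range(h):
        for x in range(w):
            if image[y][x] == 0:              # background pixels seed the front
                dist[y][x] = 0.0
                site[y][x] = (y, x)
                heap.append((0.0, y, x))
    heapq.heapify(heap)
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y][x]:                    # stale queue entry
            continue
        sy, sx = site[y][x]
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    nd = math.hypot(ny - sy, nx - sx)  # distance to the propagated site
                    if nd < dist[ny][nx]:
                        dist[ny][nx] = nd
                        site[ny][nx] = (sy, sx)
                        heapq.heappush(heap, (nd, ny, nx))
    return dist
```

Here the priority queue plays the role of the dynamic list holding the propagation front.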
Saito and Toriwaki [14] presented an algorithm for com-
puting the exact Euclidean DT based on dimensionality
reduction. This method and the propagation algorithms are
very fast and exact Euclidean DT methods for conventional
computers. Nevertheless, their computational cost is highly
image dependent, degrading to its worst case for some input images [8], [17].
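The dimensionality-reduction idea can be sketched in two passes (the function name and the 0-as-background convention are assumptions here): a 1-D transform along each row, then a column-wise minimization combining the row results. The naive column scan below is where the image-dependent cost arises; Saito and Toriwaki's actual scans prune this search.

```python
def saito_toriwaki_sq(image):
    """Sketch of the dimensionality-reduction idea of Saito-Toriwaki:
    Phase 1 computes g(y, x), the 1-D distance to the nearest background
    pixel within row y; Phase 2 minimises g(y', x)^2 + (y - y')^2 down
    each column, giving squared Euclidean distances."""
    h, w = len(image), len(image[0])
    INF = h + w  # exceeds any within-row distance
    # Phase 1: two sweeps per row (left-to-right, then right-to-left)
    g = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            g[y][x] = 0 if image[y][x] == 0 else (g[y][x - 1] + 1 if x > 0 else INF)
        for x in range(w - 2, -1, -1):
            g[y][x] = min(g[y][x], g[y][x + 1] + 1)
    # Phase 2: naive column-wise minimisation (the image-dependent step)
    return [[min(g[yy][x] ** 2 + (y - yy) ** 2 for yy in range(h))
             for x in range(w)]
            for y in range(h)]
```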
Breu et al. [15] introduced a method that also exploits the
idea of dimensionality reduction in order to compute a feature
map of an image in linear time by constructing the intersection
of the Voronoi diagram, whose sites are the object pixels with
each row of the image. Then, the Euclidean DT is computed
from the feature map.
Guan and Ma [16] improved the computational performance
of Breu’s approach by exploiting the fact that neighboring pixels
tend to have the same closest object pixel. Thus, they propagate
the closest object pixel information in the form of segment lists
rather than individual pixels. Later, Maurer et al. [17] also im-
proved Breu’s algorithm by taking advantage of the ideas in the
method of Guan and Ma [16]. However, their main advantage
is that they compute the Euclidean DT directly, rather than first
computing the feature map.
Shih and Wu [18] introduced an algorithm that can di-
rectly compute the Euclidean DT with only two image scans
exploiting the Borgefors masks [3]. However, this method produces small errors in some cases: these occur when areas of the DT map become disconnected under the 4-direct (and 8-direct) neighborhood [8], because digital images lie on a discrete rather than a continuous plane.
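The two-scan idea over Borgefors-style half-masks can be sketched as follows (the weights and the 0-as-background convention are choices for this illustration): a forward raster scan propagates distances through the upper-left half-mask, and a backward scan through the lower-right half. As noted above, such mask-based scans are not error-free for the exact Euclidean metric in general.

```python
import math

def chamfer_dt(image, a=1.0, b=math.sqrt(2)):
    """Two-scan chamfer DT with edge weight a and diagonal weight b:
    a forward raster scan with the upper-left half-mask, then a
    backward scan with the lower-right half-mask."""
    h, w = len(image), len(image[0])
    dt = [[0.0 if image[y][x] == 0 else float('inf') for x in range(w)]
          for y in range(h)]
    fwd = ((-1, -1, b), (-1, 0, a), (-1, 1, b), (0, -1, a))  # upper-left half-mask
    bwd = ((1, 1, b), (1, 0, a), (1, -1, b), (0, 1, a))      # lower-right half-mask
    for y in range(h):                                       # forward raster scan
        for x in range(w):
            for dy, dx, wgt in fwd:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    dt[y][x] = min(dt[y][x], dt[ny][nx] + wgt)
    for y in range(h - 1, -1, -1):                           # backward raster scan
        for x in range(w - 1, -1, -1):
            for dy, dx, wgt in bwd:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    dt[y][x] = min(dt[y][x], dt[ny][nx] + wgt)
    return dt
```

With a = 3, b = 4 (and a final division by 3) this becomes the classical 3-4 chamfer approximation.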
Felzenszwalb and Huttenlocher [19] provided a linear-time
algorithm for solving a class of minimization problems in-
volving a cost function with both local and spatial terms. These
problems can be viewed as a generalization of classical distance
transforms of binary images, where a binary image is replaced
by an arbitrary sampled function.
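The one-dimensional core of this generalization can be sketched as follows: D(p) = min over q of ((p - q)^2 + f(q)) is computed in linear time by maintaining the lower envelope of the parabolas rooted at each sample q. The function name is an assumption; a large finite constant (e.g., 1e18) should stand in for infinity in f so the breakpoint arithmetic stays well defined. The 2-D transform follows by applying this pass along rows and then along columns.

```python
def dt_1d_squared(f):
    """Linear-time 1-D generalised squared-distance transform:
    d[p] = min_q ((p - q)^2 + f[q]), via the lower envelope of the
    parabolas (p - q)^2 + f[q].  A classical DT is recovered with
    f[q] = 0 on background samples and a large constant elsewhere."""
    n = len(f)
    d = [0.0] * n
    v = [0] * n              # sample positions of parabolas in the envelope
    z = [0.0] * (n + 1)      # breakpoints between consecutive parabolas
    k = 0
    z[0], z[1] = float('-inf'), float('inf')
    for q in range(1, n):
        # Intersection of parabola q with the rightmost envelope parabola
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        while s <= z[k]:     # parabola q hides the rightmost one; pop it
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, float('inf')
    k = 0
    for p in range(n):       # read off the envelope left to right
        while z[k + 1] < p:
            k += 1
        d[p] = (p - v[k]) * (p - v[k]) + f[v[k]]
    return d
```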
Although many techniques have been presented for obtaining the distance transform, most of them are either not error free or not sufficiently fast, whereas others cannot be applied on conventional computers. Furthermore, some algorithms calculate only the Euclidean DT and cannot be used with arbitrary distance functions, whereas others are too complex to implement and understand.
1057-7149/$26.00 © 2011 IEEE