Hardware-Accelerated High-Quality Filtering on PC Hardware
Markus Hadwiger* Thomas Theußl† Helwig Hauser* Eduard Gröller†
* VRVis Research Center, Donau-City-Strasse 1, 1220 Vienna, Austria
  {Hadwiger,Hauser}@VRVis.at, http://www.VRVis.at/vis/
† Institute of Computer Graphics and Algorithms, Vienna University of Technology, Karlsplatz 13/186, 1030 Vienna, Austria
  {theussl,groeller}@cg.tuwien.ac.at, www.cg.tuwien.ac.at
Abstract
We describe a method for exploiting commodity
3D graphics hardware in order to achieve hardware-
accelerated high-quality filtering with arbitrary fil-
ter kernels. Our approach is based on reordering
the evaluation of the filter convolution sum to ac-
commodate the way the hardware works. We ex-
ploit multiple rendering passes together with the ca-
pability of current graphics hardware to index into
several textures at the same time (multi-texturing).
The method we present is applicable in one, two,
and three dimensions. The cases we have been most
interested in up to now are two-dimensional recon-
struction of object-aligned slices through volumet-
ric data, and three-dimensional reconstruction of ar-
bitrarily oriented slices. As a fundamental build-
ing block, the basic algorithm can be used to
render an entire volume directly by blending a
stack of slices reconstructed with high quality on
top of each other. However, it is important to em-
phasize that our approach has no fundamental re-
strictions with regard to the filters that can be em-
ployed. Thus, it could also be used for more general
filtering tasks than reconstruction, e.g., image pro-
cessing.
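The reordering of the convolution sum can be illustrated in one dimension: instead of gathering all filter taps for each output sample, every "pass" adds the contribution of a single tap (shifted input samples weighted by the corresponding kernel segment) to all output samples at once, accumulating the result over the passes. The following is a minimal NumPy sketch of this idea using a Catmull-Rom kernel; the function names are ours and the sketch omits all hardware specifics (texture shifts, kernel textures, blending):

```python
import numpy as np

def catmull_rom(t):
    """Catmull-Rom cubic reconstruction kernel (support 4)."""
    t = np.abs(t)
    return np.where(t < 1, 1.5 * t**3 - 2.5 * t**2 + 1,
           np.where(t < 2, -0.5 * t**3 + 2.5 * t**2 - 4 * t + 2, 0.0))

def reconstruct_multipass(samples, positions, kernel, support):
    """Evaluate the filter convolution sum reordered by tap:
    one accumulation 'pass' per filter tap, mimicking one
    rendering pass per tap on graphics hardware."""
    out = np.zeros_like(positions, dtype=float)
    base = np.floor(positions).astype(int)
    frac = positions - base
    # the taps cover floor(x)-support/2+1 .. floor(x)+support/2
    for j in range(-(support // 2) + 1, support // 2 + 1):
        idx = np.clip(base + j, 0, len(samples) - 1)
        out += samples[idx] * kernel(frac - j)   # one "pass"
    return out
```

Since the Catmull-Rom spline is interpolating, evaluating at the sample positions returns the samples themselves, which is a convenient sanity check for the reordered sum.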
1 Introduction
A fundamental problem in computer graphics is
how to reconstruct images and volumes from sam-
pled data. The process of determining the origi-
nal continuous data – or at least a sufficiently ac-
curate approximation – from discrete input data is
usually called function or signal reconstruction. In
volume visualization, the input data is commonly
given at evenly spaced discrete locations in three-
space. In theory, the original volumetric data can be
reconstructed exactly, provided certain conditions
are met (cf. the sampling theorem [13]). In reality,
of course, reconstruction is always a trade-off be-
tween performance and quality. This is especially
true for hardware implementations. Reconstruction
in graphics hardware is usually done by using sim-
ple linear interpolation. This is fast, but introduces
significant reconstruction artifacts. On the other
hand, a lot of research in the last few years has
been devoted to improving reconstruction by using
high-order reconstruction filters [6, 10, 11, 12, 15].
Among the investigated filters are piecewise cubic
functions, as well as windowed ideal reconstruction
functions (windowed sinc filters). However, these
filters were usually deemed to be too slow to be used
in practice.
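To make the second family of filters concrete: a windowed ideal reconstruction function multiplies the sinc kernel by a smooth window of finite extent. A minimal NumPy sketch of one common choice, a Blackman-windowed sinc of radius 2 (our own illustration, not a kernel prescribed by the paper):

```python
import numpy as np

def blackman_sinc(t, radius=2.0):
    """Sinc kernel truncated by a Blackman window of the given radius."""
    t = np.asarray(t, dtype=float)
    x = t / radius  # window argument in [-1, 1] inside the support
    win = 0.42 + 0.5 * np.cos(np.pi * x) + 0.08 * np.cos(2 * np.pi * x)
    k = np.sinc(t) * win          # np.sinc(t) is sin(pi t) / (pi t)
    return np.where(np.abs(t) < radius, k, 0.0)
```

Like the ideal sinc, the windowed kernel is 1 at the origin and 0 at all other integers, so it is interpolating; the window merely limits its support so a finite number of taps (here 4) suffices.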
In this paper, we will show how to exploit con-
sumer 3D graphics hardware for accelerating high-
order reconstruction of volumetric data. The pre-
sented approach works in one, two, and three di-
mensions. Up to now, we have used
our method for reconstruction of images and slices
in two dimensions, and reconstruction of oblique
slices through volumetric data. An interesting ap-
plication of such slices is to use them for direct vol-
ume rendering. Standard texture mapping hardware
can be exploited for volume rendering by blend-
ing a stack of texture-mapped slices on top of each
other [1]. These slices can be either viewport-
aligned, which requires 3D texture mapping hard-
ware [7, 17], or object-aligned, where 2D texture
mapping hardware suffices [14]. Our high-quality
filtering approach can be used to considerably im-
prove reconstruction quality of the individual slices
in both of these cases, thus increasing the quality of
the entire rendered volume.
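The slice blending referred to above is the standard "over" operation applied once per slice, back to front, as texture hardware does with alpha blending. A minimal NumPy sketch of this compositing loop (our own illustration; it assumes non-premultiplied colors, i.e. the equivalent of blending with source alpha and one-minus-source alpha):

```python
import numpy as np

def composite_back_to_front(slices):
    """Blend a stack of slices on top of each other, back to front.
    'slices' is a list of (color, alpha) 2-D arrays; each step applies
    the 'over' operator: new = src_color * src_alpha + dst * (1 - src_alpha)."""
    color = np.zeros_like(slices[0][0], dtype=float)
    for c, a in slices:
        color = c * a + color * (1.0 - a)
    return color
```

In this scheme, improving the reconstruction of each individual slice (each `c`) improves the composited result directly, which is why high-quality slice filtering benefits the entire rendered volume.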
As reconstruction kernels we have used bicubic
and tricubic B-splines and Catmull-Rom splines,
VMV 2001 Stuttgart, Germany, November 21–23, 2001
© 2000/2001 Akademische Verlagsgesellschaft Aka GmbH, Berlin. Reprinted, with
permission, from Proc. Vision, Modeling, and Visualization 2001, pp. 105–112.