Irradiance Preserving Image Interpolation
Andrea Giachetti
VIPS lab, Dipartimento di Informatica, Università di Verona
E-mail: andrea.giachetti@univr.it
Abstract
In this paper we present a new image upscaling (single-image super-resolution) algorithm. It refines a simple pixel decimation with an optimization step that maximizes the smoothness of the second-order derivatives of the image intensity while keeping the sum of the brightness values of each subdivided pixel (i.e., the estimated irradiance on the pixel area) constant. The method is physically grounded and produces images that appear very sharp, with reduced artifacts. Subjective and objective tests demonstrate the high quality of the results.
1. Introduction
Single-image super-resolution is a hot topic in the Computer Graphics and Image Processing communities. Upscaling algorithms are widely applied, for example to enhance printed images or to display low-quality images and videos on high-resolution displays.
As pointed out in a recent review [15], simple kernel-based interpolation suffers from three main problems: (i) it creates oversmoothed images, (ii) it generates jagged artifacts, and (iii) it cannot recover reasonable high-frequency components from the original data. While the first problem can be reduced by applying a sharpening filter (or by directly using a Lanczos kernel, which enhances intensity discontinuities), more complex algorithms are needed to reduce the other two effects. Edge-directed methods adapt the local interpolation to the estimated local edge behavior. They are often based on filling schemes that place the original pixels in an enlarged grid and fill the holes with weighted averages of the neighboring pixels, with weights depending on the edge features [1, 8, 10]. They provide images with reduced jagged artifacts, but the results are often oversmoothed and, in some cases, affected by other kinds of artifacts in high-frequency regions.
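The grid-filling scheme described above can be illustrated with a toy sketch (not a reimplementation of any specific method in [1, 8, 10]): the original pixels are placed on an enlarged grid, and each hole is filled with a weighted average of its known neighbors, where the weighting rule used here (down-weighting neighbors far from the local mean) is an assumption chosen only to make averaging across edges less likely.

```python
import numpy as np

def weighted_avg(nb, eps):
    vals = np.array([v for v in nb if not np.isnan(v)])
    # Edge-adaptive weights (illustrative choice): neighbours whose
    # intensity is far from the local mean contribute less.
    wts = 1.0 / (eps + np.abs(vals - vals.mean()))
    return float((wts * vals).sum() / wts.sum())

def edge_directed_fill(lr, eps=1e-3):
    """Toy edge-directed 2x upscaling by grid filling."""
    h, w = lr.shape
    hr = np.full((2 * h - 1, 2 * w - 1), np.nan)
    hr[::2, ::2] = lr  # original pixels on the even grid positions
    # Pass 1: holes with both indices odd, filled from 4 diagonal neighbours.
    for i in range(1, 2 * h - 1, 2):
        for j in range(1, 2 * w - 1, 2):
            nb = [hr[i-1, j-1], hr[i-1, j+1], hr[i+1, j-1], hr[i+1, j+1]]
            hr[i, j] = weighted_avg(nb, eps)
    # Pass 2: remaining holes, filled from the 4 axial neighbours.
    for i in range(2 * h - 1):
        for j in range(2 * w - 1):
            if np.isnan(hr[i, j]):
                nb = [hr[i + di, j + dj]
                      for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if 0 <= i + di < 2 * h - 1 and 0 <= j + dj < 2 * w - 1]
                hr[i, j] = weighted_avg(nb, eps)
    return hr
```

On a uniform region all weights are equal and the scheme reduces to plain averaging; near an edge, outlier neighbors are suppressed, which is the behavior the methods above exploit.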
Example-based methods [4, 6, 7, 12] learn the relationship between low-resolution and high-resolution patches from a training set of images; the high-resolution image is then reconstructed by merging the detailed patches corresponding to the coarse ones. These methods can provide natural and sharp images and are obviously able to guess reasonable (but not necessarily correct) high-frequency components. Their drawbacks are the computational complexity, the need for a representative training set, and the risk of introducing high-frequency components that do not correspond to the true scene. Learning methods can also be applied to adapt interpolation coefficients to the edge behavior: in the resolution synthesis method [2], for example, low-resolution pixels are first classified in the context of a window of neighboring pixels, and the corresponding high-resolution pixels are then obtained by filtering with coefficients that depend on the classification result.
Another class of methods is based on optimization techniques. Ad hoc constraints define energy functions that are minimized when the high-resolution image is, in some sense, the most probable given the low-resolution one. Several such methods exist, differing mostly in the way they impose edge continuity and sharpness. In [11] a gradient profile prior, derived from the analysis of natural images and relating gradient profiles at different scales, is used to enhance sharpness. In [9] a constraint related to the smoothness of isophote curves is applied. In [13] the Gaussian Point Spread Function in the classical backprojection scheme is locally modified according to a local multiscale edge analysis. In [5], after a grid-filling scheme, the added pixels are refined with constraints related to edge curvature continuity while the gradient components are maximized. In [14] the process generating the high-resolution image is explicitly modeled as a recapturing of the scene in a Bayesian inference framework.
The proposed method adopts an optimization scheme that, in some sense, simulates the capture of the scene with a different sensor. The idea is to upscale images by an integer factor, assuming the constancy of the irradiance incident on the original pixel area. For this reason, the proposed approach is denoted
2010 International Conference on Pattern Recognition
1051-4651/10 $26.00 © 2010 IEEE
DOI 10.1109/ICPR.2010.543
2210
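The core idea stated in the abstract, smoothing second-order derivatives subject to an irradiance-preservation constraint, can be sketched as follows. This is not the paper's actual algorithm: the gradient-descent update, the biharmonic smoothness energy, the step size, and the iteration count are all assumptions chosen to make the constraint-plus-smoothing interplay concrete.

```python
import numpy as np

def laplacian(img):
    # 5-point discrete Laplacian with replicated borders.
    p = np.pad(img, 1, mode='edge')
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img

def upscale_irradiance_preserving(lr, s=2, iters=200, step=0.02):
    # Start from plain pixel replication: each LR pixel becomes an s x s block.
    h, w = lr.shape
    hr = np.kron(lr, np.ones((s, s)))
    for _ in range(iters):
        # Descent step on the second-order smoothness energy sum |Lap(hr)|^2,
        # whose gradient is the biharmonic operator Lap(Lap(hr)).
        hr -= step * laplacian(laplacian(hr))
        # Projection enforcing irradiance preservation: restore the mean of
        # every s x s block to the corresponding LR pixel value, so the sum
        # of the brightness values of each subdivided pixel stays constant.
        block_mean = hr.reshape(h, s, w, s).mean(axis=(1, 3))
        hr += np.kron(lr - block_mean, np.ones((s, s)))
    return hr
```

Because the projection is applied after every smoothing step, the upscaled image always reproduces the low-resolution data exactly when averaged back over each original pixel area, which is the physical grounding the abstract refers to.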