G. Maino and G.L. Foresti (Eds.): ICIAP 2011, Part I, LNCS 6978, pp. 534–543, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Optimal Choice of Regularization Parameter in Image
Denoising
Mirko Lucchese, Iuri Frosio, and N. Alberto Borghese
Applied Intelligent System Laboratory
Computer Science Dept., University of Milan
Via Comelico 39/41 – 20135 Milan Italy
{mirko.lucchese,iuri.frosio,alberto.borghese}@unimi.it
Abstract. The Bayesian approach applied to image denoising gives rise to a
regularization problem. Total variation regularizers were introduced with
the motivation of preserving edges. However, we show here that this may
not always be the best choice for images with low/medium frequency content,
such as digital radiographs. We also draw attention to the metric used to
evaluate the distance between two images and to how this metric can influence
the choice of the regularization parameter. Lastly, we show that the
hyper-surface regularization parameter has little effect on the filtering quality.
Keywords: Denoising, Total Variation Regularization, Bayesian Filtering,
Digital Radiography.
1 Introduction
Poisson data-noise models arise naturally in image processing, where CCD cameras
are often used to measure image luminance by counting the number of incident
photons. The photon counting process is known to have a measurement error that is
modeled by a Poisson distribution [1]. Radiographic imaging, where the number of
counted photons is low (e.g., a maximum count of about 10,000 photons per pixel in
panoramic radiographs [2]), is one of the domains in which the Poisson noise model
has been widely adopted.
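To illustrate the signal-dependent character of this noise, the following sketch (an illustrative example, not code from the paper; the image values and the 10,000-count ceiling are assumptions based on the panoramic-radiography figure cited above) simulates a Poisson photon-counting measurement with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean image: expected photon counts per pixel, ramping
# up to ~10,000, the order of magnitude quoted for panoramic radiographs.
clean = np.linspace(100.0, 10_000.0, 256).reshape(1, -1).repeat(64, axis=0)

# Poisson measurement: each pixel records a count drawn from a Poisson
# distribution whose mean is the true expected count at that pixel.
noisy = rng.poisson(clean).astype(np.float64)

# For a Poisson variable the variance equals the mean, so the relative
# error scales as 1/sqrt(count): dark regions are the noisiest.
snr_low = clean[0, 0] / np.sqrt(clean[0, 0])     # ~10 at 100 counts
snr_high = clean[0, -1] / np.sqrt(clean[0, -1])  # ~100 at 10,000 counts
```

This is why a noise model tied to the count level, rather than an additive Gaussian model with fixed variance, is preferred in low-count radiographic imaging.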
The characteristics of this kind of noise can be taken into account within the
Bayesian filtering framework by developing an adequate likelihood function which is,
apart from a constant term, equivalent to the Kullback–Leibler (KL) divergence [3, 4].
Assuming a Gibbs-type a-priori distribution for the solution image and
considering the negative logarithm of the a-posteriori distribution, the estimation
problem becomes equivalent to a regularization problem [5, 6]. The resulting cost
function, J(.), is a weighted sum of a negative log-likelihood term (the data fit,
J_L(.)) and a regularization term (associated with the a-priori knowledge of the
solution, J_R(.)).
Tikhonov-like (quadratic) regularization often leads to over-smoothed images, and
Total Variation (TV) regularizers, proposed in [7] to better preserve edges, are
nowadays widely adopted. As the resulting cost function is non-linear, iterative
optimization algorithms have been developed to determine the solution [3, 8]. To get