P. Foggia, C. Sansone, and M. Vento (Eds.): ICIAP 2009, LNCS 5716, pp. 711–720, 2009. © Springer-Verlag Berlin Heidelberg 2009

Denoising of Digital Radiographic Images with Automatic Regularization Based on Total Variation

Mirko Lucchese and N. Alberto Borghese

Applied Intelligent Systems Laboratory (AIS-Lab), Department of Computer Science, University of Milano, Via Comelico 39, 20135 Milano, Italy
{mirko.lucchese,alberto.borghese}@unimi.it

Abstract. We report here a principled method for setting the regularization parameter in total variation filtering, based on the analysis of the distribution of the gray levels in the noisy image. We also report the results of an experimental investigation of the application of this framework to very low photon count digital radiography, which shows the effectiveness of the method in denoising such images. Total variation regularization leads to a non-linear optimization problem that is solved here with a new-generation adaptive first-order method. The results suggest further investigation of both the convergence criteria and the scheduling of the optimization parameters of this method.

Keywords: Digital radiography, total variation filtering, regularization, Bayesian filtering, gradient descent minimization.

1 Introduction

Radiographic images are produced by converting the number of X-ray photons that hit the sensor inside the area of each pixel into a gray level. Thanks to the sensitivity of modern detectors, the radiation dose is getting lower and lower: digital panoramic images are produced with a maximum photon count of around 10,000 on 14 bits, an almost one-to-one correspondence between the number of photons and the gray levels. This greatly increases the resolution of the imaging system, but it also requires careful noise elimination to make the images most readable to clinicians.
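The near one-to-one mapping between photon counts and 14-bit gray levels can be sketched numerically. The snippet below is an illustrative simulation, not code from the paper: the pixel means and the linear count-to-gray mapping are assumptions chosen to match the figures quoted above (a maximum count of about 10,000 and a 2^14 = 16,384-level gray scale).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expected photon fluxes per pixel (illustrative values only).
expected_photons = np.array([100.0, 1000.0, 10000.0])

# Photon detection is modeled as a Poisson process: each pixel's count
# is a Poisson sample whose mean is the expected flux at that pixel.
counts = rng.poisson(expected_photons)

# Assumed linear mapping: a maximum count near 10,000 spread over the
# 14-bit range (0..16383) gives nearly one gray level per 0.6 photons.
max_count = 10000
gray = np.clip(counts, 0, max_count) * (2**14 - 1) // max_count

# For Poisson noise, the relative error std/mean equals 1/sqrt(mean),
# so it grows as the dose (and hence the photon count) drops.
relative_noise = 1.0 / np.sqrt(expected_photons)
```

At a mean of 10,000 photons the relative noise is about 1%, but at 100 photons it reaches 10%, which is why low-dose images need careful denoising.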
In radiographic images, noise is mainly introduced by errors in the photon count and is therefore modeled through a Poisson distribution [1, 2]. Traditional denoising approaches like linear filtering are inadequate for removing this kind of noise, as they generally tend to smooth the edges in the image significantly, and different approaches have been investigated. Among these, regularization theory is particularly suitable, as it clearly expresses the two goals of a denoising process: obtaining a function as close as possible to the measured data and penalizing solutions that have undesirable properties. In this framework, a cost function is written as a weighted sum of a distance between the measured and the true data and a function that penalizes "bad" solutions [3]. In the original formulation, the squared difference was used to express the distance between the true and the measured data, and the square of the gradient was used as the penalization term. This formulation leads to a quadratic cost function that is convex and calls