Image Denoising with a Constrained Discrete Total Variation Scale Space

Igor Ciril (1) and Jérôme Darbon (2)

(1) LMCS, IPSA
(2) CMLA, ENS Cachan, CNRS, PRES UniverSud

Abstract. This paper describes an approach for performing image restoration using a coupled differential system that simplifies the image while preserving its contrast. The first process corresponds to a differential inclusion involving discrete Total Variations that progressively simplifies the observed image as time evolves. The second one extracts pertinent geometric information contained in the series of simplified images and recovers the contrast using Bregman distances. The convergence and exact computational properties of the method rely on the discrete and combinatorial properties of discrete Total Variations.

Keywords: Discrete Total Variation, Bregman Distances, Differential Inclusions, Network Flows.

1 Introduction

Minimization of the Total Variation (TV) with a quadratic data fidelity term has been a popular tool for image restoration since the seminal work of [3,23]. Although the solution has sharp boundaries, it is also known that the minimizer may suffer a loss of contrast [17,24]. In this paper we propose an approach that addresses this issue within the framework of a coupled scale-space process.

One popular approach to avoiding the loss of contrast consists in considering robust edge-preserving priors [1,5,8,12,18,19,25] instead of TV. Such regularization terms aim at not over-penalizing large gradients, which are assumed to correspond to contours in the reconstructed image. Among these priors, the Potts model and the truncated-quadratic prior are the best known. However, such priors yield non-convex optimization problems whose global minimizer generally cannot be computed in practice; instead, a local minimum or a critical point is obtained using an approximation algorithm [8].
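For concreteness, the TV restoration model of [3,23] referred to at the beginning of this section is the minimization

```latex
\min_{u} \; \mathrm{TV}(u) + \frac{\lambda}{2} \, \| u - f \|_2^2
```

where f is the observed image, u the restored one, and λ > 0 balances regularization against data fidelity; the loss of contrast mentioned above is a well-documented property of its minimizer [17,24].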
Another approach relies on a Bregman distance procedure originally proposed by Osher et al. in [20]. This scheme is an iterative method that consists of solving a sequence of convex minimization problems, each of which further refines the degraded image. More precisely, the process starts from a constant image and converges toward the observed noisy image. It relies on iterating the following two steps: the normals of the level sets are first filtered using TV, then a surface is approximately fitted to these estimated normals. These two operations are iterated until the reconstructed image satisfies a stopping criterion.

I. Debled-Rennesson et al. (Eds.): DGCI 2011, LNCS 6607, pp. 465–476, 2011.
© Springer-Verlag Berlin Heidelberg 2011
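The two-step Bregman procedure of [20] summarized above can be sketched in a few lines. The sketch below is a minimal illustration, not the coupled differential system of this paper: it alternates TV denoising with adding the residual back, which is the standard Bregman iteration. The inner solver is Chambolle's dual projection algorithm; the parameter names (`theta` for the fidelity scale, `sigma` for an assumed noise level) and all function names are our own choices for this example.

```python
import numpy as np

def _grad(u):
    # Forward differences with periodic boundary (adjoint to -_div below).
    return np.roll(u, -1, axis=0) - u, np.roll(u, -1, axis=1) - u

def _div(px, py):
    # Backward-difference divergence, the negative adjoint of _grad.
    return (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))

def tv_denoise(g, theta, n_iter=100, tau=0.125):
    """Chambolle's projection algorithm for min_u TV(u) + ||u - g||^2 / (2*theta)."""
    px = np.zeros_like(g)
    py = np.zeros_like(g)
    for _ in range(n_iter):
        dx, dy = _grad(_div(px, py) - g / theta)
        norm = 1.0 + tau * np.sqrt(dx ** 2 + dy ** 2)
        px = (px + tau * dx) / norm
        py = (py + tau * dy) / norm
    return g - theta * _div(px, py)

def bregman_tv(f, theta, sigma, max_iter=5):
    """Bregman iteration: TV-denoise f plus the accumulated residual, repeat."""
    b = np.zeros_like(f)                       # accumulated residual (Bregman variable)
    u = np.zeros_like(f)
    for _ in range(max_iter):
        u = tv_denoise(f + b, theta)           # step 1: TV simplification
        b += f - u                             # step 2: add the residual (contrast) back
        if np.linalg.norm(f - u) <= sigma * np.sqrt(f.size):
            break                              # discrepancy-style stopping rule
    return u
```

Each outer iteration restores some of the contrast removed by the inner TV step, which is why the sequence moves from an over-smoothed image back toward the noisy observation; stopping at the noise level yields the denoised result.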