IMAGE DECONVOLUTION USING TREE-STRUCTURED BAYESIAN GROUP SPARSE MODELING

Ganchi Zhang, Timothy D. Roberts, Nick Kingsbury
Signal Processing Group, Dept. of Engineering, University of Cambridge, UK

ABSTRACT

In this paper, we propose to incorporate wavelet tree structures into a recently developed wavelet modeling method called VBMM. We show that, using overlapped groups, tree-structured modeling can be integrated into the high-performance non-convex sparsity-inducing VBMM method, and can achieve significant performance gains over the coefficient-sparse version of the algorithm.

Index Terms— Image deconvolution, wavelet tree modeling, variational Bayesian, dual-tree complex wavelets.

1. INTRODUCTION

Image deconvolution arises in many applications of image processing. The objective is to estimate the clean image x from a blurred image y, usually based on a linear observation model:

    y = Hx + n                                                        (1)

where H is an M × M matrix which approximates the convolution, and n is additive Gaussian noise with variance ν². In general, this inverse problem is highly ill-posed: the direct operator either has no inverse or is nearly singular, so that its inverse is very sensitive to noise [1]. Previous works have found that wavelet-based tools, such as the Discrete Wavelet Transform (DWT), are powerful for handling this ill-posedness [2, 3, 4]. Most of them are based on regularization or Bayesian frameworks, which rely largely on sparsity assumptions in wavelet-based priors/regularizers, motivated by the fact that natural images can be represented by relatively few coefficients in the wavelet domain [2]. Wavelet coefficients can often be modeled by heavy-tailed priors belonging to the Gaussian scale mixture (GSM) class, which captures the local dependencies among different wavelet coefficients [3, 5].

It is also well established that large/small wavelet coefficients persist strongly across scales [6, 7].
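The observation model (1) can be sketched numerically. The following minimal example (an illustrative assumption, not the paper's implementation) realizes H as a circular convolution with a small uniform blur kernel, so Hx is computed efficiently in the Fourier domain, and adds Gaussian noise of standard deviation ν:

```python
import numpy as np

def observe(x, kernel, noise_std, seed=None):
    """Simulate y = Hx + n, with H a periodic (circular) blur and
    n ~ N(0, noise_std**2) i.i.d. Gaussian noise."""
    rng = np.random.default_rng(seed)
    # Circular convolution via the FFT: this realizes H as a
    # block-circulant matrix, a common approximation of the blur operator.
    K = np.fft.fft2(kernel, s=x.shape)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(x) * K))
    return blurred + noise_std * rng.standard_normal(x.shape)

# Illustrative 8x8 test image with a 3x3 uniform blur (values assumed)
x = np.zeros((8, 8))
x[3:5, 3:5] = 1.0
kernel = np.full((3, 3), 1.0 / 9.0)
y = observe(x, kernel, noise_std=0.01, seed=0)
```

Because the uniform kernel sums to one and the convolution is circular, the blur preserves the total intensity of x, and the deconvolution task is to recover x from y given the kernel and noise level.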
Such patterns can be well represented using a tree structure in which parent and child coefficients at a given location and adjacent scales are both large or small [6]. Fig. 1 depicts an example of the quadtree structure corresponding to an 8×8 image with a 3-level 2D DWT decomposition.

[Fig. 1. (a) 8×8 image with 3-level 2D DWT decomposition. (b) Quadtree structure of the wavelet coefficients.]

There are many methods to model this wavelet tree structure, such as bivariate shrinkage [8], the Hidden Markov Tree (HMT) [9, 10] and overlapping-group penalties [6, 11, 12]. Integrating such a tree approximation has been shown to significantly improve recovery performance [13].

This paper builds on the hierarchical Bayesian modeling of wavelet coefficients proposed in [14], which is derived from a group-sparse GSM model. Based on a combination of variational Bayesian (VB) inference with a subband-adaptive Majorization Minimization (MM) method, the VBMM algorithm in [14] effectively simplifies computation of the posterior distribution and finds good solutions in the non-convex search space. VBMM has also demonstrated the potential of group-sparse modeling: for instance, the real and imaginary parts of the dual-tree complex wavelet transform (DT CWT) coefficients are clustered into single groups for Bayesian inference [14]. However, tree-structured dependencies among wavelet coefficients were not fully exploited by VBMM in [14].

To achieve a fully group-sparse solution, in this paper we propose a new image deconvolution algorithm which combines the VBMM model with a wavelet tree structure. The grouping strategies "parent+1child" and "parent+4children" are explored. The experimental results show that both strategies yield significantly improved performance compared with VBMM without an imposed group structure.
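The two grouping strategies can be sketched using the standard DWT quadtree parent-child relation illustrated in Fig. 1: a parent coefficient at position (r, c) in a coarse subband has four children at positions (2r, 2c), (2r, 2c+1), (2r+1, 2c) and (2r+1, 2c+1) at the next finer scale. The helper functions below are hypothetical illustrations of the index grouping, not the paper's implementation:

```python
def group_parent_4children(r, c):
    """'parent+4children': group a parent coefficient at (r, c) in a
    coarse subband with its four children at the next finer scale."""
    parent = (r, c)
    children = [(2 * r, 2 * c), (2 * r, 2 * c + 1),
                (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]
    return [parent] + children

def group_parent_1child(r, c):
    """'parent+1child': pair a child coefficient at (r, c) in a fine
    subband with its single parent at the next coarser scale.  Since the
    parent appears in four such pairs, the groups overlap."""
    return [(r // 2, c // 2), (r, c)]
```

The overlap between groups is what couples coefficients across scales: a large parent encourages its children to be retained (and vice versa), which is the persistence property the tree model is designed to exploit.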
An important contribution of this paper is a new framework which incorporates a wavelet tree structure into an empirical Bayesian derivation.

The paper is organized as follows. Section 2 describes the key formulations of our model and its grouping strategies. Section 3 presents the continuation strategy of the proposed algorithm. Experimental results are reported in Section 4.