Numerical Algorithms · DOI 10.1007/s11075-015-0047-x · Original Paper
Image deblurring by sparsity constraint on the Fourier coefficients

Marco Donatelli¹ · Thomas Huckle² · Mariarosa Mazza¹ · Debora Sesana¹

Received: 22 May 2015 / Accepted: 25 August 2015
© Springer Science+Business Media New York 2015

¹ Università dell'Insubria, Via Valleggio 11, 22100 Como, Italy
² Technische Universität München, Boltzmannstraße 3, 85748 Garching, Germany

Mariarosa Mazza: [email protected] · Marco Donatelli: [email protected] · Thomas Huckle: [email protected] · Debora Sesana: [email protected]

The work of the first, third and fourth authors has been partly supported by the Italian grant MIUR PRIN 2012 N. 2012MTE38N.

Abstract  This paper is concerned with the image deconvolution problem. For the basic model, where the convolution matrix can be diagonalized by the discrete Fourier transform, the Tikhonov regularization method is computationally attractive, since the associated linear system can be easily solved by fast Fourier transforms. On the other hand, the solutions it provides are usually oversmoothed, and other regularization terms are often employed to improve the quality of the restoration; of course, this increases the computational cost of the regularization method. Starting from the fact that images have sparse representations in the Fourier and wavelet domains, many deconvolution methods have recently been proposed with the aim of minimizing the ℓ1-norm of these transformed coefficients. This paper uses the iteratively reweighted least squares strategy to introduce a diagonal weighting matrix in the Fourier domain. The resulting linear system is diagonal, and hence the regularization parameter can be easily estimated, for instance by generalized cross validation. The method benefits from a proper initial approximation, which can be the observed image or the Tikhonov approximation; therefore, embedding this method in an outer iteration may yield further improvement of the solution. Finally, since some properties of the observed image, like continuity or sparsity, are obviously changed when working in the Fourier domain, we introduce a filtering factor which keeps the large singular values unchanged and preserves the jumps in the Fourier coefficients related to the low frequencies. Numerical examples are given in order to show the effectiveness of the proposed method.

Keywords  Image deblurring · Fourier coefficients · Sparse reconstruction · Tikhonov regularization · Filtering methods

Mathematics Subject Classification (2010)  65F22 · 65T50 · 65R32
1 Introduction

We are interested in the following large-scale linear ill-posed inverse problem

    Ax + \eta = b,    (1)

where A ∈ R^{n×n} is the matrix associated with the blur operator, x ∈ R^n is the true image (more precisely, x is a stack-ordered vector of an image with n pixels), η ∈ R^n is the unknown perturbation due to noise, and b ∈ R^n is the observed image. The aim is to restore, as well as possible, the true image x given A and b. The structure of the matrix A depends both on the Point Spread Function (PSF), which we assume space invariant, and on the strategy adopted to deal with boundary artifacts [13, 20, 21, 23].

Since the blur operator A can be very ill-conditioned, the direct solution of the linear system (1) is dominated by the noise. To avoid this drawback, a regularization technique has to be applied: for instance, we can consider the well-known Tikhonov method, which solves

    \min_{x \in \mathbb{R}^n} \|Ax - b\|_2^2 + \alpha R(x),    (2)

where α > 0 is called the regularization parameter, ‖Ax − b‖₂² is the data fitting term, and R(x) is the penalty term. A common choice is

    R(x) = \|Lx\|_2^2,    (3)

where L is typically the identity matrix I or an approximation of a derivative operator which guarantees that N(A) ∩ N(L) = {0}, where N(M) denotes the null space of the matrix M. The term R(x) should nearly vanish for x close to the true image and should penalize only noise components. Furthermore, R(x) can take into account known properties of the true solution, like continuity or sparsity. The method (2)-(3) is called the generalized Tikhonov method and is equivalent to the linear system

    (A^* A + \alpha L^* L)\, x = A^* b,    (4)

where M* denotes the conjugate transpose of M. If the coefficient matrix A and the matrix L*L are diagonalizable by the same unitary fast transform, the system (4) can be solved by a fast transform based direct method. Moreover, the regularization
parameter α > 0 can be efficiently estimated, for instance by the Generalized Cross Validation (GCV) that we use in this paper.

Unfortunately, Tikhonov regularization provides oversmoothed reconstructions and does not preserve the edges of the original image. Therefore, other penalty terms R(x) have been proposed in the literature, like total variation regularization to preserve the edges [22], or

    R(x) = \|x\|_1,    (5)

to impose sparsity in the restored image [26]. Clearly, the solution of the resulting nonlinear problem is much more time consuming than Tikhonov regularization [24, 26]. A useful strategy to reduce such nonlinear complexity is Iteratively Reweighted Least Squares (IRLS) [3, 25]. Recently, the IRLS method has been successfully applied to deal with the ℓ1 regularization term (5) in connection with iterative methods, like the preconditioner proposed in [14] for the conjugate gradient method, or the hybrid Arnoldi-Tikhonov methods proposed in [10]. The IRLS method approximates the regularization term (5) by a term of the form (3) with a diagonal invertible matrix L. Despite the simplicity of L, the structure of the coefficient matrix in (4) is lost and such linear systems can no longer be solved by fast transforms.

Starting from the a-priori information that every image has a sparse representation in the wavelet or Fourier coefficients (see [18]), a further class of regularization methods investigated in the last ten years considers the following regularization problem

    \min_{\tilde{x} \in \mathbb{C}^m} \|A W^* \tilde{x} - b\|_2^2 + \alpha \|\tilde{x}\|_1,    (6)

where x̃ = Wx and W ∈ C^{m×n}, with m ≥ n, is a wavelet or tight-frame synthesis operator such that W*W = I, cf. [4, 9]. The minimization problem (6) is usually called the synthesis approach. If m = n and A = W* Λ_A W holds, where Λ_A is the diagonal matrix containing the eigenvalues of A, then W is a unitary transform and the data fitting term in (6) becomes

    \|A W^* \tilde{x} - b\|_2^2 = \|\Lambda_A \tilde{x} - W b\|_2^2,    (7)
which can be computed easily by vector operations only. In this paper we consider the synthesis approach (6), where A can be diagonalized by W, and the IRLS method is used to approximate the regularization term ‖x̃‖₁ through a diagonal matrix built from an approximation of x̃ that can be computed by an easy and fast regularization strategy. Furthermore, we also include a filtering factor to avoid spoiling the information related to the large singular values. Since the Fourier transformed data vector x̃ can exhibit jumps, we propose a further approach designed to preserve the discontinuities of the data. Summarizing, the presented technique exploits the properties of both the blurring operator and the data and, in addition, it allows a cheap computation of an optimal regularization parameter.

For a clear presentation of our new method, we choose the Fourier domain, i.e., W is the discrete Fourier transform. Hence we consider the classical image deblurring model where the matrix A is a Block Circulant with Circulant Blocks (BCCB) matrix, which is diagonalizable by the Discrete Fourier Transform (DFT) [13]. Nevertheless, the same approach can be extended to other structures, for instance to reflective
boundary conditions when the PSF is symmetric in every direction, because in this case the matrix A can be diagonalized by discrete cosine transforms [20].

Combining (7) with the diagonally weighted approximation, e.g. of ‖x̃‖₁, and direct filtering, the problem (6) is equivalent to a diagonal linear system (see Section 4 for details) and hence can be easily solved in O(n) arithmetic operations. Thanks to the diagonal form of the resulting linear system, the estimation of the regularization parameter α can also be achieved efficiently. In this paper we use the GCV, but other approaches can be applied without any effort. Finally, since the whole algorithm requires only three Fast Fourier Transforms (FFTs) plus O(n) computations, we propose an iterative refinement strategy similar to the one adopted in [14].

The outline of the paper is as follows. In Section 2 we describe the classical deblurring model by generalized Tikhonov regularization with a BCCB matrix A and α estimated by GCV. Section 3 discusses the approximation of (5) by the weighted norm (3) with a diagonal matrix, in order to preserve the sparsity of the computed approximation. Here we also allow other diagonal seminorms that incorporate, e.g., filtering. Our new method is defined and analyzed in Section 4. Some numerical experiments are presented in Section 5, and concluding remarks complete the paper in Section 6.
2 Tikhonov regularization and FFT

First of all, let us recall that if we deal with the reconstruction of images with n₁ × n₂ pixels, the unknown x in (1) is a stack-ordered vector of length n = n₁n₂. Throughout the paper, ⊗ denotes the Kronecker (tensor) product, F = F_{n₂} ⊗ F_{n₁} ∈ C^{n₁n₂ × n₁n₂} is the DFT matrix with

    F_m = \frac{1}{\sqrt{m}} \left[ e^{-2\pi \mathrm{i} jk/m} \right]_{j,k=0}^{m-1}, \qquad \mathrm{i} = \sqrt{-1},

Λ_M = diag(λ_{M,i})_{i=1,...,n}, where λ_{M,i} are the eigenvalues of M ∈ R^{n×n}, and tr(M) is the trace of the matrix M.

It is well known that, when imposing periodic boundary conditions, the matrix A in (1) is a BCCB matrix. Such matrices can be spectrally decomposed as

    A = F^* \Lambda_A F,    (8)

where the diagonal elements of Λ_A are obtained by applying F to the first column of A, see [13]. If L*L too can be diagonalized through F, that is

    L^* L = F^* \Lambda_{L^*L} F,    (9)
the generalized Tikhonov method becomes

    \min_{x \in \mathbb{R}^n} \|Ax - b\|_2^2 + \alpha \|Lx\|_2^2 \iff F^* (\Lambda_A^* \Lambda_A + \alpha \Lambda_{L^*L}) F x = F^* \Lambda_A^* F b,

and hence

    x = F^* (\Lambda_A^* \Lambda_A + \alpha \Lambda_{L^*L})^{-1} \Lambda_A^* F b.    (10)
Choice of L. As already mentioned in Section 1, the matrix L can be chosen as the identity matrix or derived from discretizations of derivative operators (with appropriate boundary conditions [7]). In these cases L does not depend on A. A possible alternative is to define L via A, e.g. in the form

    L^* L = I - \left( \frac{A^* A}{\rho(A)^2} \right)^p, \qquad \rho(A) = \max_{i=1,\dots,n} |\lambda_{A,i}|,    (11)

with some exponent p, e.g. p = 1 (see [16, 17]).

Choice of α. Finding a good choice for the regularization parameter α is a difficult task. Actually, we cannot determine the optimal α, but only give an estimate of it. A widely used method to estimate α is the GCV, see [12]. For the generalized Tikhonov method, the GCV determines the value of α that minimizes the GCV functional

    G_{\mathrm{Tik}}(\alpha) = \frac{\|(I - A A^\dagger_{\mathrm{reg}})\, b\|_2^2}{\left( \mathrm{tr}(I - A A^\dagger_{\mathrm{reg}}) \right)^2},    (12)

where A^†_reg = (A*A + αL*L)^{-1} A* is the Tikhonov regularized pseudoinverse. When A and L*L can both be diagonalized through F, the GCV functional becomes

    G_{\mathrm{Tik}}(\alpha) = \frac{\|(I - \Lambda_A (\Lambda_A^* \Lambda_A + \alpha \Lambda_{L^*L})^{-1} \Lambda_A^*)\, \tilde{b}\|_2^2}{\left( \mathrm{tr}(I - \Lambda_A (\Lambda_A^* \Lambda_A + \alpha \Lambda_{L^*L})^{-1} \Lambda_A^*) \right)^2} = \frac{\sum_{i=1}^n \mu_i^2 |\tilde{b}_i|^2}{\left( \sum_{i=1}^n \mu_i \right)^2},    (13)

where μ_i = λ_{L*L,i}/(|λ_{A,i}|² + α λ_{L*L,i}) and b̃ = Fb, cf. [13]. Therefore, the evaluation of G_Tik at a point α requires only O(n) operations, assuming that the eigenvalues of A and of L*L have already been computed.
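The whole Fourier-domain pipeline, eigenvalues of A obtained by an FFT of the PSF, GCV evaluation as in (13), and the diagonal solve (10), fits in a few lines. The following is a hedged sketch, not the authors' code: NumPy, L = I, a synthetic Gaussian PSF under periodic boundary conditions, and a crude grid search over α standing in for a proper one-dimensional minimizer.

```python
# Sketch of generalized Tikhonov regularization in the Fourier domain,
# assuming periodic boundary conditions (A is BCCB) and L = I.
import numpy as np

rng = np.random.default_rng(0)
n1 = n2 = 32
x_true = np.zeros((n1, n2))
x_true[8:24, 8:24] = 1.0                      # toy piecewise-constant image

# Gaussian PSF centered at pixel (0, 0), so fft2(psf) = eigenvalues of A
d1 = np.minimum(np.arange(n1), n1 - np.arange(n1))
d2 = np.minimum(np.arange(n2), n2 - np.arange(n2))
psf = np.exp(-(d1[:, None]**2 + d2[None, :]**2) / (2 * 2.0**2))
psf /= psf.sum()

lam_A = np.fft.fft2(psf)                      # Lambda_A as in (8)
b = np.real(np.fft.ifft2(lam_A * np.fft.fft2(x_true)))
b += 1e-3 * rng.standard_normal(b.shape)      # observed image

bt = np.fft.fft2(b)                           # b~ = F b
lam_LL = np.ones((n1, n2))                    # L = I  =>  Lambda_{L*L} = I

def gcv_tik(alpha):
    # mu_i = lam_{L*L,i} / (|lam_{A,i}|^2 + alpha lam_{L*L,i}), as in (13)
    mu = lam_LL / (np.abs(lam_A)**2 + alpha * lam_LL)
    return np.sum(mu**2 * np.abs(bt)**2) / np.sum(mu)**2

alphas = np.logspace(-8, 0, 60)
alpha = alphas[int(np.argmin([gcv_tik(a) for a in alphas]))]

# x = F*(Lambda_A^* Lambda_A + alpha Lambda_{L*L})^{-1} Lambda_A^* F b, eq. (10)
x_tik = np.real(np.fft.ifft2(np.conj(lam_A) * bt
                             / (np.abs(lam_A)**2 + alpha * lam_LL)))
```

Note that the three FFTs mentioned in Section 1 are visible here: one for the PSF, one for b, and one inverse transform at the end; the grid over α costs only O(n) per evaluation.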
3 IRLS Tikhonov regularization

In [14] the authors introduced a data-based regularization technique to improve the reconstruction of the generalized Tikhonov method by incorporating the values of the image in the penalty term through the IRLS method. More precisely, given an approximation y of the true image, e.g. computed by applying the generalized Tikhonov method with L = I, the idea underlying this technique is to construct the diagonal matrix D_y ∈ R^{n×n},

    (D_y)_{ii} = \begin{cases} |y_i| & \text{if } |y_i| > \varepsilon, \\ \varepsilon & \text{otherwise}, \end{cases} \qquad i = 1, \dots, n,    (14)

with 0 < ε ≪ 1 in order to guarantee the invertibility of D_y, and to choose L = D_y^{-1/2} in the generalized Tikhonov method. In detail, the problem to be solved becomes

    \min_{x \in \mathbb{R}^n} \|Ax - b\|_2^2 + \alpha \|D_y^{-1/2} x\|_2^2.    (15)

Note that, for x ∈ R^n such that |x_i| > ε, i = 1, …, n, the following equality holds:

    \|D_x^{-1/2} x\|_2^2 = x^* D_x^{-1} x = \|x\|_1.
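The equality above is easy to verify numerically; D_y is just a diagonal of moduli clipped from below by ε as in (14). A minimal sketch (the helper name irls_weight is ours, not the paper's):

```python
import numpy as np

def irls_weight(y, eps=1e-8):
    """Diagonal of D_y as in (14): |y_i| where |y_i| > eps, else eps."""
    return np.maximum(np.abs(y), eps)

x = np.array([3.0, -0.5, 2.0, 0.25])   # all entries with |x_i| > eps
d = irls_weight(x)
# ||D_x^{-1/2} x||_2^2 = sum_i x_i^2 / |x_i| = sum_i |x_i| = ||x||_1
lhs = np.sum(x**2 / d)
assert np.isclose(lhs, np.sum(np.abs(x)))   # both equal 5.75 here
```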
Hence, the reason for the aforementioned choice of L lies in the fact that

    y \approx x \quad \Longrightarrow \quad \|D_y^{-1/2} x\|_2^2 \approx \|x\|_1.    (16)

Therefore, the regularization term in (15) is a good approximation of the regularization term (5) whenever y is a good approximation of x. Several numerical tests show that the approximation (16) of ‖x‖₁ is not very sensitive to the choice of ε. In other words, this method is a particular generalized Tikhonov method where the minimization problem (15) is equivalent to the linear system

    (A^* A + \alpha D_y^{-1})\, x = A^* b,    (17)

which, as shown in [14], for sparse images provides better reconstructions than the generalized Tikhonov method with L = I. A drawback of this technique is that the diagonal matrix D_y^{-1} prevents the use of fast transform based algorithms for solving the linear system (17).

In order to improve the restoration starting from an approximation y, one of the methods known in the literature is iterated Tikhonov regularization [8]

    \min_{x \in \mathbb{C}^n} \|Ax - b\|_2^2 + \alpha \|x - y\|_2^2,    (18)

where the jumps are nearly removed by considering the difference between the current approximation x and the previous one y. Exploiting the same idea as method (18), we can consider the following minimization problem

    \min_{x \in \mathbb{C}^n} \|Ax - b\|_2^2 + \alpha \|D_y^{-1} x\|_2^2,    (19)

which can be seen as a multiplicative iterated Tikhonov regularization, with the difference replaced by the quotient. Note that (19) maps the data vector D_y^{-1} x approximately onto the vector 1 = (1, …, 1)^T, which is smooth and not sparse, so the method (19) should be used in connection with a further (filtering) factor L that removes the regularizing effect for smooth vectors, e.g. the discretization of the derivatives or an operator-dependent matrix like (11). Hence, (19) allows one to treat the data as a smooth and continuous vector and to apply related regularization techniques [16, 17].
4 IRLS Tikhonov regularization in the Fourier domain

In this section we show how to apply the IRLS strategy for Tikhonov regularization in the Fourier domain. The aim is to work with generic images, not necessarily sparse, while preserving the computational efficiency of the FFT. More precisely, instead of solving (15) in the space domain, we take advantage of the well-known sparsity of the Fourier coefficients of an image (see the example below and [18] for more details) and solve it in the Fourier domain.

Example (sparsity of the Fourier coefficients of an image). In Fig. 1a we represent the moduli of the Fourier coefficients of the non-sparse cameraman image (see Fig. 4) as a surface plot, obtaining a surface which is almost squashed onto the plane
Fig. 1 (a) Moduli of the Fourier coefficients of the cameraman image of size 256 × 256, represented as a surface plot; (b) North-West corner of size 32 × 32 of the moduli of the Fourier coefficients, displayed as an image; (c) South-East corner of size 32 × 32 of the moduli of the Fourier coefficients, displayed as an image
z = 0. In other words, if those moduli are displayed as a scaled image, the resulting image is black everywhere except for very few pixels in the corners. To highlight the presence of nonzero pixels in the corners, Fig. 1b and c show a zoom of the North-West and South-East corners, respectively.

4.1 Tikhonov regularization and filtering with Sparse Fourier coefficients (TSF)

Regularization. Let x̃ = Fx, x̃ ∈ C^n, be the Fourier coefficients of x. Reformulating the problem (15) in terms of x̃ instead of x, it becomes

    \min_{\tilde{x} \in \mathbb{C}^n} \|A F^* \tilde{x} - b\|_2^2 + \alpha \|D_{\tilde{y}}^{-1/2}\, \tilde{x}\|_2^2,    (20)

where y is an approximation of x and ỹ = Fy are its Fourier coefficients. To come back to the space domain it is sufficient to apply an inverse transform to the computed solution of (20), i.e., x = F* x̃. The initial approximation y can be computed, for example, by the Tikhonov method. If y is a good approximation of x, then, since ‖y − x‖₂ = ‖ỹ − x̃‖₂, ỹ is also a good approximation of x̃, and hence the regularization term satisfies

    \|D_{\tilde{y}}^{-1/2}\, \tilde{x}\|_2^2 \approx \|\tilde{x}\|_1.
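The sparsity exploited in the example above can be reproduced on any piecewise-smooth image. Below is a sketch with a synthetic image standing in for cameraman (which is not bundled here), measuring how small a fraction of the Fourier coefficients carries 99% of the energy:

```python
import numpy as np

# Synthetic piecewise-smooth image: a smooth oscillation plus a jump
n = 256
u, v = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
img = np.sin(3 * np.pi * u) * np.cos(2 * np.pi * v)
img[n // 4: n // 2, n // 4: n // 2] += 1.0      # a discontinuity, as in real images

mod = np.abs(np.fft.fft2(img))          # moduli of the Fourier coefficients
s = np.sort(mod.ravel())[::-1]          # sorted, largest first
energy = np.cumsum(s**2)
k = int(np.searchsorted(energy, 0.99 * energy[-1])) + 1
frac = k / mod.size                     # fraction carrying 99% of the energy
```

For images of this kind, frac is a few percent at most, which is exactly the structure the diagonal weight D_ỹ is designed to exploit.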
It follows that the problem (20) is an approximation of the regularization problem (6) in the synthesis approach, with W = F.

Filtering. In the previous approach we introduced a minimization problem in which the penalty term also modifies the large singular values of the matrix A, that is, those associated with the low frequencies, which are less sensitive to noise. To preserve the main features of the solution, the large singular values should remain almost unchanged. In order to avoid this unwanted alteration of the low frequency components, we introduce a filtering factor Φ constructed as follows.

Fig. 2 Λ_{L*L} for L*L as in (24): (a) p = 1; (b) p = 1/16

Looking at Fig. 1a, we can observe that most of the entries of x̃ are equal to zero, that is, x̃ is sparse. The nonzero elements are located at the four corners, corresponding to the low frequencies (those that we want to preserve). Let us partition the set of indices {1, 2, …, n} = L ∪ H, with L ∩ H = ∅, where L and H contain the indices associated with the low and high frequencies, respectively. We then define x̃_HF as the vector with the Fourier coefficients of x̃ associated with the indices in H, and zero elsewhere (i.e. for the indices in L). Exploiting this notation, we can introduce a filtering factor Φ in (20) in order to obtain an approximation of ‖x̃_HF‖₁ instead of ‖x̃‖₁, that is

    \|\Phi D_{\tilde{y}}^{-1/2}\, \tilde{x}\|_2^2 \approx \|\tilde{x}_{\mathrm{HF}}\|_1.

This can be achieved by setting Φ as a diagonal matrix such that

    (\Phi)_{ii} \approx \begin{cases} 0, & i \in \mathcal{L}, \\ 1, & i \in \mathcal{H}, \end{cases} \qquad i = 1, \dots, n.    (21)

For x̃ ∈ C^n such that |x̃_i| > ε, i = 1, …, n, it then holds that

    \|\Phi D_{\tilde{x}}^{-1/2}\, \tilde{x}\|_2^2 \approx \|\tilde{x}_{\mathrm{HF}}\|_1,    (22)

where, if the approximation in (21) is an equality, then the one in (22) also becomes an equality. By setting Φ = Λ_{L*L}^{1/2}, where L is a matrix chosen as in Section 2 (L ≠ I) for which L*L can be diagonalized through F as in (9), we remove precisely the unwanted modification of the large singular values of A caused by the large coefficients of the image in the Fourier domain (those in the four corners of the 2D array, as in Fig. 1a, associated with the low frequencies).

Similarly to what we did in the spatial domain, where we considered (19) in place of (15), we can change the exponent in the penalty term of (20) and pass from ‖Φ D_ỹ^{-1/2} x̃‖₂² to ‖Φ D_ỹ^{-1} x̃‖₂². This means that we no longer approximate ‖x̃‖₁; instead, we use a norm that avoids penalizing the discontinuities in the Fourier coefficients. Regarding the filtering factor Φ, also in this case the aim is to switch off the regularization for the large singular values of A and to preserve the jumps (nonsmoothness) in the nonzero Fourier coefficients of the image (related to the low frequencies); hence we can use Φ defined as above. Summarizing, the generalized version of the method (20) becomes

    \min_{\tilde{x} \in \mathbb{C}^n} \|A F^* \tilde{x} - b\|_2^2 + \alpha \|\Lambda_{L^*L}^{1/2} D_{\tilde{y}}^{-q}\, \tilde{x}\|_2^2,    (23)
where in the following we consider both cases, q = 1/2 and q = 1. Actually, rather than (11), we consider

    L^* L = I - \left( \frac{A^* A}{\rho(A)^2} \right)^p,    (24)

since for 0 < p < 1 it involves a larger set of indices L. Note that for p = 1 the matrix in (24) coincides with (11). In Fig. 2 we show the eigenvalues Λ_{L*L} of L*L defined as in (24), with A the BCCB matrix associated with the GaussianBlur420 PSF of Example 2 in Section 5. As highlighted in Fig. 2a, when p = 1 the matrix Λ_{L*L} behaves almost like the identity matrix and the set L contains few indices, while Fig. 2b makes clear that for p = 1/16 the cardinality of L increases. The optimal choice of the parameter p is beyond the aims of this paper, so in all the numerical tests of Section 5 we fix p = 1/16.

When the blurring matrix A can be diagonalized by F as in (8), the problem (23) can be reformulated as

    \min_{\tilde{x} \in \mathbb{C}^n} \|\Lambda_A \tilde{x} - \tilde{b}\|_2^2 + \alpha \|\Lambda_{L^*L}^{1/2} D_{\tilde{y}}^{-q}\, \tilde{x}\|_2^2,    (25)

where b̃ = Fb. Note that if L is such that N(A) ∩ N(L) = {0}, as for the generalized Tikhonov method, then N(Λ_A) ∩ N(Λ_{L*L}) = {0} and hence

    N(\Lambda_A) \cap N\!\left( \Lambda_{L^*L}^{1/2} D_{\tilde{y}}^{-q} \right) = \{0\},

which guarantees the well-posedness of (25). We observe that the problem (25) is equivalent to the linear system

    (\Lambda_A^* \Lambda_A + \alpha \Lambda_{L^*L} D_{\tilde{y}}^{-2q})\, \tilde{x} = \Lambda_A^* \tilde{b},
which is diagonal and therefore very easy to solve, in O(n) operations. Furthermore, the GCV functional (12) for the problem (25) becomes

    G_{\mathrm{TSF}}(\alpha) = \frac{\|(I - \Lambda_A (\Lambda_A)^\dagger_{\mathrm{reg}})\, \tilde{b}\|_2^2}{\left( \mathrm{tr}(I - \Lambda_A (\Lambda_A)^\dagger_{\mathrm{reg}}) \right)^2},

with

    (\Lambda_A)^\dagger_{\mathrm{reg}} = (\Lambda_A^* \Lambda_A + \alpha \Lambda_{L^*L} D_{\tilde{y}}^{-2q})^{-1} \Lambda_A^*.

Similarly to the form (13) of the GCV functional for the generalized Tikhonov method, it holds that

    G_{\mathrm{TSF}}(\alpha) = \frac{\sum_{i=1}^n \xi_i^2 |\tilde{b}_i|^2}{\left( \sum_{i=1}^n \xi_i \right)^2},    (26)

where

    \xi_i = \frac{\lambda_{L^*L,i} / (D_{\tilde{y}})_{ii}^{2q}}{|\lambda_{A,i}|^2 + \alpha\, \lambda_{L^*L,i} / (D_{\tilde{y}})_{ii}^{2q}}.
Summarizing, our TSF algorithm is characterized by the following steps:

x = TSF(A, L, b, q, ỹ)    (27)
1. Compute Λ_A and Λ_{L*L} by two FFTs
2. b̃ = Fb
3. Compute D_ỹ by (14)
4. α_GCV = argmin_{α∈R} G_TSF(α), with G_TSF(α) as in (26)
5. x̃ = (Λ_A^* Λ_A + α_GCV Λ_{L*L} D_ỹ^{-2q})^{-1} Λ_A^* b̃
6. x = F^* x̃
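The steps above can be sketched in NumPy on a toy problem. This is our own hedged illustration, not the authors' code: periodic boundary conditions, a synthetic Gaussian PSF, the operator-dependent filter (24) with p = 1/16, the observed image as initial approximation, and a grid search over α in place of a proper minimizer of (26).

```python
import numpy as np

def tsf(lam_A, lam_LL, b, q, y_hat, eps=1e-8,
        alphas=np.logspace(-8, 2, 80)):
    """Steps 2-6 of (27); y_hat = Fourier coefficients of an initial
    approximation. Returns the restored image and alpha_GCV."""
    bt = np.fft.fft2(b)                          # step 2: b~ = F b
    d = np.maximum(np.abs(y_hat), eps)           # step 3: D_{y~} as in (14)
    w = lam_LL / d**(2 * q)                      # Lambda_{L*L} D^{-2q}
    def gcv(alpha):                              # xi_i as in (26)
        xi = w / (np.abs(lam_A)**2 + alpha * w)
        return np.sum(xi**2 * np.abs(bt)**2) / np.sum(xi)**2
    alpha = alphas[int(np.argmin([gcv(a) for a in alphas]))]   # step 4
    xt = np.conj(lam_A) * bt / (np.abs(lam_A)**2 + alpha * w)  # step 5
    return np.real(np.fft.ifft2(xt)), alpha                    # step 6

# Toy problem: Gaussian blur of a square, mild noise
rng = np.random.default_rng(1)
n = 64
x_true = np.zeros((n, n)); x_true[16:48, 16:48] = 1.0
d1 = np.minimum(np.arange(n), n - np.arange(n))
psf = np.exp(-(d1[:, None]**2 + d1[None, :]**2) / 8.0)
psf /= psf.sum()
lam_A = np.fft.fft2(psf)                         # step 1 (one of two FFTs)
b = np.real(np.fft.ifft2(lam_A * np.fft.fft2(x_true)))
b += 1e-3 * rng.standard_normal((n, n))

# Operator-dependent filter (24) with p = 1/16
lam_LL = 1.0 - (np.abs(lam_A)**2 / np.abs(lam_A).max()**2)**(1 / 16)

x_tsf, alpha = tsf(lam_A, lam_LL, b, q=0.5, y_hat=np.fft.fft2(b))
```

Because every quantity involved is a diagonal (a 2D array of eigenvalues), steps 3-5 are plain elementwise operations, which is exactly why the total cost stays at four FFTs plus O(n).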
Here, we can choose L as in Section 2 or as in (24). Note that our TSF algorithm requires four FFTs, like generalized Tikhonov; the additional computational cost is O(n) operations for steps 3-5.

4.2 TSF with Outer Iterations (ROI)

Following the Regularization with Outer Iterations (ROI) proposed in [14], we can embed the TSF method (23) in an outer iteration. In other words, we start an iterative process that builds diagonal regularization matrices based on the current reconstruction:

    \tilde{x}^{(0)} = \tilde{y}, \qquad \tilde{x}^{(s)} = \arg\min_{\tilde{x} \in \mathbb{C}^n} \left\{ \|A F^* \tilde{x} - b\|_2^2 + \alpha^{(s)} \|\Lambda_{L^*L}^{1/2} D_{\tilde{x}^{(s-1)}}^{-q}\, \tilde{x}\|_2^2 \right\}, \qquad s = 1, 2, \dots,

with q ∈ {1/2, 1}. Clearly, the first iteration of the above scheme coincides with the TSF method. As a consequence of this embedding, we observe further improvement, especially in the first iterations, while in the later steps the error saturates and the reconstruction no longer changes appreciably. Therefore, we can stop the iterations when the relative difference between two consecutive approximations becomes smaller than a fixed tolerance. Furthermore, we add a safeguard control on the residual norm, so as to avoid unexpected behavior in further iterations due, e.g., to an inaccurate estimate of the regularization parameter.
When the blurring matrix A can be diagonalized by F as in (8), the problem (23) can be reformulated as (25), and hence the ROI algorithm is characterized by the following steps:

x = ROI(A, L, b, q, x̃^{(0)})    (28)
1. Compute Λ_A and Λ_{L*L} by two FFTs
2. b̃ = Fb
3. x̃^{(1)} = TSF(A, L, b, q, x̃^{(0)}); r₁ = ‖Λ_A x̃^{(1)} − b̃‖₂
4. s = 1
5. Repeat
   (a) Compute D_{x̃^{(s)}} by (14)
   (b) α^{(s)}_GCV = argmin_{α∈R} G_TSF(α), with G_TSF(α) as in (26)
   (c) x̃^{(s+1)} = (Λ_A^* Λ_A + α^{(s)}_GCV Λ_{L*L} D_{x̃^{(s)}}^{-2q})^{-1} Λ_A^* b̃
   (d) r_{s+1} = ‖Λ_A x̃^{(s+1)} − b̃‖₂
   (e) s = s + 1
   until ‖x̃^{(s)} − x̃^{(s-1)}‖₂ / ‖x̃^{(s-1)}‖₂ < 10^{-2} or r_{s-1} < r_s
6. x = F^* x̃^{(s)}

Again, the filter L can be chosen as in Section 2 or as in (24).
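A self-contained sketch of the outer iteration (28), with the inner diagonal solve written out inline, both stopping rules, and the same toy problem as before (again a hedged illustration under the stated assumptions, not the authors' code):

```python
import numpy as np

def tsf_step(lam_A, lam_LL, bt, q, y_hat, eps=1e-8,
             alphas=np.logspace(-8, 2, 80)):
    """Inner diagonal solve of (28): D built from y_hat, alpha by GCV (26)."""
    d = np.maximum(np.abs(y_hat), eps)
    w = lam_LL / d**(2 * q)
    def gcv(alpha):
        xi = w / (np.abs(lam_A)**2 + alpha * w)
        return np.sum(xi**2 * np.abs(bt)**2) / np.sum(xi)**2
    alpha = alphas[int(np.argmin([gcv(a) for a in alphas]))]
    return np.conj(lam_A) * bt / (np.abs(lam_A)**2 + alpha * w)

def roi(lam_A, lam_LL, b, q, x0_hat, tol=1e-2, maxit=20):
    bt = np.fft.fft2(b)
    xt = tsf_step(lam_A, lam_LL, bt, q, x0_hat)      # first TSF solve
    r = np.linalg.norm(lam_A * xt - bt)
    for _ in range(maxit):
        xt_new = tsf_step(lam_A, lam_LL, bt, q, xt)  # D from current iterate
        r_new = np.linalg.norm(lam_A * xt_new - bt)
        if r_new > r:                # safeguard: residual started growing
            break                    # keep the previous iterate
        rel = np.linalg.norm(xt_new - xt) / np.linalg.norm(xt)
        xt, r = xt_new, r_new
        if rel < tol:                # relative change below tolerance
            break
    return np.real(np.fft.ifft2(xt))

# Same toy problem as for TSF
rng = np.random.default_rng(1)
n = 64
x_true = np.zeros((n, n)); x_true[16:48, 16:48] = 1.0
d1 = np.minimum(np.arange(n), n - np.arange(n))
psf = np.exp(-(d1[:, None]**2 + d1[None, :]**2) / 8.0)
psf /= psf.sum()
lam_A = np.fft.fft2(psf)
b = np.real(np.fft.ifft2(lam_A * np.fft.fft2(x_true)))
b += 1e-3 * rng.standard_normal((n, n))
lam_LL = 1.0 - (np.abs(lam_A)**2 / np.abs(lam_A).max()**2)**(1 / 16)

x_roi = roi(lam_A, lam_LL, b, q=0.5, x0_hat=np.fft.fft2(b))
```

Note the design of the safeguard: when the residual grows, the loop exits before overwriting the iterate, so the returned reconstruction is the last one with a non-increasing residual.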
5 Numerical results

In the following we provide some numerical tests for the image deblurring problem (1). To assess the quality of the reconstructions provided by the TSF and ROI algorithms, we compare the computed solutions with those of the generalized Tikhonov method. We consider the following choices of the regularization matrix L:

1. the identity matrix I;
2. the 2D approximation of the first derivative

    L_{\mathrm{der}} = L^{(1)}_{n_2} \otimes I_{n_1} + I_{n_2} \otimes L^{(1)}_{n_1},    (29)

where

    L^{(1)}_k = \frac{1}{2} \begin{bmatrix} 1 & -1 & & \\ & \ddots & \ddots & \\ & & 1 & -1 \\ -1 & & & 1 \end{bmatrix} \in \mathbb{R}^{k \times k}

is the scaled finite difference approximation of the first derivative with periodic boundary conditions. Note that L*_der L_der can be easily computed by a 2D FFT applied to the stencil

    \frac{1}{8} \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix},

properly padded by zeros to obtain the size n₁ × n₂ of the observed image;
3. the operator-dependent matrix (24) with p = 1/16.

Table 1 Notation of the different algorithms with different regularization matrices

| Method   | L = I     | L = L_der  | L as in (24) |
|----------|-----------|------------|--------------|
| Tikhonov | TIK_id    | TIK_der    | TIK_op       |
| TSF      | TSF^q_id  | TSF^q_der  | TSF^q_op     |
| ROI      | ROI^q_id  | ROI^q_der  | ROI^q_op     |
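The padded-stencil construction in item 2 can be checked numerically: the 2D FFT of the zero-padded (and periodically wrapped) stencil gives a real, nonnegative spectrum, and applying that spectrum in the Fourier domain coincides with periodic convolution by the stencil. A minimal sketch (sizes are illustrative):

```python
import numpy as np

n1, n2 = 16, 16
S = np.zeros((n1, n2))
S[0, 0] = 4 / 8                                   # stencil center
S[0, 1] = S[0, -1] = S[1, 0] = S[-1, 0] = -1 / 8  # neighbors, wrapped around

lam = np.fft.fft2(S)           # eigenvalues of the corresponding BCCB operator
assert np.allclose(lam.imag, 0)          # symmetric stencil -> real spectrum

# Applying the operator via its eigenvalues equals periodic convolution
rng = np.random.default_rng(0)
x = rng.standard_normal((n1, n2))
y_fft = np.real(np.fft.ifft2(lam * np.fft.fft2(x)))
y_sten = (4 * x - np.roll(x, 1, 0) - np.roll(x, -1, 0)
          - np.roll(x, 1, 1) - np.roll(x, -1, 1)) / 8
assert np.allclose(y_fft, y_sten)
```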
Of course, other matrices L could be considered (see [6] and references therein), with the only constraint that L*L has to be diagonalizable by F. For the notation of the different methods with different regularization matrices we refer to Table 1, in which the subscripts id, der, op denote the choice of L as in 1., 2., 3., respectively, while the superscript q highlights the choice of the exponent of the diagonal matrices D_ỹ^{-2q} and D_{x̃^{(s)}}^{-2q} in the TSF (27) and ROI (28) algorithms, respectively.

Both the TSF and ROI algorithms need an initial guess consisting of the Fourier coefficients of an approximation of the true image. In all examples we choose the Fourier coefficients of the solution provided by TIK_der since, as shown in the following, it performs better than the other two Tikhonov methods. In all considered examples we set ε = 10^{-8} in (14). We have tested several values of ε on different problems, but the results were always comparable, showing that the TSF and ROI algorithms are robust with respect to changes of ε. The regularization parameter α is estimated by minimizing the GCV functional (13) for Tikhonov and (26) for TSF (see algorithm (27)). Assuming the true image x is known, we measure the quality of a reconstruction x̃ by computing the Peak Signal-to-Noise Ratio (PSNR), defined as

    \mathrm{PSNR} := 10 \log_{10} \frac{255^2}{\mathrm{MSE}(x, \tilde{x})},

Fig. 3 Example 1 - (a) true image 128 × 128; (b) out-of-focus PSF 17 × 17; (c) blurred and noisy image
Table 2 Example 1 - Comparison between PSNRs and regularization parameters for different methods

| Method      | PSNR (id) | α_GCV (id) | PSNR (der) | α_GCV (der) | PSNR (op) | α_GCV (op) |
|-------------|-----------|------------|------------|-------------|-----------|------------|
| TIK_β       | 65.512943 | 6.20e-005  | 67.432599  | 6.20e-005   | 66.642366 | 6.20e-005  |
| TSF^{1/2}_β | 68.822621 | 1.49e-004  | 69.825451  | 2.36e-004   | 68.613248 | 3.43e-004  |
| TSF^1_β     | 69.303950 | 1.60e-003  | 70.076095  | 2.36e-003   | 69.120838 | 3.79e-003  |

where x̃ is the current approximation of x, while MSE(x, x̃) = ‖x − x̃‖²_F / n², with ‖·‖_F the Frobenius norm. The higher the value of the PSNR, the better the reconstruction x̃. The numerical tests have been developed with Matlab R2011b on a PC with an Intel Core i5 processor and the Windows 7 operating system.
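For reference, the PSNR of the definition above can be computed in a couple of lines (a sketch assuming MSE divides the squared Frobenius error by the number of pixels; the helper name psnr is ours):

```python
import numpy as np

def psnr(x, x_rec):
    """PSNR = 10 log10(255^2 / MSE), MSE = squared Frobenius error per pixel."""
    mse = np.linalg.norm(x - x_rec, "fro")**2 / x.size
    return 10.0 * np.log10(255.0**2 / mse)

x = np.full((8, 8), 100.0)
val = psnr(x, x + 1.0)    # uniform error of 1 gray level  =>  MSE = 1
```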
Fig. 4 Example 1 - restored images
Table 3 Example 1 - PSNRs for the ROI

| Iter | ROI^{1/2}_id PSNR | α_GCV     | ROI^{1/2}_der PSNR | α_GCV     | ROI^{1/2}_op PSNR | α_GCV     |
|------|-------------------|-----------|--------------------|-----------|-------------------|-----------|
| 1    | 68.822621         | 1.49e-004 | 69.825451          | 2.36e-004 | 68.613248         | 3.43e-004 |
| 2    | 69.000796         | 1.82e-004 | 70.377213          | 2.95e-004 | 68.686781         | 4.30e-004 |
| 3    | -                 | -         | 70.545766          | 2.79e-004 | -                 | -         |
5.1 Example 1

We start with the test problem blur taken from [11], with a true image of size 128 × 128 (see Fig. 3), a symmetric out-of-focus PSF ([13]) of size 17 × 17, and noise level 5·10^{-4}. In Table 2 we compare the PSNRs for all methods described in Table 1. Observe that TIK_der is more accurate than TIK_id or TIK_op; this justifies our choice of the initial guess as the Fourier coefficients of the solution computed by that Tikhonov method. Our TSF algorithm provides a good improvement of the Tikhonov solution for all choices of the regularization matrix (compare, e.g., Fig. 4a with Fig. 4b and c). Note that also when q = 1, that is, when the diagonal matrix D_ỹ^{-1/2} in the penalty term is replaced by D_ỹ^{-1}, the TSF method works well and the corresponding PSNR is larger than the Tikhonov one.

In Table 3 we show the effect of embedding the TSF method in an outer iteration. The ROIs that stop at the first iteration are not reported, since they coincide with the TSF algorithm. From Table 3 we deduce that TSF^1_id, TSF^1_der and TSF^1_op do not benefit from such an embedding, while ROI^{1/2}_id, ROI^{1/2}_der and ROI^{1/2}_op improve the quality of the corresponding TSF reconstruction in 2-3 iterations. Observe that TSF^{1/2}_der provides a PSNR which is larger than the PSNRs corresponding to the last iteration of ROI^{1/2}_id and ROI^{1/2}_op, and that the best reconstruction is given by ROI^{1/2}_der.
Fig. 5 Example 2 - (a) true image 256 × 256; (b) GaussianBlur420 PSF 256 × 256; (c) blurred and noisy image
Table 4 Example 2 - Comparison between PSNRs and regularization parameters for different methods

| Method      | PSNR (id) | α_GCV (id) | PSNR (der) | α_GCV (der) | PSNR (op) | α_GCV (op) |
|-------------|-----------|------------|------------|-------------|-----------|------------|
| TIK_β       | 24.798034 | 5.36e-005  | 25.192810  | 5.36e-005   | 24.958256 | 5.36e-005  |
| TSF^{1/2}_β | 25.942006 | 5.36e-005  | 26.108259  | 5.36e-005   | 25.994444 | 5.36e-005  |
| TSF^1_β     | 26.288431 | 5.36e-005  | 26.387301  | 5.36e-005   | 26.315813 | 5.36e-005  |
Compare, e.g., Fig. 4a, b and c with Fig. 4d. This could be ascribed to the fact that for q = 1/2 the penalty term approximates the ℓ1-norm of the components x̃_HF of x̃ in the high frequencies, that is,

    \|\Lambda_{L^*_{\mathrm{der}} L_{\mathrm{der}}}^{1/2} D_{\tilde{x}^{(s)}}^{-1/2}\, \tilde{x}\|_2^2 \approx \|\tilde{x}_{\mathrm{HF}}\|_1.
Fig. 6 Example 2 - restored images
Table 5 Example 2 - PSNRs for the ROI

| Iter | ROI^{1/2}_id PSNR | α_GCV     | ROI^{1/2}_der PSNR | α_GCV     | ROI^{1/2}_op PSNR | α_GCV     |
|------|-------------------|-----------|--------------------|-----------|-------------------|-----------|
| 1    | 25.942006         | 5.36e-005 | 26.108259          | 5.36e-005 | 25.994444         | 5.36e-005 |
| 2    | 26.187353         | 5.36e-005 | 26.385167          | 5.36e-005 | 26.244349         | 5.36e-005 |
| 3    | 26.299611         | 5.36e-005 | 26.521393          | 5.36e-005 | 26.361031         | 5.36e-005 |
| 4    | -                 | -         | 26.591178          | 5.36e-005 | -                 | -         |

| Iter | ROI^1_id PSNR | α_GCV     | ROI^1_der PSNR | α_GCV     | ROI^1_op PSNR | α_GCV     |
|------|---------------|-----------|----------------|-----------|---------------|-----------|
| 1    | 26.288431     | 5.36e-005 | 26.387301      | 5.36e-005 | 26.315813     | 5.36e-005 |
| 2    | 26.568309     | 5.36e-005 | 26.645156      | 5.36e-005 | 26.590811     | 5.36e-005 |
| 3    | 26.634729     | 5.36e-005 | -              | -         | 26.594446     | 1.03e-001 |
5.2 Example 2

We consider the cameraman deblurring problem of size 256 × 256 in Fig. 5. The nonsymmetric GaussianBlur420 PSF of size 256 × 256 comes from the Restore Tools Matlab package ([19]) and the noise level is 10^{-5}. As shown in Table 4, also for this example the TSF method enhances the quality of the reconstruction of the TIK_der method, independently of the choice of the regularization matrix (see, e.g., Fig. 6a in comparison with Fig. 6b). In particular, TSF^1_id, TSF^1_der and TSF^1_op provide good reconstructions, and the additional ROIs entail a further improvement (refer to Table 5 and compare Fig. 6b with Fig. 6d). Note that the noise level for this example is very low and that the PSF is Gaussian. Moreover, observe that embedding TSF^{1/2}_der in a ROI gives rise to 4 reconstruction-improving
Fig. 7 Example 3 - (a) true image 256 × 256; (b) Gaussian PSF 256 × 256; (c) blurred and noisy image
Table 6 Example 3 - Comparison between PSNRs and regularization parameters for different methods

| Method      | PSNR (id) | α_GCV (id) | PSNR (der) | α_GCV (der) | PSNR (op) | α_GCV (op) |
|-------------|-----------|------------|------------|-------------|-----------|------------|
| TIK_β       | 68.939829 | 5.36e-005  | 69.660417  | 5.36e-005   | 69.137066 | 5.36e-005  |
| TSF^{1/2}_β | 69.755546 | 5.36e-005  | 69.874618  | 1.94e-004   | 69.808882 | 5.36e-005  |
| TSF^1_β     | 69.840102 | 2.91e-004  | 69.836100  | 4.44e-003   | 69.836237 | 5.24e-004  |
iterations (see Fig. 6c), and the PSNR obtained at the last iteration is comparable with the ones corresponding to ROI^1_id, ROI^1_der and ROI^1_op.

5.3 Example 3

In this example we consider the grain image of size 256 × 256, using a Gaussian PSF with standard deviation equal to 5 and setting the noise level to 10^{-3} (see Fig. 7). Table 6 gives evidence of the better accuracy of the TSF method with respect to TIK_der, especially when the penalty term is given by

    \|\Lambda_{L^*_{\mathrm{der}} L_{\mathrm{der}}}^{1/2} D_{\tilde{y}}^{-1/2}\, \tilde{x}\|_2^2,

that is, in the case of TSF^{1/2}_der. Furthermore, as shown in Table 7, ROI^{1/2}_id and ROI^{1/2}_op perform 3 and 4 iterations, respectively, improving the quality of the corresponding TSF reconstruction. In Fig. 8 we focus on a detail of the restored images, which highlights how the restorations provided by the TSF and the ROI are richer in detail than the one obtained with TIK_der.
5.4 Example 4

As a final example, we consider the jetplane deblurring problem of size 512 × 512 in Fig. 9, taking as PSF the nonsymmetric diagonal motion blur of size 29 × 29 and noise level 10^{-4}. As in Example 3, in order to point out the effectiveness of the TSF method, we focus on a detail of the blurred jetplane image. Note that the number 01568 on the tail of the jetplane is definitely more readable in Fig. 10b, c and d than in Fig. 10a. This behavior is confirmed by the PSNRs shown in Table 8.

Table 7 Example 3 - PSNRs for the ROI

| Iter | ROI^{1/2}_id PSNR | α_GCV     | ROI^{1/2}_op PSNR | α_GCV     |
|------|-------------------|-----------|-------------------|-----------|
| 1    | 69.755546         | 5.36e-005 | 69.808882         | 5.36e-005 |
| 2    | 69.778181         | 5.36e-005 | 69.840942         | 5.36e-005 |
| 3    | 69.787079         | 5.36e-005 | 69.849724         | 5.36e-005 |
| 4    | -                 | -         | 69.852618         | 5.36e-005 |
Numer Algor
(a)
(b)
(c)
(d)
Fig. 8 Example 3 - restored images
(a)
(b)
(c)
Fig. 9 Example 4 - a true image 512 × 512; b PSF 29 × 29 with center in pixel (12,11); c blurred and noisy image
Numer Algor
(a)
(b)
(c)
(d)
Fig. 10 Example 4 - restored images
Table 8 Example 4 - Comparison between PSNRs and regularization parameters for different methods β =id
β =der
β =op
Method
PSNR
αGCV
PSNR
αGCV
PSNR
αGCV
TIKβ
30.123900
5.52e-005
32.523969
5.52e-005
30.882375
5.52e-005
TSFβ2
33.861210
7.78e-003
34.230730
2.76e-002
33.811442
1.49e-002
TSF1β
33.941120
1.47e+001
34.210584
3.81e+001
33.897847
2.87e+001
1
Table 9 Example 4 - PSNRs for the ROI 1
1
ROIid2
1
2 ROIder
2 ROIop
Iter
PSNR
αGCV
PSNR
αGCV
PSNR
αGCV
1
33.861210
7.78e-003
34.230730
2.76e-002
33.811442
1.49e-003
2
33.823830
1.41e-002
34.353391
4.44e-002
33.755098
2.75e-002
Numer Algor
also for this example, a good choice of the penalty term is the 2-norm approximation of xHF 1 provided by the discretization of the first derivative (29). Furthermore, as shown in Table 9, this is the only case in which the ROI gives a slight improvement performing 2 iterations (look at Fig. 10d), while in the other cases the second iteration fails in enhancing the approximation obtained with the TSF method at first iteration.
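Throughout the examples the regularization parameter αGCV is estimated by generalized cross validation, which is cheap here because the system is diagonal: the filter factors f = |S|²/(|S|² + αW) are available entrywise. The sketch below, a minimal Python illustration and not the paper's implementation, simply scans a grid of candidate α values; the grid and the unit weight used in the test are illustrative assumptions.

```python
import numpy as np

def gcv_alpha(S, Y, W, alphas):
    # GCV for the diagonal system (|S|^2 + alpha*W) X = conj(S) Y:
    # GCV(alpha) = || (1 - f) * Y ||^2 / (sum(1 - f))^2,
    # with filter factors f = |S|^2 / (|S|^2 + alpha*W).
    s2 = np.abs(S) ** 2
    best_alpha, best_gcv = None, np.inf
    for alpha in alphas:
        f = s2 / (s2 + alpha * W)
        residual = (1.0 - f) * Y   # (I - A A_alpha^+) y in the Fourier domain
        gcv = np.sum(np.abs(residual) ** 2) / np.sum(1.0 - f) ** 2
        if gcv < best_gcv:
            best_alpha, best_gcv = alpha, gcv
    return best_alpha
```

Each candidate α costs one elementwise pass over the Fourier coefficients, so scanning even a fine grid is negligible compared with the FFTs.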
6 Conclusions and future work

Exploiting the sparsity of the Fourier coefficients, we extended the data-based Tikhonov regularization proposed in [14], mainly suitable for sparse images, to non-sparse ones as well. The arising diagonal linear system allows very fast computations, and the regularization parameter can be efficiently estimated by the GCV. The proposed examples show that the TSF method is more accurate than the Tikhonov method. Moreover, unlike the Tikhonov method, the TSF method can be embedded in an outer iteration that leads to further improvement. Future work will investigate the treatment of boundary artifacts by imposing appropriate boundary conditions, like reflective, antireflective, or other boundary conditions that lead to a matrix A that can be diagonalized by fast transforms [1, 5, 20]. Another strategy to deal with the discrete Fourier transform with reduced boundary artifacts is based on the enlargement of the domain [2, 21]. In detail, the observed image is extended with a symmetric or antisymmetric pad to obtain a periodic image of size at least 2n1 × 2n2, and then the computed solution is obtained by FFT.
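The symmetric padding step of the domain-enlargement strategy can be sketched as follows: reflecting the n1 × n2 observed image into a 2n1 × 2n2 array makes its periodic extension continuous across the boundary, so the FFT-based solver can be applied with reduced boundary artifacts. A minimal Python sketch (the helper name symmetric_pad is ours):

```python
import numpy as np

def symmetric_pad(img):
    # Reflect an n1 x n2 image into a 2*n1 x 2*n2 array whose periodic
    # extension has no jumps across the boundary.
    top = np.hstack([img, np.fliplr(img)])   # mirror left-right
    return np.vstack([top, np.flipud(top)])  # mirror top-bottom
```

After deblurring the padded image by FFT, the restoration is read off from the original n1 × n2 region.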
References

1. Aricò, A., Donatelli, M., Nagy, J.G., Serra-Capizzano, S.: The anti-reflective transform and regularization by filtering. Numerical Linear Algebra in Signals, Systems, and Control. Lect. Notes Electr. Eng. 80, 1–21 (2011)
2. Bai, Z.J., Donatelli, M., Serra-Capizzano, S.: Fast preconditioners for total variation deblurring with anti-reflective boundary conditions. SIAM J. Matrix Anal. Appl. 32(3), 785–805 (2011)
3. Björck, Å.: Numerical Methods for Least Squares Problems. SIAM, Philadelphia (1996)
4. Cai, J.F., Osher, S., Shen, Z.: Linearized Bregman iterations for frame-based image deblurring. SIAM J. Imaging Sci. 2, 226–252 (2009)
5. Donatelli, M.: Fast transforms for high order boundary conditions in deconvolution problems. BIT 50(3), 559–576 (2010)
6. Donatelli, M., Neuman, A., Reichel, L.: Square regularization matrices for large linear discrete ill-posed problems. Numer. Linear Algebra Appl. 19(6), 896–913 (2012)
7. Donatelli, M., Reichel, L.: Square smoothing regularization matrices with accurate boundary conditions. J. Comput. Appl. Math. 272, 334–349 (2014)
8. Engl, H.W., Hanke, M., Neubauer, A.: Regularization Methods for Inverse Problems. Kluwer, Dordrecht (1996)
9. Figueiredo, M.A.T., Nowak, R.D.: An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 12, 906–916 (2003)
10. Gazzola, S., Nagy, J.G.: Generalized Arnoldi-Tikhonov methods for sparse reconstruction. SIAM J. Imaging Sci. 36, 225–247 (2014)
11. Hansen, P.C.: Regularization Tools: A Matlab package for analysis and solution of discrete ill-posed problems. Numer. Algorithms 6, 1–35 (1994)
12. Hansen, P.C.: Rank-Deficient and Discrete Ill-Posed Problems. SIAM, Philadelphia (1998)
13. Hansen, P.C., Nagy, J.G., O'Leary, D.P.: Deblurring Images: Matrices, Spectra, and Filtering. SIAM, Philadelphia (2006)
14. Huckle, T., Sedlacek, M.: Data based regularization for discrete deconvolution problems. BIT 53, 459–473 (2013)
15. Huckle, T., Sedlacek, M.: Data based regularization matrices for the Tikhonov-Phillips regularization. Proc. Appl. Math. Mech. 12, 643–644 (2012)
16. Huckle, T., Sedlacek, M.: Smoothing and regularization with modified sparse approximate inverses. J. Electr. Comput. Eng., 16 pages (2010)
17. Huckle, T., Sedlacek, M.: Tikhonov-Phillips regularization with operator dependent seminorms. Numer. Alg. 60, 339–35 (2012)
18. Mallat, S.: A Wavelet Tour of Signal Processing. Academic Press, California (1999)
19. Nagy, J.G.: Restore Tools Matlab package. http://www.mathcs.emory.edu/~nagy/RestoreTools/
20. Ng, M., Chan, R., Tang, W.C.: A fast algorithm for deblurring models with Neumann boundary conditions. SIAM J. Sci. Comput. 21, 851–866 (1999)
21. Reeves, S.J.: Fast image restoration without boundary artifacts. IEEE Trans. Image Process. 14, 1448–1453 (2005)
22. Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60, 259–268 (1992)
23. Serra-Capizzano, S.: A note on anti-reflective boundary conditions and fast deblurring models. SIAM J. Sci. Comput. 25, 1307–1325 (2003)
24. Vogel, C.R.: Computational Methods for Inverse Problems. SIAM, Philadelphia (2002)
25. Wohlberg, B., Rodríguez, P.: An iteratively reweighted norm algorithm for minimization of total variation functionals. IEEE Signal Process. Lett. 14, 948–951 (2007)
26. Wright, S.J., Figueiredo, M.A.T., Nowak, R.D.: Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 57, 2479–2493 (2009)