J Sci Comput DOI 10.1007/s10915-016-0282-x
Low Rank Prior and Total Variation Regularization for Image Deblurring

Liyan Ma (1) · Li Xu (2) · Tieyong Zeng (3,4)
Received: 18 November 2015 / Revised: 17 August 2016 / Accepted: 1 September 2016 © Springer Science+Business Media New York 2016
Abstract  Similar image patches should have similar underlying structures; thus the matrix constructed by stacking similar patches together has low rank. Based on this fact, nuclear norm minimization, the convex relaxation of low rank minimization, leads to good denoising results. Recently, weighted nuclear norm minimization has been applied to image denoising, yielding state-of-the-art results. In this paper, we further study the weighted nuclear norm minimization problem for the general image recovery task. For weights in an arbitrary order, we prove that this minimization problem has a unique global optimal solution in closed form. Incorporating this idea with the celebrated total variation regularization, we then investigate the image deblurring problem. Numerical experiments clearly illustrate that the proposed algorithms achieve competitive performance.

Keywords  Image deblurring · Low rank · Nuclear norm · Variational method
The research is supported by the National Natural Science Foundation of China (Nos. 61402462, 11271049, 11201455, 11671383, 61503202), RGC 211911, 12302714, and RFGs of HKBU.
Corresponding author: Tieyong Zeng ([email protected])
Liyan Ma ([email protected])
Li Xu ([email protected])

1 Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, China
2 LSEC, Institute of Computational Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
3 Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong
4 HKBU Institute of Research and Continuing Education, Shenzhen Virtual University Park, Shenzhen 518057, China
1 Introduction

Image restoration, such as denoising and deblurring, is the problem of recovering a sharp image from a degraded input; it is a classical inverse problem in imaging sciences. Image denoising and deblurring are important pre-processing steps in many imaging applications. The corresponding image formation model under Gaussian noise can be written as

$$g = Au + \eta, \tag{1}$$

where u is the ideal image, g is the degraded observation, η is Gaussian noise with zero mean and standard deviation σ, and A is a known linear operator. If A = I, model (1) is a denoising problem; if A is a convolution operator, model (1) is a deblurring problem.

Image restoration is an ill-posed problem, and many approaches focus on how to recover the underlying image from the observed degraded one. The most popular approach is the regularization-based method, which imposes a constraint on the solution space. A large class of denoising and deblurring algorithms are built on the celebrated total variation (TV)-type regularization term [11,43,50], which is pixel based and has good edge-preserving properties. To take advantage of the multi-scale property of images, many image restoration algorithms [12,18] are based on wavelet-type filters, which can extract the multi-scale information of an image. In most cases, wavelet-based methods achieve better recovery quality than TV-based methods. However, neither approach fully exploits all properties of images.

In recent years, patch-based representations of images have drawn much attention. An image has multi-scale and multi-level structures which cannot be well characterized by pixel-based methods. An image also exhibits a self-similarity property [20], and one-dimensional feature extraction cannot capture the redundancy of small patches within the same image. In contrast, patch-based methods exhibit excellent results in many image applications [20,22,40].
Based on the self-similarity property of images, Buades, Coll and Morel [3] proposed the non-local means filter to remove Gaussian noise. The basic idea is to approximate a pixel value by a weighted mean of the other pixels whose surrounding patches have similar structures (measured by the L2 distance between patches) to that of the current one. Non-local means is very effective at removing noise while preserving image structures. To better preserve sharp edges, Maleki et al. [38] proposed anisotropic non-local means. Gilboa and Osher [25] proposed non-local TV regularization, which combines non-local means and TV regularization in a variational framework. Lou et al. [32] and Zhang et al. [49] studied the image deblurring problem based on non-local TV regularization.

The purpose of a regularization term is to impose a certain prior on images. Besides pre-designed regularization, there are approaches that learn the image prior from image patches. Cho et al. [14] proposed an algorithm to learn a heavy-tailed gradient distribution of natural images which adapts to the underlying texture. Zoran and Weiss [51] presented a patch-based prior by learning a Gaussian mixture model from image patches. These learned priors perform well on image denoising and deblurring. Another kind of patch-based prior is sparse representation over a learned dictionary, a very powerful tool in the computer vision field. Elad and Aharon [21] proposed the K-SVD denoising algorithm: using the image of interest to learn an over-complete dictionary, K-SVD denoises each image patch by sparsely representing it as a linear combination of atoms from the learned dictionary. Inspired by the K-SVD algorithm, Lou et al. [31] pointed out that the coefficients of the clean image sparsely represented in the clean basis are the same as those of the blurred image sparsely represented in the blurred basis. Based on this
idea, Lou et al. [31] generalized the K-SVD denoising algorithm to image deblurring under Gaussian noise. Later, Hu et al. [28] proposed a blind deblurring algorithm based on the deblurring algorithm of [31]. Taking the K-SVD representation as a regularization term, Ma et al. [36,37] studied the deblurring problem under impulse noise and Poisson noise, respectively. Despite its good performance, K-SVD learns an over-complete dictionary for which it is not easy to find the global optimal solution, and the learning is computationally expensive [1,21]. Inspired by dictionary learning methods and the discrete tight frame [42], Cai et al. [6] constructed a data-driven tight frame for Gaussian noise removal; the computational time of learning an adaptive tight frame from an image is much less than that of learning an over-complete dictionary. Later, using a non-local scheme, Quan et al. [41] constructed a data-driven multi-scale non-local tight frame and applied it to the image deblurring problem; their algorithm has shown promising behavior.

By applying collaborative filtering on the stack of similar image patches, the BM3D denoising algorithm achieves state-of-the-art results [15]. To generalize BM3D to the deblurring problem, Dabov et al. [16] proposed the BM3DDEB deblurring algorithm via a modification of the BM3D filter; BM3DDEB is a two-stage non-iterative algorithm. Danielyan et al. [17] proposed a variational iterative method for image deblurring based on BM3D frames. Similar to BM3D, the denoising algorithms [27,29] based on low rank minimization also remove noise from the stack of similar patches; the basic idea is that the matrix re-arranged from the similar patches has low rank. Ji et al. [29] studied the video denoising problem based on nuclear norm minimization. Later, Gu et al.
[27] proposed the weighted nuclear norm minimization (WNNM) based image denoising algorithm, which achieves state-of-the-art results. These successful works show that methods based on low rank minimization can capture the underlying multi-scale structure and provide a good representation for images. In this paper, we first analyze image denoising based on WNNM, then study the deblurring problem based on WNNM and TV regularization.

The rest of this paper is organized as follows. In Sect. 2, we review the low rank minimization problem and the WNNM denoising method. In Sect. 3, we state our theoretical result with its proof and analyze the WNNM denoising method. The proposed algorithm for image deblurring under Gaussian noise is introduced in Sect. 4; as a generalization, a modified model is proposed for deblurring under salt-and-pepper noise. In Sect. 5, we give numerical experiments to demonstrate the performance of the proposed algorithms. Section 6 concludes the paper.

Our main contributions are: we give the closed-form solution for low rank minimization with the weighted nuclear norm when the weights are in an arbitrary order; we propose a modification of the WNNM denoising method, and the new method (WNNMG) converges faster; we extend weighted nuclear norm regularization to the image deblurring problem.
2 Low Rank Minimization with Weighted Nuclear Norm

Since the original rank minimization problem is non-convex [8,47], it is not easy to obtain its solution. To make a non-convex problem tractable, a convex relaxation technique is often adopted. Fazel [23] proved that the nuclear norm is the tightest convex relaxation of the rank function. Recently, Cai et al. [4] built the relationship
between the nuclear norm and the singular value thresholding operator. More specifically, Cai et al. [4] proved that the solution of the following nuclear norm minimization problem

$$\min_X \frac{1}{2}\|Y - X\|_F^2 + \lambda \|X\|_* \tag{2}$$

is given by X* = U S_λ(Σ) V^T, where λ > 0 and ‖·‖_* denotes the nuclear norm. Here Y ∈ R^{m×n} (without loss of generality we assume m ≥ n), Y = UΣV^T is the singular value decomposition (SVD) of Y with

$$\Sigma = \begin{bmatrix} \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n) \\ 0 \end{bmatrix},$$

and the singular value shrinkage operator is defined as S_λ(Σ) = max(0, Σ − λ). The minimization problem (2) is strictly convex. Cai et al. [4] pointed out that the singular value shrinkage operator is in fact the proximal operator of the nuclear norm. These results have been widely generalized to many nuclear norm minimization based problems, such as matrix completion [4], subspace clustering [30] and video denoising [29].

Although the nuclear norm regularization has been widely adopted for rank-constrained problems, it cannot accurately describe the problems arising in some applications. To remedy this disadvantage, more general nuclear norm regularizations have been proposed recently [13,27,33,34]. The weighted nuclear norm minimization (WNNM) problem proposed in [13,27] uses a smaller value to penalize a larger singular value. The underlying idea is that a larger singular value is more important and should shrink less, while the original nuclear norm minimization penalizes all singular values with the same value λ. The weighted nuclear norm of a matrix X is defined as a weighted sum of its singular values:

$$\|X\|_{w,*} = \sum_{i=1}^n w_i \sigma_i(X), \tag{3}$$
where σ₁(X) ≥ σ₂(X) ≥ ⋯ ≥ σ_n(X) ≥ 0 are the singular values, w = [w₁, w₂, …, w_n] is the weight vector, and w_i ≥ 0. With the weighted nuclear norm regularization, the low rank minimization problem studied in [13,27] is

$$\min_X \frac{1}{2}\|Y - X\|_F^2 + \|X\|_{w,*}. \tag{4}$$

As a nice convex relaxation of the rank-constrained problem, the nuclear norm regularization has attracted great attention recently. However, the classical nuclear norm regularization ignores possible prior information on the singular values and shrinks each singular value equally. This seriously limits its flexibility and its capability to handle many practical applications. Indeed, as pointed out in [46], in various machine vision tasks the large singular values of the data matrix are often much more important than the smaller ones, as they are related to the principal components of the data. To make the nuclear norm regularization more powerful, one should evidently assign different weights to different singular values. In this regard, it is extremely important to find an explicit
solution for Problem (4) when the weights are in an arbitrary order. This is one of our main contributions.

Chen et al. [13] found the global optimal solution of (4) under the order constraint 0 ≤ w₁ ≤ w₂ ≤ ⋯ ≤ w_n. They proved that the global optimal solution of (4) is given by X* = U D V^T, where

$$D = \begin{bmatrix} \operatorname{diag}(d_1, d_2, \ldots, d_n) \\ 0 \end{bmatrix}, \tag{5}$$

and d_i = max(σ_i − w_i, 0), i = 1, …, n. Further, if all the nonzero singular values of Y are distinct, then X* is the unique optimal solution. For the more general case, Xie et al. [46] showed that (d₁, d₂, …, d_n) is the solution of the following convex optimization problem:

$$\min_{d_1,\ldots,d_n} \sum_{i=1}^n \frac{1}{2}(d_i - \sigma_i)^2 + w_i d_i, \quad \text{s.t.}\ d_1 \ge d_2 \ge \cdots \ge d_n \ge 0. \tag{6}$$

They also verified that the global optimal solution of problem (6) has a closed form when the weights satisfy 0 ≤ w₁ ≤ w₂ ≤ ⋯ ≤ w_n.

Based on the above theoretical results, Gu et al. [27] applied WNNM to the Gaussian noise removal problem and achieved state-of-the-art results. When A = I in model (1), the noisy image f is given by f = u + η. It is well known that images have a self-similarity property, which has been explored in the recent literature [3,20]. Given a patch x_j located at position j in the image u, we can search for all similar patches in u itself via block matching and stack them into a matrix X_j. By the non-local similarity of images, all column vectors in X_j have similar image structures, which means that the rank of X_j is low. Thus we can estimate the clean image with a low rank constraint; this idea was also used in video denoising [29]. At each location, we can rewrite the image degradation model as

$$Y_j = X_j + N_j,$$

where Y_j is constructed as above from f, and X_j and N_j are the corresponding matrices constructed from u and η. The estimate of X_j is then obtained by solving the minimization problem

$$\min_{X_j} \frac{1}{2}\|Y_j - X_j\|_F^2 + \|X_j\|_{w,*}.$$

The whole clean image is then estimated by aggregating all the denoised patches. The setting of the weights w is very important in the WNNM denoising method. Gu et al. [27] defined the weight for the ith singular value of X_j as

$$w_i = \frac{c\sqrt{N_{sp}}}{\sigma_i(X_j) + \varepsilon},$$

where c > 0 is a constant and N_{sp} is the number of similar patches in Y_j. Hence the weights satisfy 0 ≤ w₁ ≤ w₂ ≤ ⋯ ≤ w_n, and the recovered image can be obtained via the singular value shrinkage.
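As a concrete sketch of this denoising step, the weighted shrinkage with the closed form d_i = max(σ_i − w_i, 0) (valid here since these weights are non-decreasing) can be implemented as follows; the constant c and the synthetic test data are our choices, not values from the paper:

```python
import numpy as np

def wnnm_shrinkage(Y, c=2.0, eps=1e-8):
    """One WNNM step on a patch stack Y: weighted singular value shrinkage.

    Weights w_i = c*sqrt(N_sp)/(sigma_i + eps) are non-decreasing in i
    (singular values are sorted in non-increasing order), so the closed
    form X = U diag(max(sigma_i - w_i, 0)) V^T applies.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    n_sp = Y.shape[1]                    # number of similar patches in the stack
    w = c * np.sqrt(n_sp) / (s + eps)    # smaller penalty on larger singular values
    d = np.maximum(s - w, 0.0)           # weighted shrinkage of each singular value
    return U @ np.diag(d) @ Vt

# a rank-1 "clean" stack plus noise: shrinkage should suppress the noise modes
rng = np.random.default_rng(0)
clean = np.outer(rng.standard_normal(64), np.ones(16))
noisy = clean + 0.1 * rng.standard_normal((64, 16))
denoised = wnnm_shrinkage(noisy)
print(np.linalg.norm(denoised - clean), np.linalg.norm(noisy - clean))
```

On this toy stack, the shrunk matrix is rank one and markedly closer to the clean stack than the noisy input, which is exactly the low-rank prior at work.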
3 Analysis of WNNM

In this section, we first give some theoretical analysis of WNNM, then present a modification of the WNNM denoising method. Rewriting the minimization (6), we get

$$\min_{d_1,\ldots,d_n} \sum_{i=1}^n \frac{1}{2}\bigl(d_i - (\sigma_i - w_i)\bigr)^2, \quad \text{s.t.}\ d_1 \ge d_2 \ge \cdots \ge d_n \ge 0. \tag{7}$$
Actually, the order of {(σ_i − w_i)}_i is the key condition that determines whether the minimization problem (6) has a closed-form solution; this conclusion can be easily proved following the proof in [46]. Next, we find the solution of the minimization (6) when {(σ_i − w_i)}_i are in an arbitrary order. By this, we obtain the closed-form solution of the low rank minimization problem (4). For brevity, we denote a_i = σ_i − w_i. The result is stated in the following theorem.

Theorem 1  Assume that (a₁, …, a_n) ∈ R^n and n ∈ N. Then the minimization problem

$$\min_{d_1 \ge \cdots \ge d_n \ge 0} \sum_{i=1}^n (d_i - a_i)^2 \tag{8}$$

has a unique global optimal solution in closed form. Moreover, for any k ∈ N, let M_k = (d₁, …, d_k) be the solution to the kth problem of (8) (that is, (8) with n = k); in particular, denote by d_k* = d_k the last component of M_k, and set d₀* = +∞. If

$$s_0 := \sup\Bigl\{ s \in \mathbb{N} \;\Big|\; d_{s-1}^* \ge \frac{\sum_{k=s}^n a_k}{n-s+1},\ s = 1, \ldots, n \Bigr\}, \tag{9}$$

then the solution M_n = (d₁, …, d_n) to (8) satisfies (d₁, …, d_{s₀−1}) = M_{s₀−1} and

$$d_{s_0} = \cdots = d_n = \max\Bigl( \frac{\sum_{k=s_0}^n a_k}{n - s_0 + 1},\, 0 \Bigr). \tag{10}$$

Furthermore, the global solution of (4) is given by (5) with

$$D = \begin{bmatrix} \operatorname{diag}(d_1, d_2, \ldots, d_n) \\ 0 \end{bmatrix}.$$

Proof  The uniqueness is clear since the objective function in (8) is strictly convex. We employ induction to prove the existence result. Firstly, for n = 1, (8) has the unique solution d₁ = max{a₁, 0}. Assume that for any k ≤ n − 1, the kth problem of (8) has a unique solution M_k = (d₁, …, d_k). To solve the nth problem (8), we compare d*_{n−1} and a_n as follows:

– if d*_{n−1} ≥ a_n, then d_n = max{a_n, 0} and (8) is solvable; moreover, the solution M_n satisfies (10) with s₀ = n;
– if d*_{n−1} < a_n, we take d = d_{n−1} = d_n and have to solve the following reduced (n − 1)th problem

$$\min_{d_1 \ge \cdots \ge d_{n-2} \ge d \ge 0} \sum_{i=1}^{n-2} (d_i - a_i)^2 + 2\Bigl(d - \frac{a_{n-1} + a_n}{2}\Bigr)^2. \tag{11}$$
123
For the latter case, we have to solve (11). To do so, we compare d*_{n−2} and (a_{n−1} + a_n)/2 in a similar way as above:

– if d*_{n−2} ≥ (a_{n−1} + a_n)/2, then d_{n−1} = d_n = d = max{(a_{n−1} + a_n)/2, 0} and (8) is solvable; moreover, the solution M_n satisfies (10) with s₀ = n − 1;
– if d*_{n−2} < (a_{n−1} + a_n)/2, we take d = d_{n−2} = d_{n−1} = d_n and have to solve the following reduced (n − 2)th problem

$$\min_{d_1 \ge \cdots \ge d_{n-3} \ge d \ge 0} \sum_{i=1}^{n-3} (d_i - a_i)^2 + 3\Bigl(d - \frac{a_{n-2} + a_{n-1} + a_n}{3}\Bigr)^2. \tag{12}$$

We repeat the above arguments and stop at the following s₀th problem

$$\min_{d_1 \ge \cdots \ge d_{s_0-1} \ge d \ge 0} \sum_{i=1}^{s_0-1} (d_i - a_i)^2 + (n - s_0 + 1)\Bigl(d - \frac{\sum_{k=s_0}^n a_k}{n - s_0 + 1}\Bigr)^2, \tag{13}$$

where d = d_{s₀} = ⋯ = d_n and s₀ is defined in (9). Since d*_{s₀−1} ≥ (Σ_{k=s₀}^n a_k)/(n − s₀ + 1), we have d_{s₀} = ⋯ = d_n = d = max((Σ_{k=s₀}^n a_k)/(n − s₀ + 1), 0). Then (8) is solvable and the solution M_n satisfies (10). This completes the proof of the theorem. □

As introduced in Sect. 2, the WNNM denoising method uses singular value shrinkage to obtain the recovered image. Actually, singular value shrinkage applies the common soft shrinkage rule to every singular value. However, contrary to the hard shrinkage estimator, the soft shrinkage estimator tends to have smaller variance but larger bias, because it shrinks all the entries. As a good compromise, the non-negative Garrote threshold shrinkage function [2,24] was proposed. For τ > 0 and s > 0, the general non-negative Garrote threshold function [48] is defined as

$$GS_\tau(t) = t \max\Bigl(0,\, 1 - \frac{\tau^s}{|t|^s}\Bigr).$$

We use this function to replace the soft shrinkage in the WNNM denoising algorithm (the resulting algorithm is denoted WNNMG). In fact, this modification can be viewed as another assignment of the weights. For example, applying the general non-negative Garrote threshold function to the ith singular value of Y_j, we get

$$GS_{w_i}(\sigma_i(Y_j)) = \sigma_i(Y_j) \max\Bigl(0,\, 1 - \frac{(w_i)^s}{\sigma_i(Y_j)^s}\Bigr) = \max\Bigl(0,\, \sigma_i(Y_j) - \frac{(w_i)^s}{\sigma_i(Y_j)^{s-1}}\Bigr).$$

Thus the new weight (w_i)^s / (σ_i(Y_j))^{s−1} depends on both σ_i(Y_j) and σ_i(X_j), and the order of the weights does not change. In this paper, we set s = 2.

To demonstrate the behavior of WNNMG, we give a simple experiment. We apply WNNM and WNNMG to the noisy Cameraman image, and use the difference between the estimated result u and the exact solution u* as the measure, similar to [26]. In Fig. 1, we give the error log(‖u − u*‖₂) at each iteration. Although both algorithms achieve the same final result, WNNMG converges faster in the first steps.
[Fig. 1 Convergence behavior of WNNM and WNNMG on image Cameraman corrupted by Gaussian noise with standard deviation σ = 20]
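For reference, the soft and non-negative Garrote shrinkage rules can be compared directly; note how the Garrote rule leaves large entries nearly untouched (smaller bias), which is the behavior behind the faster early convergence observed above. The parameter values here are illustrative choices of ours:

```python
import numpy as np

def soft(t, tau):
    """Soft shrinkage: shifts every surviving entry by the full threshold."""
    return np.sign(t) * np.maximum(np.abs(t) - tau, 0.0)

def garrote(t, tau, s=2):
    """General non-negative Garrote: GS_tau(t) = t * max(0, 1 - (tau/|t|)^s)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    nz = np.abs(t) > 0
    out[nz] = t[nz] * np.maximum(0.0, 1.0 - (tau / np.abs(t[nz])) ** s)
    return out

t = np.array([0.5, 1.5, 5.0, 50.0])
print(soft(t, 1.0))     # large entries lose the full threshold 1.0
print(garrote(t, 1.0))  # large entries are almost untouched (smaller bias)
```

Both rules kill the entry 0.5, but the Garrote maps 50.0 to 49.98 while soft shrinkage maps it to 49.0.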
4 The Proposed Algorithm for Image Deblurring

In this section, we extend the nuclear norm regularization to the image deblurring problem. First, we give an algorithm for deblurring under Gaussian noise; as a generalization, we then present an algorithm for deblurring under salt-and-pepper noise.
4.1 Deblurring Under Gaussian Noise

The proposed model is

$$\min_u \sum_{j \in P} \|R_j u\|_{w,*} + \|\nabla u\|_1 + \frac{\lambda}{2}\|Au - g\|_F^2, \tag{14}$$

where P denotes the set of indices where small image patches exist. The operator R_j first collects the patches similar to the reference patch located at j, and then stacks those patches into a matrix which should be of low rank. The second term is the total variation (TV) regularization [43], which is pixel based; TV regularization locally smooths the image and has an edge-preserving property. The last term in (14) is the data-fidelity term commonly used for deblurring under Gaussian noise.

Solving this model is not easy since it combines patch-based and pixel-based terms. Thus, we introduce an auxiliary variable v to split the model into

$$\min_{u,v} \sum_{j \in P} \|R_j v\|_{w,*} + \|\nabla u\|_1 + \frac{\lambda}{2}\|Au - g\|_F^2, \quad \text{s.t.}\ v = u. \tag{15}$$

Applying the split Bregman method [26] to (15), we get

$$\begin{cases} (u^{k+1}, v^{k+1}) = \arg\min_{u,v} \sum_{j \in P} \|R_j v\|_{w,*} + \|\nabla u\|_1 + \frac{\lambda}{2}\|Au - g\|_F^2 + \frac{\alpha}{2}\|u - v - b^k\|_F^2, \\ b^{k+1} = b^k + v^{k+1} - u^{k+1}, \end{cases} \tag{16}$$

where the parameter α > 0. The minimizations with respect to u and v can be done in an alternating fashion:

$$u^{k+1} = \arg\min_u \|\nabla u\|_1 + \frac{\lambda}{2}\|Au - g\|_F^2 + \frac{\alpha}{2}\|u - v^k - b^k\|_F^2, \tag{17}$$

$$v^{k+1} = \arg\min_v \sum_{j \in P} \|R_j v\|_{w,*} + \frac{\alpha}{2}\|u^{k+1} - v - b^k\|_F^2. \tag{18}$$
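The alternation (16)–(18) follows the generic split Bregman / ADMM pattern. A toy sketch with a quadratic data term and an ℓ1 prox standing in for the TV and low-rank subproblems (both stand-ins and all parameter values are ours, not the paper's):

```python
import numpy as np

def split_bregman(g, lam=1.0, alpha=1.0, iters=100):
    """Toy split Bregman for min_u (lam/2)||u - g||^2 + ||v||_1, s.t. v = u."""
    u = g.copy()
    v = np.zeros_like(g)
    b = np.zeros_like(g)
    for _ in range(iters):
        # u-subproblem (cf. (17)): quadratic, solved in closed form
        u = (lam * g + alpha * (v + b)) / (lam + alpha)
        # v-subproblem (cf. (18)): prox of the l1 norm = soft shrinkage
        t = u - b
        v = np.sign(t) * np.maximum(np.abs(t) - 1.0 / alpha, 0.0)
        # Bregman update, as in (16)
        b = b + v - u
    return u, v

g = np.array([3.0, 0.2, -2.0])
u, v = split_bregman(g)
print(u, v)  # u and v converge to the l1-prox of g, i.e. [2, 0, -1]
```

At convergence u = v, which is how the split recovers a solution of the original coupled problem.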
Since TV regularization is non-differentiable, many methods [9,10,19,44,45] have been devoted to solving it effectively. Here, we use a primal-dual method to obtain the optimal solution. The primal-dual formulation of the minimization (17) is

$$\min_u \max_p\; -\langle u, \operatorname{div} p\rangle - \delta_P(p) + \frac{\lambda}{2}\|Au - g\|_F^2 + \frac{\alpha}{2}\|u - v^k - b^k\|_F^2, \tag{19}$$

where P := { p ∈ R^{m×n} × R^{m×n} : ‖p‖_∞ ≤ 1 }. We employ the first-order primal-dual algorithm proposed by Chambolle and Pock [10] to get the optimal solution of (19), which yields the iterations

$$\begin{cases} p^{i+1} = \arg\min_p\; -\langle \nabla \bar u^i, p\rangle + \delta_P(p) + \frac{1}{2\theta}\|p - p^i\|_F^2, \\ u^{i+1} = \arg\min_u\; -\langle u, \operatorname{div} p^{i+1}\rangle + \frac{\lambda}{2}\|Au - g\|_F^2 + \frac{\alpha}{2}\|u - v^k - b^k\|_F^2 + \frac{1}{2\theta}\|u - u^i\|_F^2, \\ \bar u^{i+1} = 2u^{i+1} - u^i. \end{cases}$$

By a standard calculation, we get the closed-form solution for p:

$$p^{i+1} = \frac{p^i + \theta \nabla \bar u^i}{\max\bigl(1, |p^i + \theta \nabla \bar u^i|\bigr)}. \tag{20}$$

Meanwhile, the closed-form solution for u is

$$u^{i+1} = \frac{u^i + \theta\bigl(\operatorname{div} p^{i+1} + \lambda A^T g + \alpha(v^k + b^k)\bigr)}{1 + \theta(\lambda A^T A + \alpha)}. \tag{21}$$

To obtain the optimal solution for v, we can minimize problem (18) with respect to each similar-patch group R_j v separately, then aggregate all the estimated patches to get the final solution. For each similar-patch group R_j v, we have the minimization problem

$$\min_{R_j v} \|R_j v\|_{w,*} + \frac{\alpha}{2}\|R_j u^{k+1} - R_j v - R_j b^k\|_F^2.$$

Using WNNMG to solve this problem, we have

$$R_j v^{k+1} = U \begin{bmatrix} \operatorname{diag}(d_1, d_2, \ldots, d_n) \\ 0 \end{bmatrix} V^T,$$

where

$$R_j u^{k+1} - R_j b^k = U \begin{bmatrix} \operatorname{diag}(\sigma_{ub,1}, \sigma_{ub,2}, \ldots, \sigma_{ub,n}) \\ 0 \end{bmatrix} V^T,$$

and

$$d_i = \max\Bigl(0,\, \sigma_{ub,i} - \frac{w_i^2}{\sigma_{ub,i}}\Bigr), \qquad w_i = \frac{c_1\sqrt{N_{sp}}}{\sigma_i(R_j v^{k+1}) + \varepsilon}, \qquad i = 1, 2, \ldots, n.$$
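As an illustration, the inner loop (20)–(21) can be sketched for the pure denoising case A = I, where the division in (21) is pointwise. The discrete gradient/divergence pair, the step size θ, and all parameter values below are our choices for the sketch:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary, shape (2, m, n)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return np.stack([gx, gy])

def div(p):
    """Discrete divergence, the negative adjoint of grad."""
    px, py = p
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0] = px[0]; dx[1:-1] = px[1:-1] - px[:-2]; dx[-1] = -px[-2]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_inner_loop(g, v, b, lam=10.0, alpha=1.0, theta=0.25, iters=200):
    """Primal-dual solver for (17) with A = I, using the updates (20)-(21)."""
    u = g.copy()
    u_bar = g.copy()
    p = np.zeros((2,) + g.shape)
    for _ in range(iters):
        p = p + theta * grad(u_bar)
        p = p / np.maximum(1.0, np.sqrt((p ** 2).sum(axis=0)))  # (20), projection onto P
        u_new = (u + theta * (div(p) + lam * g + alpha * (v + b))) / (1 + theta * (lam + alpha))  # (21)
        u_bar = 2 * u_new - u
        u = u_new
    return u

rng = np.random.default_rng(1)
g = np.zeros((32, 32))
g[8:24, 8:24] = 1.0                                   # piecewise-constant test image
g_noisy = g + 0.1 * rng.standard_normal(g.shape)
u = tv_inner_loop(g_noisy, v=g_noisy, b=np.zeros_like(g))
print(np.abs(u - g).mean() < np.abs(g_noisy - g).mean())
```

The step size θ = 0.25 satisfies the primal-dual stability condition θ²‖∇‖² ≤ 1 for this discretization (‖∇‖² ≤ 8).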
We summarize the proposed algorithm in Algorithm 1. We remark that the WNNM denoising method [27] and the WNNMG method need several iterations to find the denoised estimate; however, in our deblurring problem only one WNNMG iteration is needed to obtain the optimal solution of R_j v^{k+1}, because the split Bregman method already introduces an outer loop. In addition, when computing the
Algorithm 1 Algorithm for image deblurring under Gaussian noise

Initialization: ū⁰ = g, u⁰ = g, p⁰ = 0, b⁰ = 0, λ, α, θ.
Output: u^{k+1}.
for k = 1 : N_O do
  for i = 0, 1, …, N_I do
    Compute p^{i+1} using (20);
    Compute u^{i+1} using (21);
    ū^{i+1} = 2u^{i+1} − u^i;
  end for
  u^{k+1} = u^{i+1};
  Compute v^{k+1} using WNNMG;
  Compute b^{k+1} = b^k + v^{k+1} − u^{k+1};
end for
weight vector w, the singular values of R_j v^{k+1} are unknown. Similar to the WNNM denoising method [27], we estimate the initial σ_i(R_j v^{k+1}) as

$$\sigma_i\bigl(R_j v^{k+1}\bigr) = \sqrt{\max\bigl(\sigma_{ub,i}^2 - c_2,\, 0\bigr)}.$$

We then run several more rounds of this procedure to get a better approximation of σ_i(R_j v^{k+1}).
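This re-estimation of the unknown singular values can be sketched as follows; the constants c₁, c₂ and the number of rounds are illustrative choices of ours, not the paper's tuned values:

```python
import numpy as np

def reweighted_shrinkage(M, c1=2.0, c2=0.5, eps=1e-8, rounds=3):
    """Shrink the singular values of M with weights that depend on the
    (unknown) singular values of the result, re-estimated over a few rounds."""
    U, s_ub, Vt = np.linalg.svd(M, full_matrices=False)
    n_sp = M.shape[1]
    # initial estimate: sigma_i = sqrt(max(sigma_ub_i^2 - c2, 0))
    s_est = np.sqrt(np.maximum(s_ub ** 2 - c2, 0.0))
    for _ in range(rounds):
        w = c1 * np.sqrt(n_sp) / (s_est + eps)
        # Garrote shrinkage with s = 2, as in the d_i formula above
        s_est = np.maximum(s_ub - w ** 2 / np.maximum(s_ub, eps), 0.0)
    return U @ np.diag(s_est) @ Vt

# near-rank-1 stack: the re-weighted shrinkage should keep only one mode
M = np.outer(np.ones(8), np.arange(1.0, 5.0)) + 0.01 * np.eye(8, 4)
X = reweighted_shrinkage(M)
print(np.linalg.matrix_rank(X))
```

The dominant singular value receives a tiny weight and survives almost unchanged, while the small ones are annihilated.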
4.2 Deblurring Under Salt-and-Pepper Noise

Salt-and-pepper noise often appears in digital storage and signal transmission [5,19,37]. Image restoration under this type of noise is similar to the image inpainting problem. Suppose that the noise level is r (0 ≤ r ≤ 1); the model of corruption by blur and salt-and-pepper noise at location s is

$$g_s = \begin{cases} 0, & \text{with probability } r/2, \\ 255, & \text{with probability } r/2, \\ (Au)_s, & \text{with probability } 1 - r. \end{cases}$$

Incorporating the nuclear norm regularization and TV regularization, we propose the modified model for deblurring under salt-and-pepper noise:

$$\min_u \sum_{j \in P} \|R_j u\|_{w,*} + \|\nabla u\|_1 + \lambda \|\Lambda(Au - g)\|_1, \tag{22}$$

where

$$\Lambda_s = \begin{cases} 0, & \text{if } s \in N, \\ 1, & \text{otherwise}, \end{cases}$$

and N is the set of salt-and-pepper noise locations. The last term is the data-fidelity term commonly used for deblurring under salt-and-pepper noise [5,37]. Here, we again use the split Bregman method [26] to solve the model. After introducing an auxiliary variable v, we have

$$\min_{u,v} \sum_{j \in P} \|R_j v\|_{w,*} + \|\nabla u\|_1 + \lambda \|\Lambda(Au - g)\|_1, \quad \text{s.t.}\ v = u. \tag{23}$$
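For illustration, the degradation model and the mask Λ can be generated as follows, assuming the noise locations are known (in practice they must be detected, e.g. by a median-type filter); the identity blur and all parameter values are our simplifications:

```python
import numpy as np

def degrade_salt_pepper(u, A=None, r=0.3, rng=None):
    """Blur-then-salt-and-pepper degradation and the inlier mask Lambda.

    A is any linear operator (identity here for brevity); r is the noise level.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    Au = u if A is None else A(u)
    g = Au.copy()
    roll = rng.random(u.shape)
    g[roll < r / 2] = 0.0                      # pepper pixels
    g[(roll >= r / 2) & (roll < r)] = 255.0    # salt pixels
    Lam = (roll >= r).astype(float)            # Lambda = 0 at the noisy locations
    return g, Lam

u = np.full((16, 16), 100.0)
g, Lam = degrade_salt_pepper(u, r=0.3)
print(Lam.mean())  # roughly 1 - r
```

Multiplying the residual Au − g by Λ, as in (22), simply discards the corrupted pixels from the data-fidelity term.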
Fig. 2 Original images. From left to right and top to bottom: Barbara, Boat, Cameraman, Couple, Goldhill, Lena, Man, Stream
Then we get

$$\begin{cases} (u^{k+1}, v^{k+1}) = \arg\min_{u,v} \sum_{j \in P} \|R_j v\|_{w,*} + \|\nabla u\|_1 + \lambda \|\Lambda(Au - g)\|_1 + \frac{\alpha}{2}\|u - v - b^k\|_F^2, \\ b^{k+1} = b^k + v^{k+1} - u^{k+1}, \end{cases} \tag{24}$$

where the parameter α > 0. The minimizations with respect to u and v can be done in an alternating fashion:

$$u^{k+1} = \arg\min_u \|\nabla u\|_1 + \lambda \|\Lambda(Au - g)\|_1 + \frac{\alpha}{2}\|u - v^k - b^k\|_F^2, \tag{25}$$

$$v^{k+1} = \arg\min_v \sum_{j \in P} \|R_j v\|_{w,*} + \frac{\alpha}{2}\|u^{k+1} - v - b^k\|_F^2. \tag{26}$$

The minimization (25) is convex, but it involves two L1-norm terms. Thus, we use the effective primal-dual algorithm proposed in [35] to get the exact solution. The primal-dual formulation of (25) is

$$\min_u \max_{p,q}\; -\langle u, \operatorname{div} p\rangle - \delta_P(p) + \langle \Lambda(Au - g), q\rangle - \delta_Q(q) + \frac{\alpha}{2}\|u - v^k - b^k\|_F^2, \tag{27}$$

where Q = { q : ‖q‖_∞ ≤ λ }. Employing the algorithm proposed in [35], we obtain the iterations

$$\begin{cases} p^{i+1} = \arg\min_p\; -\langle \nabla \bar u^i, p\rangle + \delta_P(p) + \frac{1}{2\theta}\|p - p^i\|_F^2, \\ q^{i+1} = \arg\min_q\; -\langle \Lambda(A\bar u^i - g), q\rangle + \delta_Q(q) + \frac{1}{2\theta}\|q - q^i\|_F^2, \\ u^{i+1} = \arg\min_u\; -\langle u, \operatorname{div} p^{i+1}\rangle + \langle \Lambda(Au - g), q^{i+1}\rangle + \frac{\alpha}{2}\|u - v^k - b^k\|_F^2 + \frac{1}{2\theta}\|u - u^i\|_F^2, \\ \bar u^{i+1} = 2u^{i+1} - u^i. \end{cases}$$

The minimization with respect to p was discussed before and has the closed-form solution (20). By a standard calculation, we get the closed-form solution for q:

$$q^{i+1} = \lambda\, \frac{q^i + \theta \Lambda(A\bar u^i - g)}{\max\bigl(\lambda, |q^i + \theta \Lambda(A\bar u^i - g)|\bigr)}. \tag{28}$$
Table 1 Comparison of the PSNR (dB) of the recovered results by different methods, with respect to the noise level σ = 2

Image   Kernel     ROF     ForWaRD  Framelet  NLTV    BM3DDEB  Ours
Bar.    Box        23.31   24.14    24.45     23.76   24.97    24.97
        Gaussian   23.43   24.01    24.31     23.57   24.40    24.49
        Motion     24.05   25.01    25.93     25.64   28.02    28.42
Boat.   Box        27.97   27.69    28.90     28.86   28.97    29.77
        Gaussian   27.91   27.80    28.62     28.64   28.67    29.23
        Motion     28.24   28.34    29.53     29.57   29.69    31.11
Cam.    Box        25.44   25.66    26.43     26.29   26.75    27.83
        Gaussian   24.83   25.06    25.27     24.92   25.61    26.17
        Motion     26.62   26.89    28.11     28.38   28.14    30.21
Cou.    Box        27.87   27.56    28.89     28.67   28.97    29.59
        Gaussian   27.51   27.46    28.08     28.10   28.25    28.63
        Motion     27.80   28.04    29.09     28.99   29.62    30.62
Gold.   Box        27.78   28.37    29.77     28.36   29.76    30.11
        Gaussian   28.02   28.52    29.50     28.25   29.55    30.14
        Motion     28.07   28.85    30.17     28.92   30.47    31.22
Lena    Box        28.79   29.72    30.65     29.64   31.06    31.57
        Gaussian   30.01   30.45    31.76     30.73   32.00    32.64
        Motion     29.56   30.60    31.82     30.97   32.59    33.64
Man     Box        26.74   26.42    27.55     27.26   27.57    28.21
        Gaussian   27.30   26.73    27.88     27.58   27.88    28.54
        Motion     27.53   27.36    28.73     28.46   28.76    30.00
Str.    Box        24.61   24.27    25.16     24.96   25.08    25.34
        Gaussian   24.77   24.35    24.96     24.89   25.05    25.37
        Motion     25.55   24.96    26.35     26.06   26.21    26.98
Aver.   Box        26.56   26.73    27.73     27.23   27.89    28.42
        Gaussian   26.72   26.80    27.55     27.09   27.68    28.15
        Motion     27.18   27.51    28.72     28.37   29.19    30.28
Meanwhile, the closed-form solution for u is

$$u^{i+1} = \frac{u^i + \theta\bigl(\operatorname{div} p^{i+1} - A^T(\Lambda q^{i+1}) + \alpha(v^k + b^k)\bigr)}{1 + \theta\alpha}. \tag{29}$$
We summarize the proposed algorithm for deblurring under salt-and-pepper noise in Algorithm 2.
5 Numerical Experiments

In this section, we first present experimental results of the proposed Algorithm 1 for image deblurring under Gaussian noise. The set of test images is shown in Fig. 2. All test images are of size 512 × 512, except Cameraman, which is 256 × 256.
Table 2 Comparison of the PSNR (dB) of the recovered results by different methods, with respect to the noise level σ = 5

Image   Kernel     ROF     ForWaRD  Framelet  NLTV    BM3DDEB  Ours
Bar.    Box        22.73   23.57    23.79     23.12   24.03    24.05
        Gaussian   23.11   23.87    24.02     23.32   24.09    24.14
        Motion     23.13   23.74    24.22     23.72   25.02    25.15
Boat.   Box        26.33   26.30    26.96     27.04   27.01    27.69
        Gaussian   26.74   27.15    27.62     27.75   27.69    28.05
        Motion     25.97   26.26    26.96     27.21   27.24    28.24
Cam.    Box        24.00   23.83    24.67     24.73   24.92    25.84
        Gaussian   24.17   24.28    24.62     24.64   24.79    25.27
        Motion     24.28   24.15    25.15     25.32   25.67    27.02
Cou.    Box        26.30   26.18    26.90     26.84   27.00    27.48
        Gaussian   26.50   26.82    27.17     27.18   27.27    27.51
        Motion     25.80   26.01    26.62     26.64   27.03    27.71
Gold.   Box        26.32   27.39    28.06     26.88   28.24    28.52
        Gaussian   26.98   28.04    28.61     27.59   28.71    29.15
        Motion     26.29   27.29    28.14     27.04   28.55    28.95
Lena    Box        27.43   28.52    28.90     28.23   29.38    29.96
        Gaussian   28.70   29.93    30.46     29.64   30.85    31.34
        Motion     27.48   28.55    29.43     28.78   30.27    31.10
Man     Box        25.41   25.38    26.01     25.93   26.06    26.67
        Gaussian   26.37   26.35    27.00     26.94   27.01    27.54
        Motion     25.62   25.55    26.43     26.37   26.61    27.53
Str.    Box        23.52   23.38    23.92     23.83   23.75    24.03
        Gaussian   23.95   23.89    24.32     24.31   24.26    24.60
        Motion     23.84   23.50    24.31     24.30   24.24    24.80
Aver.   Box        25.26   25.57    26.15     25.83   26.30    26.79
        Gaussian   25.82   26.29    26.73     26.42   26.83    27.20
        Motion     25.30   25.63    26.41     26.17   26.83    27.56
Algorithm 2 Algorithm for image deblurring under salt-and-pepper noise

Initialization: ū⁰ = g, u⁰ = g, p⁰ = 0, b⁰ = 0, λ, α, θ.
Output: u^{k+1}.
for k = 1 : N_O do
  for i = 0, 1, …, N_I do
    Compute p^{i+1} using (20);
    Compute q^{i+1} using (28);
    Compute u^{i+1} using (29);
    ū^{i+1} = 2u^{i+1} − u^i;
  end for
  u^{k+1} = u^{i+1};
  Compute v^{k+1} using WNNMG;
  Compute b^{k+1} = b^k + v^{k+1} − u^{k+1};
end for
Table 3 Comparison of the PSNR (dB) of the recovered results by different methods, with respect to the noise level σ = 10

Image   Kernel     ROF     ForWaRD  Framelet  NLTV    BM3DDEB  Ours
Bar.    Box        22.40   23.15    23.31     22.66   23.49    23.58
        Gaussian   22.81   23.60    23.67     23.05   23.86    23.85
        Motion     22.57   23.14    23.37     22.88   23.80    23.93
Boat.   Box        25.06   25.15    25.72     25.71   25.58    26.24
        Gaussian   25.97   26.36    26.68     26.76   26.77    27.08
        Motion     24.67   24.94    25.29     25.50   25.55    26.36
Cam.    Box        23.09   22.35    23.50     23.38   23.67    24.38
        Gaussian   23.55   23.43    24.04     24.11   24.25    24.62
        Motion     22.89   22.26    23.43     23.52   23.92    24.81
Cou.    Box        25.06   24.99    25.62     25.46   25.53    25.92
        Gaussian   25.76   26.01    26.31     26.32   26.41    26.57
        Motion     24.48   24.66    24.99     25.01   25.23    25.76
Gold.   Box        25.43   26.47    26.97     25.72   27.08    27.36
        Gaussian   26.23   27.33    27.66     26.64   27.87    28.10
        Motion     25.38   26.33    26.65     25.78   27.20    27.35
Lena    Box        26.47   27.46    27.82     26.92   28.16    28.61
        Gaussian   27.82   28.96    29.17     28.43   29.72    29.94
        Motion     26.18   27.21    27.53     27.10   28.38    28.98
Man     Box        24.55   24.51    24.98     24.87   24.92    25.56
        Gaussian   25.61   25.72    26.12     26.12   26.17    26.66
        Motion     24.42   24.38    24.87     24.81   25.01    25.73
Str.    Box        22.74   22.63    23.03     22.98   22.85    23.13
        Gaussian   23.37   23.38    23.71     23.75   23.62    23.89
        Motion     22.77   22.61    23.08     23.09   23.06    23.39
Aver.   Box        24.35   24.59    25.12     24.71   25.16    25.60
        Gaussian   25.14   25.60    25.92     25.65   26.08    26.34
        Motion     24.17   24.44    24.90     24.71   25.27    25.79
In our experiments, each test image is first blurred by a kernel, followed by the addition of Gaussian noise with zero mean. Three types of blur kernels are considered: a 9 × 9 uniform blur; a 9 × 9 Gaussian blur with standard deviation 2; and a motion blur of length 15 pixels and orientation 30°. We also consider three standard deviations of the Gaussian noise: 2, 5 and 10. The peak signal-to-noise ratio (PSNR) is chosen as the quantitative measure of image quality, defined as

$$\mathrm{PSNR} = 20 \log_{10} \frac{255}{\frac{1}{\sqrt{mn}}\|\tilde u - u\|_F}, \tag{30}$$

where ũ is the recovered image and u is the true image.
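Eq. (30) translates directly to code; the toy input below is our own:

```python
import numpy as np

def psnr(u_rec, u_true):
    """PSNR in dB as in (30): 20*log10(255 / (||u_rec - u_true||_F / sqrt(m*n)))."""
    m, n = u_true.shape
    rmse = np.linalg.norm(u_rec - u_true) / np.sqrt(m * n)
    return 20.0 * np.log10(255.0 / rmse)

u = np.zeros((4, 4))
print(psnr(u + 2.55, u))  # rmse = 2.55, so 20*log10(100) = 40 dB
```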
Fig. 3 Recovered results (with PSNR (dB)) of different methods on image Cameraman corrupted by 9 × 9 uniform blur and Gaussian noise with standard deviation σ = 5. a Original image. b Degraded: 20.57. c ROF: 24.00. d ForWaRD: 23.83. e Framelet: 24.67. f NLTV: 24.73. g BM3DDEB: 24.92. h Ours: 25.84
To demonstrate the performance of the proposed algorithm, we compare it with five related image deblurring methods. The restoration results of the compared methods are obtained with the codes provided by their respective authors. ROF This is the TV based model [43], which is not easy to solve directly. We use the method proposed by Zhang et al. [49] to solve the ROF model. If the kernel type is the uniform blur or Gaussian blur, the parameter μ is set to 4, 15, and 25 when the noise standard deviation is 2, 5, and 10, respectively. If the kernel type is the motion blur, μ is set to 8, 15, and 40, respectively. ForWaRD This method [39] has two stages: it first removes the blur with a Wiener filter and then removes the resulting colored Gaussian noise with a wavelet based method. In the experiments, we use the default parameter setting provided by the authors in their code.
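For intuition, the Wiener-type inversion in ForWaRD's first stage can be sketched in the Fourier domain as follows. This is a minimal illustration under periodic boundary conditions; the function, its regularization weight lam, and the kernel embedding are our simplifications, not the authors' implementation:

```python
import numpy as np

def wiener_deconv(g, k, lam=1e-2):
    """Regularized Wiener-type inversion in the Fourier domain.

    g   : degraded image (2-D array)
    k   : blur kernel (2-D array, smaller than g)
    lam : regularization weight (illustrative value)
    """
    # Embed the kernel in an image-sized array, centered at the origin
    # so that the FFT implements circular convolution with k.
    K = np.zeros_like(g, dtype=float)
    kh, kw = k.shape
    K[:kh, :kw] = k
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    Kf = np.fft.fft2(K)
    Gf = np.fft.fft2(g)
    # Regularized inverse filter: conj(K) G / (|K|^2 + lam)
    Uf = np.conj(Kf) * Gf / (np.abs(Kf) ** 2 + lam)
    return np.real(np.fft.ifft2(Uf))
```

In ForWaRD, this inversion amplifies noise into colored noise, which the second (wavelet shrinkage) stage then suppresses.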
Fig. 4 Recovered results (with PSNR (dB)) of different methods on image Goldhill corrupted by 9 × 9 Gaussian blur with standard deviation σ = 2 and Gaussian noise with standard deviation σ = 5. a Original image. b Degraded: 26.34. c ROF: 26.98. d ForWaRD: 28.04. e Framelet: 28.61. f NLTV: 27.59. g BM3DDEB: 28.71. h Our: 29.15
Framelet based method This method [7] uses the linear spline wavelet frame system to sparsely approximate images. We use ||u^{i+1} − u^i||_2 / ||u^{i+1}||_2 < 10^{−4} as the stopping criterion. The choice of framelet is fixed to be the 4-level piecewise linear framelet, and the parameter λ = 0.05. If the kernel type is the uniform blur or Gaussian blur, the parameter μ is set to 0.04, 0.12, and 0.3 when the noise standard deviation is 2, 5, and 10, respectively. If the kernel type is the motion blur, μ is set to 0.04, 0.2, and 0.45, respectively. NLTV based method Zhang et al. [49] solved the NLTV model via preconditioned Bregmanized operator splitting. The resulting algorithm is more efficient than the usual NLTV based algorithms. If the kernel type is the uniform blur or Gaussian blur, the parameter μ is set to
Fig. 5 Recovered results (with PSNR (dB)) of different methods on image Boat corrupted by motion blur of length 15 pixels and orientation 30◦ and Gaussian noise with standard deviation σ = 5. a Original image. b Degraded: 22.92. c ROF: 25.97. d ForWaRD: 26.26. e Framelet: 26.96. f NLTV: 27.21. g BM3DDEB: 27.24. h Our: 28.24
Fig. 6 Convergence behavior of Algorithm 1 on image Cameraman corrupted by 9 × 9 uniform blur and Gaussian noise with standard deviation σ = 5. In (b), u∗ is the exact solution. a ||u^{i+1} − u^i||_2 / ||u^{i+1}||_2. b log(||u − u∗||_2). c PSNR
Fig. 7 Recovered results (with PSNR (dB)) of different methods on image Cameraman corrupted by 7 × 7 Gaussian kernel with standard deviation 5 and salt-and-pepper noise with noise level 30 % (first row), 50 % (second row), and 70 % (third row). a Degraded: 10.11. b [5]: 37.65. c [37]: 38.90. d Our: 38.84. e Degraded: 8.03. f [5]: 33.76. g [37]: 34.66. h Our: 35.28. i Degraded: 6.57. j [5]: 29.14. k [37]: 29.99. l Our: 30.45
be 25, 50, and 65 when the noise standard deviation is 2, 5, and 10, respectively. If the kernel type is the motion blur, the parameter μ is set to 40, 50, and 70, respectively. BM3DDEB BM3DDEB [16] is an extension of the BM3D filter to the deblurring problem. Like BM3D, it has two steps. The first step gives an initial estimate by applying BM3D to remove the colored noise. The second step uses a regularized Wiener inversion to obtain the final deblurred image. We use the default parameter setting provided by the authors in their code. For the proposed Algorithm 1, we use ||u^{i+1} − u^i||_2 / ||u^{i+1}||_2 < 10^{−3} as the stopping criterion of the inner loop. We set N_O = 50, N_I = 50, c_1 = 60, c_2 = 5, θ = 0.4, α = 2. The patch size is set to 6 × 6, the step between key patches to 4, the non-local patch search window to 30, and the number of similar patches to 30. If the kernel type is the uniform blur or Gaussian blur, the parameter μ is set to 150, 30, and 10 when the noise standard deviation is 2, 5, and 10, respectively. If the kernel type is the motion blur, μ is set to 90, 25, and 10, respectively. In Tables 1, 2, and 3, we report the PSNR values obtained by each algorithm on all test images degraded by the different blurs and noise levels. These values show that the proposed algorithm achieves competitive results.
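The patch-group update at the heart of Algorithm 1 can be illustrated by weighted singular value thresholding. The sketch below solves the weighted nuclear norm proximal problem only in the special case of non-descending weights, where the solution reduces to weighted soft-thresholding of the singular values; the closed form proved in this paper covers weights in arbitrary order, so this simplified version is for illustration only:

```python
import numpy as np

def weighted_svt(Y, w):
    """Weighted singular value thresholding.

    Solves min_X 0.5 ||Y - X||_F^2 + sum_i w_i sigma_i(X) in the special
    case of non-descending weights w_1 <= ... <= w_r, where the solution
    is weighted soft-thresholding of the singular values of Y.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # Shrink each singular value by its weight, clipping at zero.
    s_shrunk = np.maximum(s - w, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

In the full algorithm, Y would be the matrix formed by stacking the 30 similar 6 × 6 patches of a group as columns, and the weights would depend on the estimated singular values.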
Fig. 8 Recovered results (with PSNR (dB)) of different methods on image Cameraman corrupted by 7 × 7 uniform kernel and salt-and-pepper noise with noise level 30 % (first row), 50 % (second row), and 70 % (third row). a Degraded: 10.05. b [5]: 37.73. c [37]: 38.07. d Our: 38.97. e Degraded: 7.98. f [5]: 33.54. g [37]: 34.61. h Our: 35.32. i Degraded: 6.60. j [5]: 29.31. k [37]: 30.04. l Our: 30.64
The visual quality of the recovered images can be evaluated from Figs. 3, 4, and 5. The proposed algorithm clearly preserves sharper image edges. Although it provides better recovered images, the proposed method is computationally heavier: with our implementation, processing a 256 × 256 Cameraman image degraded by uniform blur and Gaussian noise with standard deviation 5 takes around 184 s, while Framelet, NLTV, and BM3DDEB need 12.4, 75, and 1.6 s, respectively (all experiments were performed in Matlab R2014a on a PC equipped with a 2.70 GHz CPU). It should be noted that, since WNNMG runs for only 1 iteration in every loop of Algorithm 1, there is not much difference between the results of Algorithm 1 with WNNMG (25.84 dB) and with WNNM (25.79 dB); however, Algorithm 1 with WNNM needs less computational time (164 s). The computational cost of our algorithm could be reduced by implementing the most time-consuming steps in the C programming language, as done for NLTV and BM3DDEB. Figure 6 shows the relative error between successive iterations of the recovered image, similar to [5,44], the error log(||u − u∗||_2), similar to [26], and the PSNR, similar to [6], at each iteration when deblurring the image Cameraman corrupted by uniform blur and Gaussian noise with standard deviation σ = 5. The results show that Algorithm 1 converges.
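The relative-change quantity plotted in Fig. 6a, which also serves as the stopping criterion of the inner loop, is simply the following (the helper name is ours):

```python
import numpy as np

def relative_change(u_new, u_old):
    """Relative change ||u^{i+1} - u^i||_2 / ||u^{i+1}||_2 between iterates."""
    return np.linalg.norm(u_new - u_old) / np.linalg.norm(u_new)
```

Iteration stops once this value drops below the chosen tolerance, e.g. 10^{-3} for the inner loop of Algorithm 1 and 10^{-4} for the framelet based method.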
Next, we present some results of the proposed Algorithm 2. Figures 7 and 8 compare the deblurring results obtained by the algorithm of [5], the K-SVD based algorithm of [37], and Algorithm 2. It is observed that Algorithm 2 achieves much better restoration quality when the noise level is high. This is because at high noise levels the learned over-complete dictionary is not very representative, whereas the low rank based method can still discover the underlying structure shared by similar patches. This allows Algorithm 2 to achieve state-of-the-art results. In these experiments, we set N_I = 500, c_1 = 40, c_2 = 2, α = 1, θ = 1, λ = 50,000. The other parameters are set to the same values as in Algorithm 1.
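For reference, the salt-and-pepper degradation used in Figs. 7 and 8 can be generated as in the following sketch (the function name and the equal salt/pepper split are our assumptions about the standard noise model):

```python
import numpy as np

def add_salt_and_pepper(u, level, rng=None):
    """Corrupt image u with salt-and-pepper noise at the given level.

    A fraction `level` of pixels is replaced, half by 0 (pepper) and
    half by 255 (salt); level = 0.3 corresponds to the 30 % setting.
    """
    rng = np.random.default_rng(rng)
    g = u.astype(float).copy()
    r = rng.random(u.shape)
    g[r < level / 2] = 0.0
    g[(r >= level / 2) & (r < level)] = 255.0
    return g
```

The image is blurred first and the impulse noise is applied afterwards, matching the degradation described in the figure captions.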
6 Conclusion
This paper first presented a closed-form solution of the weighted nuclear norm minimization problem for weights in arbitrary order, and then studied the image deblurring problem based on weighted nuclear norm regularization combined with total variation regularization. The first regularizer enforces that the matrix constructed by stacking similar patches together has low rank, while the second makes the recovered image locally smooth. The experiments show that the proposed deblurring algorithm is competitive with the compared methods. Since low rank minimization based denoising and deblurring under Gaussian noise achieve promising results, we also generalized the approach to deblurring under salt-and-pepper noise; the resulting algorithm achieved state-of-the-art results. In the future, we will investigate other image applications based on low rank minimization.
References
1. Aharon, M., Elad, M., Bruckstein, A.M.: K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54, 4311–4322 (2006)
2. Breiman, L.: Better subset regression using the nonnegative garrote. Technometrics 37, 373–384 (1995)
3. Buades, A., Coll, B., Morel, J.M.: A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4, 490–530 (2005)
4. Cai, J., Candès, E., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20, 1956–1982 (2010)
5. Cai, J., Chan, R., Nikolova, M.: Fast two-phase image deblurring under impulse noise. J. Math. Imaging Vis. 36, 46–53 (2010)
6. Cai, J., Huang, S., Ji, H., Shen, Z., Ye, G.: Data-driven tight frame construction and image denoising. Appl. Comput. Harmon. Anal. 37, 89–105 (2014)
7. Cai, J., Osher, S., Shen, Z.: Split Bregman methods and frame based image restoration. Multiscale Model. Simul. 8, 337–369 (2010)
8. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9, 717–772 (2009)
9. Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20, 89–97 (2004)
10. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120–145 (2011)
11. Chantas, G., Galatsanos, N.P., Molina, R., Katsaggelos, A.K.: Variational Bayesian image restoration with a product of spatially weighted total variation image priors. IEEE Trans. Image Process. 19, 351–362 (2010)
12. Chan, T., Zhou, H.M.: Total variation wavelet thresholding. J. Sci. Comput. 32, 315–341 (2007)
13. Chen, K., Dong, H., Chan, K.S.: Reduced rank regression via adaptive nuclear norm penalization. Biometrika 100, 901–920 (2013)
14. Cho, T.S., Joshi, N., Zitnick, C.L., Kang, S.B., Szeliski, R., Freeman, W.T.: A content-aware image prior. In: Proceedings of CVPR, pp. 169–176 (2010)
15. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Trans. Image Process. 16, 2080–2095 (2007)
16. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image restoration by sparse 3D transform-domain collaborative filtering. In: Proceedings of SPIE Electronic Imaging (2008)
17. Danielyan, A., Katkovnik, V., Egiazarian, K.: BM3D frames and variational image deblurring. IEEE Trans. Image Process. 21, 1715–1728 (2012)
18. Daubechies, I., Teschke, G.: Variational image restoration by means of wavelets: simultaneous decomposition, deblurring, and denoising. Appl. Comput. Harmon. Anal. 19, 1–16 (2005)
19. Dong, Y., Hintermüller, M., Neri, M.: An efficient primal-dual method for L1-TV image restoration. SIAM J. Imaging Sci. 2, 1168–1189 (2009)
20. Efros, A., Leung, T.: Texture synthesis by non-parametric sampling. In: Proceedings of 7th IEEE ICCV, pp. 1033–1038 (1999)
21. Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15, 3736–3745 (2006)
22. Elad, M., Figueiredo, M., Ma, Y.: On the role of sparse and redundant representations in image processing. Proceedings of the IEEE (Special Issue on Applications of Sparse Representation and Compressive Sensing) 98, 972–982 (2010)
23. Fazel, M.: Matrix rank minimization with applications. Ph.D. thesis, Stanford University (2002)
24. Gao, H.: Wavelet shrinkage denoising using the non-negative garrote. J. Comput. Graph. Stat. 7, 469–488 (1998)
25. Gilboa, G., Osher, S.: Nonlocal operators with applications to image processing. Multiscale Model. Simul. 7, 1005–1028 (2008)
26. Goldstein, T., Osher, S.: The split Bregman method for L1 regularized problems. SIAM J. Imaging Sci. 2, 323–343 (2009)
27. Gu, S., Zhang, L., Zuo, W., Feng, X.: Weighted nuclear norm minimization with application to image denoising. In: Proceedings of IEEE CVPR, pp. 2862–2869 (2014)
28. Hu, Z., Huang, J., Yang, M.: Single image deblurring with adaptive dictionary learning. In: Proceedings of IEEE ICIP, pp. 1169–1172 (2010)
29. Ji, H., Liu, C., Shen, Z., Xu, Y.: Robust video denoising using low rank matrix completion. In: Proceedings of IEEE CVPR, pp. 1791–1798 (2010)
30. Liu, G., Lin, Z., Yu, Y.: Robust subspace segmentation by low-rank representation. In: Proceedings of ICML, pp. 663–670 (2010)
31. Lou, Y., Bertozzi, A., Soatto, S.: Direct sparse deblurring. J. Math. Imaging Vis. 39, 1–12 (2011)
32. Lou, Y., Zhang, X., Osher, S., Bertozzi, A.: Image recovery via nonlocal operators. J. Sci. Comput. 42, 185–197 (2010)
33. Lu, C., Tang, J., Yan, S., Lin, Z.: Generalized nonconvex nonsmooth low-rank minimization. In: Proceedings of IEEE CVPR, pp. 4130–4137 (2014)
34. Lu, C., Zhu, C., Xu, C., Yan, S., Lin, Z.: Generalized singular value thresholding. In: Proceedings of AAAI (2015)
35. Ma, L., Ng, M., Yu, J., Zeng, T.: Efficient box-constrained TV-type-l1 algorithms for restoring images with impulse noise. J. Comput. Math. 31, 249–270 (2013)
36. Ma, L., Moisan, L., Yu, J., Zeng, T.: A dictionary learning approach for Poisson image deblurring. IEEE Trans. Med. Imaging 32, 1277–1289 (2013)
37. Ma, L., Yu, J., Zeng, T.: Sparse representation prior and total variation-based image deblurring under impulse noise. SIAM J. Imaging Sci. 6, 2258–2284 (2013)
38. Maleki, A., Narayan, M., Baraniuk, R.G.: Anisotropic nonlocal means denoising. Appl. Comput. Harmon. Anal. 35, 452–482 (2013)
39. Neelamani, R., Choi, H., Baraniuk, R.: ForWaRD: Fourier-wavelet regularized deconvolution for ill-conditioned systems. IEEE Trans. Signal Process. 52, 418–433 (2004)
40. Olshausen, B., Field, D.: Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381, 607–609 (1996)
41. Quan, Y., Ji, H., Shen, Z.: Data-driven multi-scale non-local wavelet frame construction and image recovery. J. Sci. Comput. (2014). doi:10.1007/s10915-014-9893-2
42. Ron, A., Shen, Z.: Affine systems in L2(Rd): the analysis of the analysis operator. J. Funct. Anal. 148, 408–447 (1997)
43. Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60, 259–268 (1992)
44. Wen, Y., Ng, M., Huang, Y.: Efficient total variation minimization methods for color image restoration. IEEE Trans. Image Process. 17, 2081–2088 (2008)
45. Wu, C., Tai, X.-C.: Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM J. Imaging Sci. 3, 300–339 (2010)
46. Xie, Q., Meng, D., Gu, S., Zhang, L., Zuo, W., Feng, X., Xu, Z.: On the optimal solution of weighted nuclear norm minimization. Technical Report (2014)
47. Yan, M., Yang, Y., Osher, S.: Exact low-rank matrix completion from sparsely corrupted entries via adaptive outlier pursuit. J. Sci. Comput. 56, 433–449 (2013)
48. Zeng, T., Malgouyres, F.: Matching pursuit shrinkage in Hilbert spaces. Sig. Process. 91, 2754–2766 (2011)
49. Zhang, X., Burger, M., Bresson, X., Osher, S.: Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imaging Sci. 3, 253–276 (2010)
50. Zhang, J.P., Chen, K., Yu, B.: An iterative Lagrange multiplier method for constrained total-variation-based image denoising. SIAM J. Numer. Anal. 50, 983–1003 (2012)
51. Zoran, D., Weiss, Y.: From learning models of natural image patches to whole image restoration. In: Proceedings of ICCV, pp. 479–486 (2011)