Vis Comput DOI 10.1007/s00371-014-0920-y
ORIGINAL ARTICLE
Non-blind deblurring of structured images with geometric deformation

Xin Zhang · Fuchun Sun · Guangcan Liu · Yi Ma
© Springer-Verlag Berlin Heidelberg 2014
Abstract Non-blind deconvolution, i.e., restoring a sharp version of a given blurred image when the blur kernel is known, is a fundamental step in image deblurring. While the problem has been extensively studied, existing methods have largely ignored an important fact: geometric deformation can significantly alter the statistical characteristics of an image and introduce an additional blurring effect. In this paper, we show how to enhance non-blind deconvolution by recovering and undoing the deformation while deconvolving a given blurred image. We show that this enhancement applies to almost all popular regularizers that have been proposed for image deblurring, such as total variation and its variants. We conduct extensive simulations and experiments on real images and verify that incorporating geometric deformation in deconvolution can significantly improve the final deblurring results. Combined with existing blur kernel estimation techniques, our method can also be used to enhance blind image deblurring.

Keywords Non-blind deconvolution · Geometric deformation · Total variation

X. Zhang (B) · F. Sun
Department of Computer Science and Technology, Tsinghua University, Beijing, China
e-mail: [email protected]

G. Liu
University of Illinois at Urbana-Champaign, Champaign, USA

Y. Ma
Microsoft Research Asia, Beijing, China

1 Introduction

In many applications of image processing, computer vision, and graphics, we need to enhance the clarity of a given blurred
image or images. This naturally leads to the challenging problem of image deblurring, which has been extensively studied in the literature for many years [8,15,16]. Numerous approaches have been proposed to solve different sub-problems or variations of this task, including motion deblurring [1,3,4,7,10,11,18,19,23], non-blind deconvolution [6,24], and, more recently, blind deconvolution [2,12,13,20]. In this work, we focus on the non-blind deconvolution (i.e., non-blind deblurring) problem, with brief attention to blind deconvolution in our experiments.

In addition to the blurring effect caused by motion or defocus, geometric deformation can also affect the sharpness of an image. As the example in Fig. 1 shows, if we apply a geometric transformation (say, a homography) to a planar texture, the perspective deformation and down-sampling will introduce a non-uniform "blurring" effect in the resulting image. Conventional image deblurring and restoration methods often ignore this compounding effect of geometric deformation. In this work, we show that one can recover much higher-quality deblurred textures when the deformation is properly handled in the deblurring process. This is especially the case for structured images, such as building facades and text, that are rich in sharp edges. Note that the significance of this work is not limited to non-blind deconvolution: handling deformation properly in the recovery process is equally important for other image enhancement tasks such as denoising and super-resolution.
1.1 Previous works on non-blind deconvolution

Given an observed blurred image B, a popular way to model the blurring effect is to express the blurred image as the linear convolution of a sharp image I with a blur kernel K:
Fig. 1 Illustrating the blurring effect caused by geometric deformations. a A sharp facade image in frontal view. b The deformed version of the facade. c The gradient distribution histograms of the sharp image and its deformed version (the gradient here is the image gradient). It can be seen that the deformed version has far fewer sharp gradients than the original image, especially in regions with large deformation; i.e., geometric deformation can suppress sharp edges and lead to a non-uniform blurring effect
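The effect illustrated in Fig. 1 is easy to reproduce numerically. The following Python sketch is our own construction, not the paper's code: it builds a synthetic striped "facade", emulates deformation plus down-sampling with a Gaussian anti-aliasing prefilter followed by an affine shear-and-minification (illustrative stand-ins for a full homography), and counts how many horizontal gradients remain sharp. The stripe width, filter scale, affine matrix, and threshold are all illustrative choices.

```python
import numpy as np
from scipy.ndimage import affine_transform, gaussian_filter

# sharp synthetic "facade": vertical stripes with unit jumps
I = np.tile((np.arange(128) // 8 % 2).astype(float), (128, 1))

# emulate deformation + down-sampling: an anti-aliasing Gaussian
# prefilter followed by an affine map that shears and minifies
A = np.array([[1.0, 0.25],
              [0.0, 1.3]])           # shear + ~1.3x minification
deformed = affine_transform(gaussian_filter(I, 1.5), A, order=1)

def sharp_gradient_fraction(img, thresh=0.5):
    # fraction of horizontal gradients that remain "sharp"
    return (np.abs(np.diff(img, axis=1)) > thresh).mean()

# the deformed/resampled image has far fewer sharp gradients
assert sharp_gradient_fraction(deformed) < sharp_gradient_fraction(I)
```

This mirrors the histogram comparison in Fig. 1c: the resampling pipeline spreads each unit edge over several pixels, so the tail of large gradients shrinks.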
B = I ⊗ K + N,   (1)

where ⊗ denotes the convolution operator and N is white noise. The goal of non-blind deconvolution is then to recover the sharp image I when the blur kernel K is given. Although the kernel is known, the problem is still heavily under-constrained. Classical non-blind deconvolution algorithms such as the Wiener filter [20,22] and Tikhonov regularization [9] require very precise knowledge of the blur kernel, and thus any error or noise in the kernel will introduce serious ringing artifacts in the recovered image. Recent works try to suppress these ringing artifacts by imposing additional constraints on the deconvolution procedure. For example, Yuan et al. [24] proposed a multiple bilateral Richardson–Lucy algorithm, Mignotte [14] adopted an adaptive edge-preserving regularization approach, and Joshi et al. [5] used local color statistics to restrict the solution space of deconvolution. To resolve the ill-conditioned nature of problem (1) and suppress ringing artifacts during deconvolution, a widely used approach is to solve the following regularized minimization problem:

min_I ‖B − I ⊗ K‖²_F + λ f(I),   (2)

where λ > 0 is a parameter trading off reconstruction error against regularization strength, and ‖·‖_F is the Frobenius norm. Generally, the regularizer f(·) plays a central role in deconvolution and should be designed carefully according to appropriate models of image priors. This has led to extensive investigation of the statistical characteristics of natural images (e.g., [6,21]). In image restoration problems (including deblurring), the most widely used prior stems from the phenomenon that natural images usually have sparse representations in the derivative domain, i.e., the image gradients are sparse, resulting in the well-known total variation (TV) [17] criterion. In [2,6,7,11,23], a set of variations of TV are established and evaluated in the context of image deblurring. We detail them in Sect. 2.2.
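As a concrete illustration of the forward model (1), the sketch below simulates a blurred observation in Python; the toy image, the 7×7 Gaussian kernel, and the noise level are illustrative choices of ours, not values used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# toy sharp image I: a bright square on a dark background
I = np.zeros((64, 64))
I[20:44, 20:44] = 1.0

# normalized 7x7 Gaussian blur kernel K (illustrative choice)
g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
g /= g.sum()
K = np.outer(g, g)

# model (1): B = I (x) K + N, with N white Gaussian noise
N = 0.01 * rng.standard_normal(I.shape)
B = fftconvolve(I, K, mode='same') + N

# blurring suppresses sharp edges: the largest gradient shrinks
assert np.abs(np.diff(B, axis=1)).max() < np.abs(np.diff(I, axis=1)).max()
```

Recovering I from B and K by naively inverting the convolution amplifies the noise N, which is why the regularized formulation (2) is used instead.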
1.2 This work

Although it is generally accepted that the statistical characteristics of the image gradients play a critical role in deconvolution (and many other image restoration problems), as mentioned above, there is an important aspect ignored by previous works: geometric deformations can suppress sharp gradients and thus blur images, as illustrated in Fig. 1. Furthermore, the statistical characteristics of image gradients can be changed by geometric deformation, as can be seen from Fig. 1c. This means that it is inaccurate to apply existing image prior models without considering the essential geometric structures of images. Previous methods, unfortunately, typically overlook this phenomenon.

The phenomenon illustrated in Fig. 1 also means that the sharp gradients suppressed by geometric deformation can be restored by rectifying the deformed image.¹ Motivated by this observation, we propose to extend the goal of non-blind deconvolution: unlike previous works, which merely target a sharp version, we aim to restore a version of the observed blurred image that is both sharpened and rectified. To this end, we represent the observed blurred image as the convolution of the blur kernel with a deformed sharp image. Unlike the previous model (1), which ignores the underlying image structures and geometric deformation, in our model the sharp rectified image is transformed by a projective transformation before convolution. Given a blurred image and its blur kernel, we need to solve for the rectified sharp image. However, it is difficult to design an effective and elegant objective function for simultaneously inferring the rectified sharp image and the unknown transformation. Fortunately, we show empirically that the tool of transform invariant low-rank textures (TILT) [25] is robust to the blurring effect and can be used to estimate the transformation accurately from the blurred image. With the deformation

¹ Here, as in [25], a "rectified image" is one seen from a frontal viewpoint, with regular patterns.
known, the rectified sharp image can then be recovered by solving an enhanced deconvolution problem, which is usually convex and efficiently solvable by gradient descent algorithms. Experimentally, we test our approach under various conditions, e.g., different noise levels and accurate and/or inaccurate blur kernels. Our results convincingly demonstrate that simultaneously undoing the deformation and deblurring can significantly improve the deblurring results.

In summary, the contributions of this paper include the following:

1. We investigate how to handle the blurring effect caused by geometric deformation. To our knowledge, we are the first to investigate the effect of deformation in image deblurring. Note that previous work such as [19] mostly uses a series of projective transformations to fit the trajectory of a camera motion, rather than modeling the deformation of structured images.
2. We provide a set of algorithms for enhanced non-blind deconvolution and experimentally demonstrate their advantages over existing algorithms.

The rest of the paper is organized as follows. Section 2 introduces our approach for incorporating the geometric structures of images into the deconvolution procedure. Section 3 presents the experiments and results. Section 4 concludes the paper.

2 Enhancing non-blind deconvolution with perspective deformation

To model the generative process of an observed blurred image, the previous model (1) and the associated regularization (2) are most effective only when the structures in the image are precisely rectified. However, that is rarely the case in natural images, and thus it is more appropriate to represent a given blurred image B as the linear convolution of a blur kernel K with a sharp rectified image I deformed by a certain transformation τ. More precisely, we have

B = (I ∘ τ) ⊗ K + N,   (3)
where ∘ denotes a transform of the image domain, τ is the transformation parameter, and N models possible noise in the image. In this work, we assume that τ is a planar projective transformation (a homography). In general, τ has eight variables (assuming the scale is fixed) and can model different types of transformations, including translation, rotation, skew, and perspective. Since 2D translations do not lead to a geometric blurring effect, we ignore the two translation variables and represent τ as a 6-dimensional vector.

Note that, as operators on the image, the domain transformation and the blur convolution do not commute: (I ∘ τ) ⊗ K ≠ (I ⊗ K) ∘ τ. As a consequence, it would be incorrect to first rectify the blurred image and then conduct deconvolution, since that would require us to assume a very different (indeed incorrect) generative model: B ∘ τ⁻¹ = I ⊗ K + N. In fact, according to our model (3), when we undo the deformation τ on the blurred noisy image, we instead have B ∘ τ⁻¹ = ((I ∘ τ) ⊗ K) ∘ τ⁻¹ + N ∘ τ⁻¹, which results in a non-uniform blurring effect, as shown in Fig. 2. Moreover, even if the deformation τ is known, it would be equally inaccurate to first compute a "deblurred" estimate D of I ∘ τ from B using K and then restore I as D ∘ τ⁻¹. This is because the statistical properties of I ∘ τ can deviate from those of I (see Fig. 1), and most image regularizers are more effective on I. As our results will show, this difference can significantly affect the final deblurring results but has been largely ignored by existing deblurring methods. In the following, we show how to recover the transformation parameter τ and then properly restore the rectified, sharp image I.

Fig. 2 Non-uniform blurring effect introduced by undoing the transformation on the blurred image. Left: a sharp image with perspective deformation (I ∘ τ). Middle: the blurred version of the sharp image (B). Right: the rectified version of the blurred image (B ∘ τ⁻¹)

2.1 Estimating deformation by rank minimization

To restore a rectified, sharp version of an observed blurred image, it is necessary to recover the transformation parameter τ. Nevertheless, as mentioned in the Introduction, it is not easy to design a criterion that can both suppress the possible ringing artifacts and benefit the recovery of τ. On the one hand, while the sparsity of image gradients is a good prior for suppressing ringing artifacts, it does not benefit the estimation of the geometric transformation τ. On the other hand, although the low-rank prior used in TILT is effective in estimating the transformation, it tends to suppress sharp edges and favor blurred images. Fortunately, we observe that if the rectified sharp image is assumed to be low-rank, the transformation parameter τ can be accurately recovered by TILT even from rather blurred images.
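This robustness can be sanity-checked numerically. The sketch below is our own construction, not the TILT implementation: it builds a low-rank grid image, blurs it with a separable Gaussian (which preserves the low rank), and compares a TILT-style normalized nuclear norm at the rectified pose against a rotated pose. The grid pattern, blur width, crop window, and angle are all illustrative choices.

```python
import numpy as np
from scipy.ndimage import rotate, gaussian_filter

def normalized_nuclear_norm(A):
    # sum of singular values, scaled by the Frobenius norm so the
    # measure does not simply track overall contrast
    s = np.linalg.svd(A, compute_uv=False)
    return s.sum() / np.sqrt((s ** 2).sum())

def norm_at_angle(img, deg):
    r = rotate(img, deg, reshape=False, order=1)
    return normalized_nuclear_norm(r[32:96, 32:96])  # central crop

# a synthetic low-rank "facade": an axis-aligned grid (rank <= 3)
grid = np.zeros((128, 128))
grid[::16, :] = 1.0
grid[:, ::16] = 1.0
blurred = gaussian_filter(grid, sigma=2.0)  # separable: rank preserved

sharp_0, sharp_20 = norm_at_angle(grid, 0.0), norm_at_angle(grid, 20.0)
blur_0, blur_20 = norm_at_angle(blurred, 0.0), norm_at_angle(blurred, 20.0)

# the rectified pose minimizes the norm, blurred or not
assert sharp_0 < sharp_20 and blur_0 < blur_20
```

This is the behavior exploited below: the location of the minimum is stable under blurring, even though the minimum itself becomes shallower.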
Generally, TILT works as long as the nuclear norm of the image reaches its minimum when the image is rectified. This is typically the case even when the image is blurred. To be more precise, we observe that B ∘ τ̂⁻¹ typically reaches minimal rank when τ̂ ≈ τ, even though ((I ∘ τ) ⊗ K) ∘ τ⁻¹ ≠ I ⊗ K. This fact is verified empirically
(Fig. 3 shows three plots of the nuclear norm versus rotation angle, skew degree, and perspective degree, respectively, each comparing a sharp image with its blurred version.)
Fig. 3 The nuclear norm of I ∘ τ as a function of τ for three typical types of deformation. The values shown are averaged over 100 images. It can be seen that the nuclear norm tends to reach its minimum at the same rectified pose for the sharp images and the blurred ones; the only difference is that the minimum is less pronounced for the blurred images
Fig. 4 An example showing that the blurring effect does not significantly affect the estimation of the transformation parameter τ. a A deformed image. b The rectified version of the deformed image produced by TILT, with estimated parameter τ = [1.0904, −0.1806, −0.4242, 0.9842, −0.0011, −0.0004]. c A blurred, deformed image. d The rectified version of the blurred, deformed image produced by TILT, with estimated parameter τ = [1.0953, −0.1776, −0.4114, 0.9774, −0.0012, −0.0004]. Notice that the difference between the two estimates is very small
in Fig. 3 using nearly 100 images, each of which reaches minimal rank in its rectified pose. Therefore, we can obtain an accurate estimate of τ by directly applying TILT to the observed blurred image B, without computing the rectified sharp image I first. Namely, τ can be estimated by solving

τ̂ = arg min_{τ,A,E} ‖A‖_* + α‖E‖_1,  s.t.  B ∘ τ⁻¹ = A + E,   (4)

where τ⁻¹ is the inverse of τ, ‖·‖_* is the nuclear norm (i.e., the sum of the singular values of a matrix), and ‖·‖_1 denotes the ℓ1 norm of a matrix. Figure 4 further illustrates with a concrete example that TILT can accurately estimate the transformation parameter τ from a blurred image (of low-rank structure).

2.2 Non-blind deconvolution with deformation

Provided that the transformation parameter τ has been estimated, according to the model (3) the sharp and rectified image I can be recovered by minimizing

min_I  (1/2)‖(I ∘ τ) ⊗ K − B‖²_F + λ f(I),   (5)

where f(I) is the same regularizer as discussed in (2), encoding certain natural image priors. The above optimization problem can be efficiently solved by a gradient descent scheme. Indeed, the problem is convex if f(I) is a convex regularizer.

2.2.1 Gradient of the objective function

Let vec(·) denote the vectorization of a matrix or image; then

vec((I ∘ τ) ⊗ K) = P_K P_τ z,   (6)

where z = vec(I) is the vectorization of the image I, and P_K and P_τ are two matrices (i.e., linear operators) corresponding to the operators K and τ, respectively. In this way, it is easy to see that

(1/2)‖(I ∘ τ) ⊗ K − B‖²_F = (1/2)‖P_K P_τ z − b‖²_2,   (7)

where ‖·‖_2 denotes the ℓ2-norm of a vector and b = vec(B) is the vectorization of the observed image B. It is simple to
calculate that the gradient of the above function (with respect to the variable z) is given by P_τ^T P_K^T (P_K P_τ z − b). According to [19], applying the operator P_τ^T to vec(M) is equivalent to applying τ⁻¹ to M, with M being any matrix. Also, it can be verified that P_K^T vec(M) = vec(M ⊗ K*) holds for any matrix M, where K* is the conjugate (flipped) kernel of K.² Hence, the gradient of the objective function (5) is given by

(((I ∘ τ) ⊗ K − B) ⊗ K*) ∘ τ⁻¹ + λ∇f(I),   (8)

where the computation of ∇f(I) depends on the choice of the regularizer f(I).

2.2.2 Different image priors and their gradients

In this work, we are interested in TV and its variations, including the standard TV used in [2,17], the ℓ1 prior evaluated in [23], the ℓ1/ℓ2 (i.e., normalized ℓ1) prior established by [7], and the hyper-Laplacian (HL) prior proposed in [6]. In the following, we detail them one by one:

1. The standard TV regularizer used in [2] is the quantity

f(I) = ∫ √((I_x)² + (I_y)²) dx dy,   (9)

where I_x and I_y are the image gradients along the x-axis and y-axis, respectively. The corresponding gradient is given by

∇f(I) = div( −I_x/√((I_x)² + (I_y)²), −I_y/√((I_x)² + (I_y)²) ),   (10)

where div(·) denotes the divergence of a field.

2. The ℓ1 constraint (i.e., sparsity prior) adopted by [23] is as follows:

f(I) = ∫ (|I_x| + |I_y|) dx dy.   (11)

The corresponding gradient is computed by

∇f(I) = div( −I_x/|I_x|, −I_y/|I_y| ).   (12)

3. Krishnan et al. [6] observed that the gradients of natural images do not exactly follow the Laplacian distribution, but instead a hyper-Laplacian (HL) distribution, resulting in the following regularizer:

f(I) = ∫ (|I_x|^α + |I_y|^α) dx dy,   (13)

where α usually lies in the range [0.5, 0.8]. The gradient of the above function is

∇f(I) = div( −α|I_x|^{α−1} I_x/|I_x|, −α|I_y|^{α−1} I_y/|I_y| ).   (14)

4. In [7], the authors sought a prior that can distinguish between sharp and blurred images, resulting in the so-called ℓ1/ℓ2 constraint:

f(I) = ∫ (|I_x| + |I_y|)/√((I_x)² + (I_y)²) dx dy.   (15)

The corresponding gradient is computed by

∇f(I) = div( −sign(I_x)/√((I_x)² + (I_y)²) + (|I_x| + |I_y|) I_x/((I_x)² + (I_y)²)^{3/2}, −sign(I_y)/√((I_x)² + (I_y)²) + (|I_x| + |I_y|) I_y/((I_x)² + (I_y)²)^{3/2} ),   (16)

where sign(·) is the signum function.

By incorporating the above expressions for ∇f(I) into (8), we obtain the gradient of the objective function (5) for the various choices of f(I). In this way, the enhanced non-blind deconvolution problem can generally be solved by gradient descent. Algorithm 1 summarizes the whole process. The most computationally expensive step in our algorithm is the convolution operator, which, fortunately, can be made very efficient by the fast Fourier transform.

Algorithm 1 Enhanced Non-Blind Deconvolution
1: Input: blurred image B and blur kernel K.
2: Estimate the transformation parameter τ by TILT.
3: Solve problem (5) by gradient descent.
4: Output: the sharp and rectified image I.

² Suppose the size of K is m-by-n; then the (i, j)-th element of K* is given by K*(i, j) = K(m − i, n − j).
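As a concrete illustration, the deconvolution step of Algorithm 1 can be sketched in Python. For brevity the sketch fixes τ to the identity (i.e., the image is assumed already rectified), uses a smoothed version of the ℓ1 prior of eq. (11) so that the gradient (12) is defined everywhere, and picks illustrative values for λ, the step size, the smoothing constant, and the iteration count; none of these settings are prescribed by the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def deconv_gd(B, K, lam=1e-4, step=0.8, iters=300, eps=1e-3):
    """Gradient descent for problem (5) with tau fixed to the identity:
        min_I 0.5*||I (x) K - B||_F^2 + lam * f(I),
    with f the smoothed l1 (sparse-gradient) prior of eq. (11)."""
    Kf = K[::-1, ::-1]                 # K*: the flipped kernel (footnote 2)
    I = B.copy()
    for _ in range(iters):
        resid = fftconvolve(I, K, mode='same') - B
        data_grad = fftconvolve(resid, Kf, mode='same')  # eq. (8), tau = id
        Ix = np.diff(I, axis=1, append=I[:, -1:])        # forward differences
        Iy = np.diff(I, axis=0, append=I[-1:, :])
        px = Ix / np.sqrt(Ix ** 2 + eps)                 # smoothed sign(Ix)
        py = Iy / np.sqrt(Iy ** 2 + eps)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        I = I - step * (data_grad - lam * div)           # grad f = -div(...)
    return I

# usage: blur a toy image with a Gaussian kernel, then deconvolve
I_true = np.zeros((48, 48)); I_true[12:36, 12:36] = 1.0
g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2); g /= g.sum()
K = np.outer(g, g)
B = fftconvolve(I_true, K, mode='same')
I_hat = deconv_gd(B, K)
mse = lambda a, b: np.mean((a - b) ** 2)
assert mse(I_hat, I_true) < mse(B, I_true)  # closer to the truth than B
```

The full method additionally composes the warp τ and its inverse around the data term, as in eq. (8); implementing that warp operator (and its adjoint) is the only extra machinery needed beyond this sketch.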
3 Experimental results

To verify the significance of incorporating geometric deformation in the deconvolution procedure, we perform a set of experiments in three settings: both K and τ given; K given but τ unknown; and both K and τ unknown.
Fig. 5 The 4 sharp images and 8 blur kernels used for the experiments (with known K and τ )
Fig. 6 Quantitative comparisons of deconvolution results, with known K and τ. a ℓ1 versus T-ℓ1. b HL versus T-HL. c ℓ1/ℓ2 versus T-ℓ1/ℓ2. d TV versus T-TV. The numbers shown are averaged over 32 trials. The parameter λ of each algorithm is manually tuned to the best value at each noise level
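The SNR values reported in Fig. 6 follow the metric defined in eq. (17) below; a minimal implementation with a hand-checked value (the sample arrays are ours, purely for the self-check):

```python
import numpy as np

def snr_db(I_hat, I_true):
    # eq. (17): 10*log10( ||Ihat - mean(Ihat)||_F^2 / ||Ihat - I||_F^2 )
    signal = np.sum((I_hat - I_hat.mean()) ** 2)
    error = np.sum((I_hat - I_true) ** 2)
    return 10.0 * np.log10(signal / error)

# hand-checked example: signal energy 2, error energy 0.02 -> 20 dB
I_hat = np.array([[1.0, -1.0]])
I_true = np.array([[0.9, -0.9]])
assert abs(snr_db(I_hat, I_true) - 20.0) < 1e-6
```

Higher is better: the numerator measures the contrast of the recovered image and the denominator its deviation from the ground truth.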
3.1 Results with known K and τ

As mentioned in Sect. 2, a conventional approach is to first obtain a "deblurred" estimate D of I ∘ τ, by using existing non-blind deconvolution algorithms to directly deblur B with K, and then restore the rectified, sharp image I as D ∘ τ⁻¹. Unlike our approach, which incorporates the transformation parameter into the deconvolution procedure, here the rectification is performed after deconvolution. We use this as a natural baseline against which to compare our method.

We use a test set of 4 sharp rectified images I and 8 blur kernels K, shown in Fig. 5, and thus create a total of 32 blurred and deformed images according to the model (3). The transformation parameters are randomly generated, and the noise N is simulated as i.i.d. Gaussian noise with zero mean and standard deviation 0.1σ (σ stands for the noise level). For a fair comparison, in this experiment we assume that both the blur kernel K and the transformation τ are known, and adopt the widely used signal-to-noise ratio (SNR) metric to measure the quality of the deconvolution results:

SNR = 10 log₁₀ ( ‖Î − μ(Î)‖²_F / ‖Î − I‖²_F ),   (17)

where I is the original sharp, rectified image (i.e., the ground truth), Î is the image recovered by an algorithm, and μ(Î) denotes the average intensity of Î. We consider all four regularizers discussed in Sect. 2.2, resulting in four baselines:³ TV, ℓ1, HL (with α = 2/3), and ℓ1/ℓ2. Our enhanced algorithms are referred to as T-TV, T-ℓ1, T-HL, and T-ℓ1/ℓ2 accordingly. Figure 6 shows a quantitative comparison of the two methods. It can be seen that our method consistently and significantly outperforms the conventional approach in terms of the SNR of the recovered images. Figure 7 shows some examples, from which one can see that the images recovered by our method are much sharper. This comparison clearly testifies to the significant advantage of incorporating the deformation of images into the deconvolution procedure.

³ To ensure a fair comparison, all baselines are also implemented by using gradient descent to solve (2).

3.2 Results with known K and unknown τ

To verify the effectiveness of our enhanced non-blind deconvolution Algorithm 1, we experiment with some naturally
deformed images downloaded from the Web. The blurring effect is manually created using the blur kernels shown in Fig. 5. For the competing baseline, the result is rectified with the same τ estimated for our method. Figure 8 shows some examples. It can be seen that the images recovered by our algorithm are sharper than those of the conventional method, especially in image regions rich in sharp edges (the difference is best seen in the electronic version). These results verify the effectiveness of our approach under practical conditions where the transformation τ is unknown.

Fig. 7 Comparisons of deconvolution results, with known K and τ. Left: the input blurred image. Middle: the result of direct deconvolution followed by rectification. Right: the result of our method. In these experiments, the image regularizer f(I) is chosen as HL
Fig. 8 Comparisons of deconvolution results, with known K and unknown τ. Left: the input blurred image. Middle: the result of direct deconvolution followed by rectification. Right: the result of our method. In these experiments, the image regularizer f(I) is chosen as HL

Fig. 9 Comparisons of deconvolution results, with unknown K and τ. Left: the input blurred image. Middle: the result of direct deconvolution followed by rectification. Right: the result of our method. In these experiments, the image regularizer f(I) is chosen as HL
3.3 Results with unknown K and τ

In practice, the blur kernel might also be unknown and need to be estimated by blind kernel-estimation algorithms. Since the estimated blur kernel might be inaccurate, it is necessary to examine the performance of our approach in this case. Here, we simply adopt the publicly available algorithm described in [23] to estimate the blur kernel and then produce the final deconvolution results with our Algorithm 1. Figure 9 shows some examples, which show noticeable improvements of our results over those of the conventional baseline (the difference is best seen in the electronic version). These experiments also show that our method can work with existing blur kernel estimation techniques (provided by almost all blind deblurring methods) and can help improve the deblurring results even when the blur kernel is not particularly accurate.
4 Conclusions

In this paper, we have studied the non-blind deconvolution problem in cases where the images undergo geometric deformation. We have provided ample evidence showing that, especially for structured images, the deformation can significantly affect the deblurring results if not properly handled. We propose a two-stage approach to properly incorporate deformation into the deconvolution process. First, based on the structures of the image, we estimate the underlying transformation parameter by TILT, which is insensitive to the blurring effect. Second, based on the estimated transformation parameter, we recover a sharpened and rectified version of a given blurred image by solving an enhanced deconvolution problem. Extensive experiments verify that this produces significant improvements in deblurring results over conventional approaches.

Acknowledgments X. Zhang and F. Sun are supported by the National Basic Research Program (973 Program) of China (No. 2013CB329403). Yi Ma is partially supported by ONR N00014-09-1-0230, NSF CCF 09-64215, and NSF IIS 11-16012.
References

1. Cai, J.-F., Ji, H., Liu, C., Shen, Z.: Blind motion deblurring from a single image using sparse approximation. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 104–111 (2009)
2. Chan, T.F., Wong, C.-K.: Total variation blind deconvolution. IEEE Trans. Image Process. 7(3), 370–375 (1998)
3. Cho, S., Lee, S.: Fast motion deblurring. ACM Trans. Graphics 28(5), 145:1–145:8 (2009)
4. Fergus, R., Singh, B., Hertzmann, A., Roweis, S.T., Freeman, W.: Removing camera shake from a single photograph. ACM Trans. Graphics 25, 787–794 (2006)
5. Joshi, N., Zitnick, C.L., Szeliski, R., Kriegman, D.J.: Image deblurring and denoising using color priors. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1550–1557 (2009)
6. Krishnan, D., Fergus, R.: Fast image deconvolution using hyper-Laplacian priors. In: Neural Information Processing Systems, pp. 1033–1041 (2009)
7. Krishnan, D., Tay, T., Fergus, R.: Blind deconvolution using a normalized sparsity measure. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 233–240 (2011)
8. Kundur, D., Hatzinakos, D.: Blind image deconvolution. IEEE Signal Process. Mag. 13(3), 43–64 (1996)
9. Landi, G., Piccolomini, E.L.: An improved Newton projection method for nonnegative deblurring of Poisson-corrupted images with Tikhonov regularization. Numer. Alg. 60(1), 169–188 (2012)
10. Levin, A.: Blind motion deblurring using image statistics. In: Neural Information Processing Systems, pp. 841–848 (2006)
11. Levin, A., Fergus, R., Durand, F., Freeman, W.T.: Image and depth from a conventional camera with a coded aperture. ACM Trans. Graphics 26(3), 70 (2007)
12. Levin, A., Weiss, Y., Durand, F., Freeman, W.T.: Understanding blind deconvolution algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2354–2367 (2011)
13. Liu, G., Ma, Y.: Blind image deblurring by spectral properties of convolution operators. arXiv:1209.2082 (2012)
14. Mignotte, M.: A segmentation-based regularization term for image deconvolution. IEEE Trans. Image Process. 15(7), 1973–1984 (2006)
15. Nagy, J.G., Palmer, K., Perrone, L.: Iterative methods for image deblurring: a Matlab object-oriented approach. Numer. Alg. 36(1), 73–93 (2004)
16. Richardson, W.H.: Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 62(1), 55–59 (1972)
17. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60(1–4), 259–268 (1992)
18. Shan, Q., Jia, J., Agarwala, A.: High-quality motion deblurring from a single image. ACM Trans. Graphics 27(3), 73:1–73:10 (2008)
19. Tai, Y.-W., Tan, P., Brown, M.S.: Richardson–Lucy deblurring for scenes under a projective motion path. IEEE Trans. Pattern Anal. Mach. Intell. 33(8), 1603–1618 (2011)
20. Tsumuraya, F.: Deconvolution based on the Wiener–Lucy chain algorithm: an approach to recover local information losses in the deconvolution procedure. J. Opt. Soc. Am. A 13(7), 1532–1536 (1996)
21. Weiss, Y., Freeman, W.T.: What makes a good model of natural images? In: IEEE Conference on Computer Vision and Pattern Recognition (2007)
22. Wiener, N.: Extrapolation, Interpolation, and Smoothing of Stationary Time Series. The MIT Press (1964)
23. Xu, L., Jia, J.: Two-phase kernel estimation for robust motion deblurring. In: European Conference on Computer Vision, pp. 157–170 (2010)
24. Yuan, L., Sun, J., Quan, L., Shum, H.-Y.: Progressive inter-scale and intra-scale non-blind image deconvolution. ACM Trans. Graphics 27(3), 74:1–74:10 (2008)
25. Zhang, Z., Ganesh, A., Liang, X., Ma, Y.: TILT: transform invariant low-rank textures. Int. J. Comput. Vis. 99(1), 1–24 (2012)
Xin Zhang was born in Shaanxi, China. She received her bachelor's degree in 2005. She is now a PhD student in the Department of Computer Science and Technology at Tsinghua University, China. Currently, she is working on computer vision, image processing, and pattern recognition.
Fuchun Sun is currently a Professor with the Department of Computer Science and Technology, Tsinghua University, Beijing, China. His research interests include intelligent control, networked control systems and management, neural networks, fuzzy systems, nonlinear systems, and robotics. Dr. Sun was the recipient of the Excellent Doctoral Dissertation Prize of China in 2000 and the Choon-Gang Academic Award of Korea in 2003, and was recognized as a Distinguished Young Scholar in 2006 by the National Science Foundation of China. He has been a member of the Technical Committee on Intelligent Control of the IEEE Control Systems Society since 2006.
Guangcan Liu received his B.S. degree in mathematics and Ph.D. degree in computer science and engineering from Shanghai Jiao Tong University, Shanghai, China, in 2004 and 2010, respectively. Between 2006 and 2009, he was a visiting student with the Visual Computing Group, Microsoft Research Asia. He is currently a Postdoctoral Research Fellow with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore. His research interests include machine learning and computer vision.
Yi Ma received two Bachelor's degrees, in Automation and in Applied Mathematics, from Tsinghua University, Beijing, China, in 1995. He received a Master's degree in Electrical Engineering and Computer Sciences (EECS) in 1997, a second Master's degree in Mathematics in 2000, and the Ph.D. degree in EECS in 2000, all from the University of California at Berkeley. He is currently an associate professor (with tenure) in the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, and since January 2009 has also served as research manager for the Visual Computing Group at Microsoft Research Asia, Beijing, China.