Circuits Syst Signal Process, DOI 10.1007/s00034-013-9714-0

A New Algorithm Based on Linearized Bregman Iteration with Generalized Inverse for Compressed Sensing

Tiantian Qiao · Weiguo Li · Boying Wu

Received: 7 January 2013 / Revised: 8 November 2013
© Springer Science+Business Media New York 2013
Abstract  This paper aims to solve the basis pursuit problem min_{u∈R^n} { ‖u‖_1 : Au = f } in compressed sensing when A is an arbitrary matrix. There are two contributions in this paper. First, we provide a simplified iterative formula that combines the original linearized Bregman iteration with a soft threshold and an iterative formula for the generalized inverse of the matrix A ∈ R^{m×n}, and we also discuss its convergence. Compared to the original linearized Bregman method, the proposed simplified scheme possesses an obvious advantage: the workload and computing time are greatly reduced, especially when m ≪ n and rank(A) < m. However, high accuracy cannot be achieved, so the number of inner loops must be selected properly to balance the workload and the accuracy of the simplified iterative formula. Second, we propose a new chaotic iterative algorithm based on the simplified iteration. Under the same number of iterations, the computing time of the simplified iteration with q = 1 (q is the number of inner loops) is almost the same as that of the chaotic method, but the precision of the latter is better because it utilizes more information; moreover, the accuracy of the chaotic method reaches that of the A† linearized Bregman iteration, while its computing time is less than one half of the latter's. In conclusion, the calculating efficiency of our two methods as
This research was partly supported by the Fund of Oceanic Telemetry Engineering and Technology Research Center, State Oceanic Administration (Grant No. 2012003), NSFC (Grant Nos. 61101208, 61002048), and the Fundamental Research Funds for the Central Universities (Grant No. 13CX02086A).
T. Qiao (corresponding author) · W. Li
College of Science, China University of Petroleum, Qingdao 266580, PR China
e-mail: [email protected]

W. Li
e-mail: [email protected]

B. Wu
Department of Mathematics, Harbin Institute of Technology, Harbin 150001, PR China
e-mail: [email protected]
regards the A† iteration is improved, and the chaotic iteration in particular is more competitive. Numerical results on compressed sensing are presented that demonstrate that our methods can be significantly more effective than the original linearized Bregman iterations, even when the matrix A is ill-conditioned.

Keywords  Chaotic iterative · Generalized inverse · Linearized Bregman iteration · Compressed sensing · Sparse solution
1 Introduction

The basis pursuit problem [6, 13] solves the constrained minimization problem

    min_{u∈R^n} { ‖u‖_1 : Au = f },    (1)

which is to find an l_1-minimal solution of the linear system Au = f, where A ∈ R^{m×n}, f ∈ R^m, and u ∈ R^n. When A is an arbitrary matrix with m < n, a solution of the linear system Au = f need not exist, and in certain applications the measurement f may contain noise; we then need to compute its least-squares solution. Throughout this paper, A ∈ R^{m×n} with m ≤ n is an arbitrary matrix, unless otherwise specified.

The basis pursuit problem (1) arises in applications of compressed sensing. In particular, there has been a recent burst of research in compressive sensing, which involves solving problem (1), led by Candès et al. [4, 5], Donoho [7], and others; see [1, 2, 8, 9, 17]. The theory of compressed sensing guarantees that the sparsest solution of Au = f can be obtained by solving (1) under certain conditions on the dense matrix A and the sparsity of u. However, general-purpose linear programming algorithms are inefficient for this problem. Therefore, the linearized Bregman iteration with a soft threshold was proposed in [17], and its convergence was analyzed in [1, 2, 12]. Unfortunately, those results hold only when A is surjective. Subsequently, Cai et al. [3] discussed the linearized Bregman iteration with a soft threshold for an arbitrary matrix A and its properties. Recently, Zhang et al. [18] gave the corresponding convergence result, which inspired our iterative scheme.

In this paper, we propose a simplified strategy and a new chaotic algorithm for the basis pursuit problem (1) in compressed sensing. They are based on the linearized Bregman iteration together with a soft threshold and an iterative formula for the generalized inverse of A ∈ R^{m×n} acting on a vector, where A is an arbitrary matrix. Moreover, we obtain satisfying results in theory and practice.

The rest of the paper is organized as follows. In Sect. 2, we summarize the existing methods for solving the constrained problem (1) and some background about the generalized inverse. The simplified algorithm based on the linearized Bregman iteration with an iterative formula for the generalized inverse acting on a vector is described, and its convergence behavior is analyzed, in Sect. 3. The new chaotic iteration is presented in Sect. 4. Corresponding numerical results are shown in Sect. 5. Finally, we draw some conclusions in Sect. 6.
2 Preliminaries

2.1 Generalized Inverse

We are interested in the iterative formula for the generalized inverse and its convergence results, because it is used by our new algorithm. Before the detailed discussion, we first give the following definitions and lemmas.

Definition 1 [15] Let A ∈ C^{m×n}. A matrix X satisfying the Moore–Penrose conditions

    1. AXA = A,  2. XAX = X,  3. (AX)* = AX,  4. (XA)* = XA,    (2)

is called the pseudoinverse of A and is denoted by A†.

Definition 2 [15] If a matrix V ∈ C^{n×m} satisfies condition 1 of (2), then V is called an inner inverse, or {1}-inverse, of A.

Remark 1 The inner inverse is not unique. In general, an inner inverse of the matrix A is denoted by A^−.

Definition 3 [15] Let A, B ∈ C^{n×m}. The set

    μ(A, B) = { X | X = AYB, Y ∈ C^{m×n} }    (3)

is called the range of (A, B).

Lemma 1 [14] Let A ∈ C^{m×n}, A ≠ 0, and let the initial matrix V_0 satisfy

    V_0 ∈ μ(A*, A*),    (4)

    ρ(I − AV_0) < 1,    (5)

where I is the m × m identity matrix and ρ(·) denotes the spectral radius. Then the sequence {V_q}_{q∈N} generated by

    V_{q+1} = V_q + V_0 (I − A V_q),  q = 0, 1, 2, . . . ,    (6)

converges to A†.

Lemma 2 [14] For any A ∈ C^{m×n},

    A* A (A* A)^− A* = A*.    (7)
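As an illustration of the iteration (6), the following sketch in Python/NumPy (the paper's own experiments use MATLAB) computes an approximate pseudoinverse; the initial value V_0 = αA^T with α = 1/‖A‖_2², which is used later in the paper and lies inside the admissible range (0, 2/‖A‖_2²), is an assumption of this sketch, not part of Lemma 1 itself.

```python
import numpy as np

def pinv_by_iteration(A, num_iters=500):
    """Approximate A^dagger by (6): V_{q+1} = V_q + V_0 (I - A V_q).

    V_0 = alpha * A^T with 0 < alpha < 2/||A||_2^2 satisfies (4)-(5),
    so the sequence {V_q} converges to the Moore-Penrose inverse.
    """
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # safely inside (0, 2/sigma_1(A)^2)
    V0 = alpha * A.T
    V = V0.copy()
    I = np.eye(A.shape[0])
    for _ in range(num_iters):
        V = V + V0 @ (I - A @ V)              # one step of (6)
    return V
```

For a well-conditioned random matrix, a few thousand (cheap) iterations already agree with an SVD-based pseudoinverse to many digits.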
2.2 Linearized Bregman Iteration

First, we recall the concept of the Bregman distance. The Bregman distance associated with a convex function J at the point v is

    D_J^p(u, v) = J(u) − J(v) − ⟨p, u − v⟩,

where p is in the subgradient of J at v. This is not a distance in the usual sense because it is in general not symmetric. However, it does measure closeness in the sense that D_J^p(u, v) ≥ 0, and D_J^p(u, v) ≥ D_J^p(w, v) for w on the line segment between u and v. With this notion of distance, the linearized Bregman iteration for solving (1) is generated in [1] by

    u^{k+1} = argmin_u { μ D_J^{p^k}(u, u^k) + (1/(2δ)) ‖u − (u^k − δ A^T (A u^k − f))‖² },
    p^{k+1} = p^k − (1/(μδ)) (u^{k+1} − u^k) − (1/μ) A^T (A u^k − f),  p^k ∈ ∂J(u^k),    (8)

where J(u) = ‖u‖_1, δ is a constant, and p^0 = u^0 = 0. Hereafter, we use ‖·‖ ≡ ‖·‖_2 to denote the l_2 norm. Moreover, the limit of {u^k}_{k∈N} is the unique solution of

    min_{u∈R^n} { μ‖u‖_1 + (1/(2δ))‖u‖² : Au = f }.    (9)

Though (9) is not the same as (1), (9) approximates (1) as μ → ∞. Algorithm (8) can be rewritten as

    v^{k+1} = v^k + A^T (f − A u^k),
    u^{k+1} = δ T_μ(v^{k+1}),    (10)

where u^0 = v^0 = 0, and

    T_λ(ω) := (t_λ(ω(1)), t_λ(ω(2)), . . . , t_λ(ω(n)))^T    (11)

is the soft thresholding operator [17] with

    t_λ(ξ) = 0 if |ξ| ≤ λ;  t_λ(ξ) = sgn(ξ)(|ξ| − λ) if |ξ| > λ.    (12)
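To make (10)-(12) concrete, here is a minimal sketch in Python/NumPy (the paper's code is MATLAB); the orthonormal-row test matrix, step size, and parameter values in the usage below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def soft_threshold(w, lam):
    """Componentwise shrinkage t_lambda of (12), applied as T_lambda in (11)."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def linearized_bregman(A, f, mu, delta, num_iters=5000):
    """Linearized Bregman iteration (10) with u^0 = v^0 = 0."""
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(num_iters):
        v = v + A.T @ (f - A @ u)           # v^{k+1} = v^k + A^T (f - A u^k)
        u = delta * soft_threshold(v, mu)   # u^{k+1} = delta * T_mu(v^{k+1})
    return u
```

For a matrix with orthonormal rows one may take δ = 1, which lies inside the step-size range of Theorem 1 below since ‖AA^T‖ = 1.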
Since A may be an arbitrary matrix, the convergence of (10) is proved in [3]. On this basis, Cai et al. [3] generalize algorithm (10) to

    f^{k+1} = f^k + f − A u^k,
    u^{k+1} = δ T_μ(A† f^{k+1}),    (13)
where A† is the Moore–Penrose generalized inverse of A, and they prove the convergence of (13) when A is surjective. Subsequently, Zhang et al. [18] proved the convergence of (13) for an arbitrary matrix A. The advantage of (13) relative to (10) is that it solves a preconditioned version of the equation; this not only improves the convergence speed but also makes the method usable when the equation is ill-conditioned. For details, readers may see the proof of Theorem 2.3 in [3] and its comment.

Theorem 1 [3] Let A ∈ R^{m×n}, m ≤ n, be an arbitrary matrix. Then the sequence {u^k}_{k∈N} generated by (10) with 0 < δ < 2/‖AA^T‖ converges to the unique solution of

    min_{u∈R^n} { μ‖u‖_1 + (1/(2δ))‖u‖² : u = argmin_{u∈R^n} ‖Au − f‖² }.    (14)

Furthermore, when μ → ∞, the limit of (10) tends to the solution of

    min_{u∈R^n} { ‖u‖_1 : u = argmin_{u∈R^n} ‖Au − f‖² }    (15)

that has the minimal l_2 norm among all the solutions of (15).

Theorem 2 [3] Assume that A ∈ R^{m×n}, m ≤ n, is surjective. Then the sequence {u^k}_{k∈N} generated by (13) with 0 < δ < 1 converges to the unique solution of (9), or equivalently, of

    min_{u∈R^n} { μ‖u‖_1 + (1/(2δ))‖u − A†f‖² : Au = f }.    (16)

Furthermore, when μ → ∞, the limit of (13) tends to the solution of (1) that is closest to the minimal l_2-norm solution of Au = f among all the solutions of (1).

Theorem 3 [18] Let A ∈ R^{m×n}, m ≤ n, be an arbitrary matrix. Then the sequence {u^k}_{k∈N} generated by (13) with 0 < δ < 1 converges to the unique solution of (14). Furthermore, when μ → ∞, the limit of {u^k}_{k∈N} tends to the solution of (15) which has the minimal l_2 norm among all the solutions of (15).
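A direct sketch of iteration (13) in Python/NumPy follows, with numpy.linalg.pinv playing the role of MATLAB's pinv; the parameter values and random test data in the usage below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(w, lam):
    """Componentwise shrinkage t_lambda of (12)."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def bregman_pinv(A, f, mu, delta, num_iters=2000):
    """A^dagger linearized Bregman iteration (13) with f^0 = 0, u^0 = 0."""
    A_dag = np.linalg.pinv(A)       # Moore-Penrose inverse, computed once via SVD
    fk = np.zeros_like(f)
    u = np.zeros(A.shape[1])
    for _ in range(num_iters):
        fk = fk + (f - A @ u)                          # f^{k+1} = f^k + f - A u^k
        u = delta * soft_threshold(A_dag @ fk, mu)     # u^{k+1} = delta T_mu(A^dagger f^{k+1})
    return u
```

The SVD is the dominant cost here, which is exactly the motivation for the schemes of Sects. 3 and 4.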
3 Simplified Linearized Bregman Method with an Iterative Formula for A†

For (13), we note that computing A† involves a singular value decomposition (e.g., in MATLAB). Therefore, we consider another way of computing A†. Solving for the generalized inverse by an iterative method is an effective approach, so we want to combine (13) with (6). However, computing the generalized inverse by an iterative method involves matrix-matrix products, so the workload is relatively large. Since A†f^{k+1} in (13) is actually a vector, we propose a new iteration that needs only products of a matrix and a vector.
Iterative Scheme 1:

    f^{k+1} = f^k + f − A u^k,
    y^{k,q+1} = y^{k,q} + y^{k,0} − V_0 A y^{k,q},
    u^{k+1} = δ T_μ(y^{k+1}),    (17)

where y^{k,0} = V_0 f^{k+1}, q = 0, 1, . . . , l_k − 1, y^{k+1} = y^{k,l_k}, and k = 0, 1, 2, . . . .

Obviously, (17) has an inner-outer structure. The inner iteration is obtained by applying every term of (6) to the vector f^{k+1}, i.e., setting y^{k,q} = V_q f^{k+1}. The outer iteration uses y^{k+1} in place of A†f^{k+1} in (13). According to the required accuracy, we can choose the number l_k of inner iterations. To explain the rationality and availability of the algorithm, we give its convergence analysis. First, we prove the convergence of the iterative formula for A†. The ideas of the proof are similar to those of Theorem 2.3 in [11].

Theorem 4 Let A ∈ C^{m×n} be an arbitrary nonzero matrix. If the initial value is V_0 = αA* with 0 < α < 2/σ_1(A)², where σ_1(A) = ‖A‖_2 and A* is the conjugate transpose of A, then the sequence {V_q}_{q∈N} generated by (6) converges to A†.

Proof Since V_0 = αA*, let Y = αA(A*A)^−; it is easy to see that V_0 = A*YA* ∈ μ(A*, A*) by Lemma 2. For 0 < α < 2/‖A‖², the eigenvalues of the matrix αAA* satisfy

    0 ≤ λ_j(αAA*) < 2,  j = 1, 2, . . . , m,    (18)

and the λ_j(αAA*) are all semi-simple. Obviously, there exists a unitary matrix Q such that

    Q AA* Q* = [ 0_{m−r}  0 ; 0  Λ_r ],    (19)

where Λ_r = diag(σ_1², σ_2², . . . , σ_r²) and σ_j² > 0, j = 1, 2, . . . , r. Hence the nonzero eigenvalues of I − AV_0 = I − αAA* are among the values 1 − ασ_j², j = 1, 2, . . . , r. Thus, by |1 − ασ_j²| < 1, we obtain ρ(I − AV_0) < 1. From Lemma 1, we get the conclusion.

Subsequently, we prove the following convergence theorem.

Theorem 5 Let A ∈ R^{m×n}, m < n, be an arbitrary matrix, and take the initial value V_0 = αA^T with 0 < α < 2/‖A‖². Then the sequence {u^k}_{k∈N} generated by (17) with 0 < δ < 1 converges to the unique solution of (14). Furthermore, when μ → ∞, the limit of {u^k}_{k∈N} tends to the solution of (15) which has the minimal l_2 norm among all the solutions of (15).
Proof Comparing the iterative formula (13) with (17), by the conclusion of Theorem 3, to obtain the convergence of (17) we only need to prove

    ‖y^{k,q+1} − A† f^{k+1}‖ → 0,  as k → ∞, q → ∞.    (20)

Since y^{k,q} = V_q f^{k+1}, q = 0, 1, 2, . . . , the recursion

    y^{k,q+1} = y^{k,q} + y^{k,0} − V_0 A y^{k,q}    (21)

is equivalent to

    V_{q+1} f^{k+1} = (V_q + V_0 − V_0 A V_q) f^{k+1}.    (22)

Thus

    ‖y^{k,q+1} − A† f^{k+1}‖ = ‖V_{q+1} f^{k+1} − A† f^{k+1}‖
                             = ‖(V_q + V_0 − V_0 A V_q − A†) f^{k+1}‖
                             ≤ ‖V_q + V_0 − V_0 A V_q − A†‖ ‖f^{k+1}‖.    (23)

According to Theorem 4, we know that

    ‖V_q + V_0 − V_0 A V_q − A†‖ → 0,  as q → ∞,    (24)

where V_0 = αA^T, 0 < α < 2/‖A‖². Again, since 0 < δ < 1, from the convergence analysis of Theorem 3 we know that the sequence {f^{k+1}} is bounded as k → ∞. Therefore, the convergence of the sequence {u^k}_{k∈N} generated by (17) is proved.
The simplified strategy effectively avoids matrix multiplication and singular value decomposition, so it accelerates the computation. At the same time, the iterative algorithm preserves the advantages of the linearized Bregman iteration. Considering the requirements of computational efficiency, it is not advisable to choose too many inner iterations; therefore, high accuracy cannot be achieved.
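Iterative Scheme 1 can be sketched as follows in Python/NumPy, under the same illustrative assumptions as before (α = 1/‖A‖_2², a fixed number q of inner steps per outer step, random test data); since the inner loop restarts from y^{k,0} = V_0 f^{k+1}, after q steps one has y^{k+1} = V_q f^{k+1}.

```python
import numpy as np

def soft_threshold(w, lam):
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def scheme1(A, f, mu, delta, q=1, num_iters=2000):
    """Iterative Scheme 1 (17): only matrix-vector products, no SVD."""
    m, n = A.shape
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2
    V0 = alpha * A.T                     # initial value of Theorem 5
    fk = np.zeros(m)
    u = np.zeros(n)
    for _ in range(num_iters):
        fk = fk + (f - A @ u)            # outer update: f^{k+1} = f^k + f - A u^k
        y0 = V0 @ fk                     # y^{k,0} = V_0 f^{k+1}
        y = y0.copy()
        for _ in range(q):               # inner loop: y <- y + y^{k,0} - V_0 A y
            y = y + y0 - V0 @ (A @ y)
        u = delta * soft_threshold(y, mu)
    return u
```

Every step costs O(m n) flops per inner or outer update, in line with the complexity discussion in Sect. 5.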
4 New Chaotic Iterative Algorithm

From Sect. 3, we know that the simplified strategy is not a highly precise algorithm. The reason is that in Iterative Scheme 1 the information in {f^k} is not adequately used when {y^{k,q}} is iterated: the inner loop applies the generalized-inverse iteration to f^{k+1} alone. Considering, at the same time, the advantage of the chaotic iterative method, we propose the following iteration.
Iterative Scheme 2:

    f^{k+1} = f^k + f − A u^k,
    y^{k+1} = y^k + V_0 f^{k+1} − V_0 A y^k,
    u^{k+1} = δ T_μ(y^{k+1}),  k = 0, 1, 2, . . . ,    (25)

where y^0 = V_0 f^0. In fact,

    y^{k+1} = y^k + V_0 f^{k+1} − V_0 A y^k
            = (I − V_0 A) y^k + V_0 f^{k+1}
            = (I − V_0 A)² y^{k−1} + (I − V_0 A) V_0 f^k + V_0 f^{k+1}
            ⋮
            = (I − V_0 A)^{k+1} V_0 f^0 + · · · + (I − V_0 A) V_0 f^k + V_0 f^{k+1},    (26)

i.e.,

    y^{k+1} = [ (I − V_0 A)^{k+1} V_0, (I − V_0 A)^k V_0, . . . , V_0 ] (f^0, f^1, . . . , f^{k+1})^T,    (27)

and from (6) we obtain

    V_{k+1} = (I − V_0 A)^{k+1} V_0 + (I − V_0 A)^k V_0 + · · · + V_0.    (28)

Comparing (27), (28) of Iterative Scheme 2 with (21), (22) of Iterative Scheme 1, we notice that only f^{k+1} is used in every inner iteration of Iterative Scheme 1, whereas when updating y^{k+1} in Iterative Scheme 2 we use f^0, . . . , f^k from the former iterations together with f^{k+1}, so as to keep more information about the original signal. At the same time, Iterative Scheme 2 needs only products of a matrix and a vector. Thus, we have reason to believe that Iterative Scheme 2 is a fast and more accurate method.
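Iterative Scheme 2 can be sketched as follows in Python/NumPy, again with the illustrative choice α = 1/‖A‖_2² and random test data; with f^0 = 0 (as in the numerical experiments of Sect. 5), the initial value is y^0 = V_0 f^0 = 0.

```python
import numpy as np

def soft_threshold(w, lam):
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def scheme2(A, f, mu, delta, num_iters=2000):
    """Iterative Scheme 2 (25): the chaotic iteration, matrix-vector products only."""
    m, n = A.shape
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2
    V0 = alpha * A.T
    fk = np.zeros(m)                     # f^0 = 0
    y = np.zeros(n)                      # y^0 = V_0 f^0 = 0
    u = np.zeros(n)
    for _ in range(num_iters):
        fk = fk + (f - A @ u)            # f^{k+1} = f^k + f - A u^k
        y = y + V0 @ fk - V0 @ (A @ y)   # y^{k+1} = y^k + V_0 f^{k+1} - V_0 A y^k
        u = delta * soft_threshold(y, mu)
    return u
```

Note that, unlike the inner loop of Scheme 1, the vector y is never restarted, which is how the residuals f^0, . . . , f^{k+1} all enter the update (27).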
5 Numerical Results

In this section, we test Iterative Scheme 1 with q = 1, 2 and Iterative Scheme 2 on the basis pursuit problem (1), in which the constraint Au = f is an underdetermined linear system and f is generated from a sparse signal. Here, the quality of restoration is measured by the signal-to-noise ratio (SNR), defined by

    SNR(u) := 20 log_10 ( ‖u‖ / ‖u − ũ‖ ),

where u is the restored signal and ũ is the clean signal. Our code is written in MATLAB (version 7.1) and run on a Windows PC with an Intel(R) Core(TM) 2 Duo CPU E7200 @ 2.53 GHz and 1 GB memory.
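The SNR above is a one-line computation; the following Python/NumPy sketch follows the formula as printed (with the restored signal u in the numerator).

```python
import numpy as np

def snr(u_restored, u_clean):
    """SNR(u) = 20 log10(||u|| / ||u - u_clean||), in dB."""
    return 20.0 * np.log10(np.linalg.norm(u_restored)
                           / np.linalg.norm(u_restored - u_clean))
```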
Fig. 1 (m, n) = (250, 500)
Fig. 2 Measurement signal m = 250
The stochastic matrix A ∈ R^{m×n}, m ≪ n, with rank(A) = r < m, is generated by the built-in function randn(·) in MATLAB. nNZ is defined as the number of nonzeros in u, and k_max is the maximum iteration number. 0 < α < 2/‖A‖² is chosen according to each stochastic matrix A, and the initial value is V_0 = αA^T. We adopt algorithms (17) and (25) to solve problem (1), and plot the figures corresponding to the signals generated by (17), (25) and the signal based on pinv(A). We set k_max = 200, μ = 5, δ = 0.9, u^0 = 0, f^0 = 0, α = 1/‖A‖²; the selection of the parameters μ, δ can also be referred to [10, 16]. For the clean stochastic signal with parameters n = 500, m = 250, nNZ = 30, r = 200, the signals reconstructed by Iterative Scheme 1 with q = 1, 2, by (13) based on pinv(A), and by Iterative Scheme 2 are shown in Fig. 1, respectively. Moreover, the measurement signal f is plotted in Fig. 2, where f = Au + ε, ε ∈ N(0, σ²), σ is the standard deviation, and σ = 0.5. Subsequently, we choose A of large size: n = 5000, m = 1000, nNZ = 50, r = 500. The corresponding stochastic signals and the signals reconstructed by Iterative Scheme 1 with q = 1, 2, by (13) based on pinv(A), and by Iterative Scheme 2 are displayed in Fig. 4. The measurement signal f is plotted in
Fig. 3 Logarithmic iterative error curve, (m, n) = (250, 500)
Fig. 4 (m, n) = (1000, 5000)
Fig. 5, where σ = 1. In addition, the iteration versus logarithmic iterative error curves are drawn in Figs. 3 and 6. Finally, for the two groups of parameters, we randomly run ten different numerical experiments each and list the average values in Tables 1 and 2, respectively.

From Tables 1 and 2, we can see that the restoration quality of the A† algorithm and of Iterative Scheme 2 are almost the same, but the computation time of the A† method is more than twice that of Iterative Scheme 2, because the former involves computing the SVD, while Iterative Scheme 2 uses only matrix-vector products. The computation times of Iterative Scheme 1 with q = 1 and of Iterative Scheme 2 are about the same, but the recovery by Iterative Scheme 2 is better than that by Iterative Scheme 1. Increasing the number of inner loops of Iterative Scheme 1 improves the recovery, but reduces the computational efficiency, so Iterative Scheme 1 needs a properly selected q. In summary, the relationship of the computation times of all methods is about: Scheme 2 < Scheme 1 (q = 1) < Scheme 1 (q = 2) ≪ A†; and the SNR of all
Fig. 5 Measurement signal, m = 1000
Fig. 6 Logarithmic iterative error curve (m, n) = (1000, 5000)
methods is about: Scheme 1 (q = 1) < Scheme 1 (q = 2) ≪ A† ≈ Scheme 2. These conclusions can also be seen from Figs. 1, 3 and Figs. 4, 6.

In fact, a complexity analysis also supports this comparison of the methods. Let the common loop number be K. The workload of the A† algorithm (13) then has two parts: computing A† and the loop of (13). The workload of computing A = USV^T and A† = V S† U^T is O(n³) when m < n, because the singular value decomposition involves matrix-matrix multiplications and eigenvalue computations. The workload of the loop of (13) is O(m × n × K), because the loop contains only matrix-vector multiplications. Therefore, the total workload of the A† algorithm (13) is O(n³) + O(m × n × K). The workload of Iterative Scheme 1 with q = 1, 2 and of Iterative Scheme 2 is O(m × n × K) each. Obviously, since K < m ≪ n, the workload of the A† algorithm (13) is the largest of the four algorithms, and its main part is the calculation of the generalized inverse matrix A†.
Table 1 Contrast data table, (m, n) = (250, 500); k < k_max is the stopping criterion.
m = 250, n = 500, rank(A) = 200, nNZ = 30, k_max = 200

                       pinv(A)       Scheme 1 (q=1)   Scheme 1 (q=2)   Scheme 2
σ = 0     SNR          38.25744135   16.33948184      19.55722011      32.75212411
          Time (s)      0.40744367    0.15282872       0.21326400       0.16826368
          Error         0.01555434    0.15181895       0.10651450       0.02557393
σ = 0.2   SNR          42.91406253   14.76382857      18.77283861      37.23439592
          Time (s)      0.37443902    0.15343161       0.21316273       0.16021463
          Error         0.00937770    0.17372357       0.11849175       0.01568552
σ = 0.5   SNR          36.57489762   15.91905881      20.07690254      35.34649862
          Time (s)      0.37403540    0.15281869       0.21585070       0.16053926
          Error         0.01540522    0.15833765       0.10490159       0.01789964
σ = 0.8   SNR          34.82568495   13.81410770      17.38265927      34.50630173
          Time (s)      0.37629337    0.15287011       0.21312779       0.16175015
          Error         0.01821035    0.19080459       0.13435456       0.01946274
σ = 1     SNR          32.33635697   16.86444278      19.21711867      32.16857153
          Time (s)      0.37296754    0.15103004       0.20646684       0.15379931
          Error         0.02495499    0.14250037       0.11022071       0.02585409
σ = 2     SNR          24.10365062   15.86156802      19.59914890      27.72764270
          Time (s)      0.37256139    0.15260778       0.20882352       0.15796686
          Error         0.06199674    0.15745837       0.10590167       0.04179949
Table 2 Contrast data table, (m, n) = (1000, 5000); k < k_max is the stopping criterion.
m = 1000, n = 5000, rank(A) = 500, nNZ = 50, k_max = 200

                       pinv(A)       Scheme 1 (q=1)   Scheme 1 (q=2)   Scheme 2
σ = 0     SNR          19.95205299    7.49180024      10.32376386      19.68191427
          Time (s)     32.94454167   11.77460269      17.50628027      11.76090688
          Error         0.10112995    0.36250925       0.27512737       0.10494649
σ = 0.2   SNR          19.23307275    8.64985813      11.25473578      18.88014658
          Time (s)     32.91023651   11.75860752      17.47397188      11.74956258
          Error         0.10830608    0.33138031       0.25505126       0.11290080
σ = 0.5   SNR          19.94372137    9.41809098      11.72294689      19.85957475
          Time (s)     32.87894266   11.74861859      17.46788607      11.75509118
          Error         0.09998619    0.30465365       0.24174302       0.10117051
σ = 0.8   SNR          19.53089990    8.91925339      11.13782962      19.21776060
          Time (s)     32.86569788   11.75496747      17.47046525      11.74725485
          Error         0.10402565    0.31830758       0.25652113       0.10794305
σ = 1     SNR          19.60208783    8.78093178      10.98373653      19.60548952
          Time (s)     32.94129273   11.75480988      17.50591555      11.75448500
          Error         0.10354793    0.32120149       0.25760105       0.10381043
σ = 2     SNR          19.84710613    8.40168602      11.47172557      19.69527417
          Time (s)     32.92242259   11.78188841      17.48418060      11.75908116
          Error         0.10063859    0.33249226       0.24557802       0.10253729
From the illustrations, Iterative Scheme 2 is the best method: it improves the calculation efficiency while keeping the quality of signal recovery. Iterative Scheme 1 is cheap compared to the A† algorithm.
6 Conclusion

We have proposed new algorithms based on the linearized Bregman iteration and the computation of the generalized inverse. The algorithms for solving the compressed sensing problem are competitive in computing time and workload. Our methods not only possess the merits of the linearized Bregman iterative method, but also accelerate the computation. Moreover, the convergence proofs provide theoretical support. Numerical experiments show that our procedures are remarkably effective in restoring sparse signals. In particular, our algorithms remain stable when A is of large size. Our next step is to research further properties of the algorithms.
References

1. J. Cai, S. Osher, Z. Shen, Linearized Bregman iterations for compressed sensing. Math. Comput. 78(267), 1515–1536 (2009)
2. J. Cai, S. Osher, Z. Shen, Convergence of the linearized Bregman iteration for l1-norm minimization. Math. Comput. 78(268), 2127–2136 (2009)
3. J. Cai, S. Osher, Z. Shen, Linearized Bregman iterations for frame-based image deblurring. SIAM J. Imaging Sci. 2(1), 226–252 (2009)
4. E. Candès, J. Romberg, Quantitative robust uncertainty principles and optimally sparse decompositions. Found. Comput. Math. 6(2), 227–254 (2006)
5. E. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)
6. S. Chen, D. Donoho, M. Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1998)
7. D. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)
8. H. Gao, Z. Zuo, An adaptive non-local total variation blind deconvolution employing split Bregman iteration. Circuits Syst. Signal Process. 32(5), 2407–2421 (2013)
9. Y. Ji, Z. Yang, W. Li, Bayesian sparse reconstruction method of compressed sensing in the presence of impulsive noise. Circuits Syst. Signal Process. 32(6), 2971–2998 (2013)
10. M. Lai, W. Yin, Augmented l1 and nuclear-norm models with a globally linearly convergent algorithm. SIAM J. Imaging Sci. 6(2), 1059–1091 (2013)
11. W. Li, Z. Li, A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix. Appl. Math. Comput. 215(9), 3433–3442 (2010)
12. S. Osher, Y. Mao, B. Dong, W. Yin, Fast linearized Bregman iteration for compressive sensing and sparse denoising. UCLA CAM Report 08-37 (2008)
13. Y. Tsaig, D. Donoho, Extensions of compressed sensing. Signal Process. 86(3), 549–571 (2006)
14. S. Wang, Z. Yang, Generalized Inverse Matrix and Its Applications (Beijing University of Technology Press, Beijing, 1996), pp. 195–209
15. G. Wang, Y. Wei, S. Qiao, Generalized Inverses: Theory and Computations (Science Press, Beijing, 2004), pp. 1–8, 129–165
16. W. Yin, Analysis and generalizations of the linearized Bregman method. SIAM J. Imaging Sci. 3(4), 856–877 (2010)
17. W. Yin, S. Osher, D. Goldfarb, J. Darbon, Bregman iterative algorithms for l1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1(1), 143–168 (2008)
18. H. Zhang, L. Cheng, A† linearized Bregman iteration algorithm. Math. Numer. Sin. 32(1), 97–104 (2010)