Circuits Syst Signal Process DOI 10.1007/s00034-016-0299-2
Compressed Sensing Based on Trust Region Method

Yang Wang¹ · Xiuqiao Xiang¹ · Shunping Zhou¹ · Zhongwen Luo¹ · Fang Fang¹
Received: 25 April 2015 / Revised: 6 March 2016 / Accepted: 8 March 2016 © Springer Science+Business Media New York 2016
Abstract Developed in recent years, compressed sensing (CS) saves considerable data storage and time in signal acquisition and processing and has drawn the attention of many scholars in various fields. At present, a key issue is the design of an effective CS reconstruction algorithm. Unlike the commonly used CS convex optimization approaches, which first determine a search direction and then use a line search to attain the optimal displacement, the trust region method introduced into the CS model in this paper solves a quadratic convex optimization problem over a varied circular region. The simulated reconstruction of two-dimensional images is performed via MATLAB. Meanwhile, a comparative analysis is carried out against different CS reconstruction methods, namely the greedy algorithms (orthogonal matching pursuit, sparse adaptive matching pursuit) and the convex optimization algorithms (basis pursuit, conjugate gradient). The experimental results demonstrate that the proposed CS trust region method may recover images more accurately than state-of-the-art schemes.
Xiuqiao Xiang (corresponding author)
[email protected]

Yang Wang
[email protected]

Shunping Zhou
[email protected]

Zhongwen Luo
[email protected]

Fang Fang
[email protected]

1 School of Information Engineering, China University of Geosciences, Wuhan 430074, People’s Republic of China
Keywords Compressed sensing · Signal reconstruction · Convex optimization · Quadratic programming · Trust region method
1 Introduction

With the rapid development of information technology, the need to manage massive amounts of data is becoming increasingly urgent. Due to the limitations of the Nyquist sampling theorem, conventional signal processing first samples signals at a high rate and then compresses most of the samples away [25], which wastes considerable data storage and time in signal acquisition and processing. Over the past several years, compressed sensing (CS), proposed by Donoho and Candès, has provided a new data acquisition technique that integrates sampling at sub-Nyquist rates and compression into a single stage [3,8]. CS theory states that if signals are sparse on some basis, then they can be recovered from a small number of random linear measurements via a tractable reconstruction algorithm [4]. This property implies that fewer measurements are needed, allowing for such advantages as simpler hardware and reduced acquisition time and memory. These advantages make CS a tremendously impactful technique for an ever wider range of topics, including remote sensing image analysis, medical imaging, radar detection, and wireless networks [9,18,27].

CS theory primarily consists of three key components [17,21,24], namely, (1) sparse signal representation, (2) measurement matrix design, and (3) signal reconstruction algorithms. CS signal reconstruction is the process of recovering a high-dimensional vector from a low-dimensional one [22]. Currently, CS reconstruction algorithms are classified into three categories [23]. The first category is the greedy matching pursuit algorithms, which solve the ℓ0-norm minimization problem. Examples include matching pursuit (MP), orthogonal matching pursuit (OMP), stagewise orthogonal matching pursuit (StOMP), compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), regularized orthogonal matching pursuit (ROMP), and sparse adaptive matching pursuit (SAMP) [2,6,7,12,26].
The second category is the convex optimization algorithms, which approach the target signal by solving the ℓ1-norm minimization problem [13]. This category mainly consists of basis pursuit (BP), conjugate gradient (CG), gradient projection for sparse reconstruction (GPSR), and the Newton method [1,15,16,19,20]. The third category is combinatorial algorithms, such as Fourier sampling, simulated annealing heuristic search, and Bayesian learning [10]. All of the reconstruction algorithms mentioned above can recover the original signal reliably and quickly. Nevertheless, designing an effective reconstruction algorithm that satisfies the increasing requirements of CS applications remains a research hotspot. In addition to the optimization methods commonly used in CS reconstruction, other convex optimization methods exist that have received little attention from scholars. For example, the trust region method (TRM), a nonlinear optimization approach [5,14], has rarely been applied to CS signal reconstruction, to the best of our knowledge. In this paper, we first describe the existing CS theory, algorithms, and the trust region method (TRM). Next, we introduce TRM into the CS convex programming model via
MATLAB, and then we perform the simulated reconstruction of two-dimensional images and a comparative analysis with traditional reconstruction methods, such as the greedy algorithms (OMP, SAMP) and convex optimization algorithms (BP, CG). Finally, the experimental results validate that the proposed TRM outperforms the other methods in CS reconstruction accuracy. The rest of this paper is organized as follows: In Sect. 2, we briefly review the existing CS mathematical model and reconstruction algorithms. In Sect. 3, we present the principles and computational process of TRM. In Sect. 4, we carry out the experimental analysis and comparison of CS reconstruction algorithms. In Sect. 5, we draw our conclusions.
2 Existing CS Model and Algorithm

Proposed by Donoho and Candès, compressed sensing (CS) integrates sampling and compression into a single stage to overcome the limitations of the Nyquist sampling theorem [3,8]. CS theory contains a compression process and a recovery process. The compression process maps a high-dimensional signal x of length N to a low-dimensional vector via an M × N measurement matrix Φ:

    y = Φx    (1)

where y denotes the low-dimensional measurement vector of length M, with M ≪ N. Once the small set of linear measurements y is observed, the reconstruction process in CS is to estimate the unknown sparse vector x from Eq. (1); since M < N, this is an NP-hard problem that in principle requires searching over all possible supports of the signal. CS theory exploits the sparsity of x to reformulate the original problem (1) as the following ℓ0-norm optimization problem:

    min ‖x‖₀   s.t.  y = Φx    (2)
where the ℓ0-norm ‖x‖₀ denotes the number of nonzero elements in x. If the original signal x of length N is not sparse but is compressible, then x has a sparse representation C on some orthogonal transform basis Ψ, i.e.,

    x = ΨC    (3)
where the orthogonal basis Ψ is independent of the measurement matrix Φ, and the transformed coefficient vector C contains k nonzero elements. Next, letting Θ = ΦΨ, Eq. (2) may be rewritten in the following form:

    min ‖C‖₀ = ‖Ψ⁻¹x‖₀   s.t.  y = Φx = ΦΨC = ΘC    (4)
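As a small numerical illustration of the compression step in Eq. (1), the sketch below generates a k-sparse signal and measures it with a Gaussian random matrix. The sizes N = 64, M = 32, k = 4 are hypothetical choices for this sketch, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, k = 64, 32, 4          # signal length, number of measurements, sparsity (illustrative)
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # M x N Gaussian measurement matrix
y = Phi @ x                                      # Eq. (1): M measurements replace N samples

print(y.shape)   # (32,)
```

Only the M-dimensional vector y (and the seed of Φ) needs to be stored or transmitted; the reconstruction algorithms discussed below recover x from it.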
According to Eqs. (2) and (4), the recovery of a sparse or compressible signal from the compressed measurement y constitutes an ℓ0-norm minimization problem, which can
be solved by a number of alternative and approximate approaches. The most typical representatives are the greedy matching pursuit algorithms, including MP, OMP, StOMP, CoSaMP, SP, ROMP, and SAMP [2,6,7,12,26]. These algorithms build an atom support set to reconstruct the original signal by evaluating the inner product (similarity) between the measurement vector y and the columns of the observation matrix. Additionally, a common practice in CS is to replace the non-convex ℓ0-norm ‖·‖₀ by the convex ℓ1-norm ‖·‖₁, because ℓ1-norm minimization gives the same result as ℓ0-norm minimization with overwhelming probability [16]. That is, Eq. (2) can be transformed into

    min ‖x‖₁   s.t.  y = Φx    (5)
where the ℓ1-norm is defined by

    ‖x‖₁ = Σ_{i=1}^{N} |x_i|    (6)
Note that the constraint in Eq. (5) is equivalent to making the following quadratic function as small as possible:

    (1/2)‖y − Φx‖₂²    (7)

Because the essence of the optimization problem (5) is to minimize the objective function ‖x‖₁ and the quadratic function (7) simultaneously, Eq. (5) may be converted into an unconstrained problem by the Lagrange multiplier method as follows [11]:

    min F(x) = (1/2)‖y − Φx‖₂² + τ‖x‖₁    (8)
In order to address Eq. (8), vectors u and v are introduced to rewrite the signal x as

    x = u − v,   u ≥ 0,   v ≥ 0    (9)

where u_i = (x_i)₊ and v_i = (−x_i)₊ (i = 1, ..., N), with the operator (·)₊ defined as (x_i)₊ = max{0, x_i}; i.e., u denotes the positive part of the signal x, while −v represents its negative part. Let 1_N = [1, 1, ..., 1]ᵀ denote the N × 1 vector whose components are all 1; then the ℓ1-norm may be expressed as

    ‖x‖₁ = 1_Nᵀ u + 1_Nᵀ v    (10)

Substituting Eq. (9) into the objective function, Eq. (8) may be written as follows:
    min (1/2)‖y − Φ(u − v)‖₂² + τ 1_Nᵀ u + τ 1_Nᵀ v
    s.t.  u ≥ 0,  v ≥ 0    (11)
Letting z = [u, v]ᵀ, Eq. (11) can be written as the following constrained quadratic programming problem:

    min F(z) = cᵀz + (1/2) zᵀBz   s.t.  z ≥ 0    (12)
where

    c = τ 1_{2N} + [−b; b],   b = Φᵀy,

    B = [  ΦᵀΦ   −ΦᵀΦ
          −ΦᵀΦ    ΦᵀΦ  ]    (13)
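The change of variables in Eqs. (9)–(13) can be checked numerically. The sketch below (with small, hypothetical dimensions) splits x into positive and negative parts, builds c and B as in Eq. (13), and verifies that F(z) = cᵀz + ½zᵀBz matches ½‖y − Φx‖₂² + τ‖x‖₁ up to the constant ½‖y‖₂², which the substitution drops.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, tau = 8, 16, 0.1                       # illustrative sizes and regularization weight

x = rng.standard_normal(N)
Phi = rng.standard_normal((M, N))
y = Phi @ x + 0.01 * rng.standard_normal(M)

# Eq. (9): positive/negative split, x = u - v with u, v >= 0
u = np.maximum(x, 0.0)
v = np.maximum(-x, 0.0)
z = np.concatenate([u, v])

# Eq. (13): c = tau*1_{2N} + [-b; b] with b = Phi^T y, and the block matrix B
b = Phi.T @ y
c = tau * np.ones(2 * N) + np.concatenate([-b, b])
G = Phi.T @ Phi
B = np.block([[G, -G], [-G, G]])

F_z = c @ z + 0.5 * z @ B @ z                                      # Eq. (12)
F_x = 0.5 * np.sum((y - Phi @ x) ** 2) + tau * np.sum(np.abs(x))   # Eq. (8)

# The two objectives differ only by the constant 0.5*||y||^2
assert np.isclose(F_z + 0.5 * np.sum(y ** 2), F_x)
```

Because the constant does not affect the minimizer, minimizing F(z) over z ≥ 0 is equivalent to minimizing the original penalized objective in Eq. (8).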
Because a local minimum of a convex optimization problem is also the global minimum and convex optimization theory is already relatively mature, most convex optimization algorithms can be directly applied to the CS problem described in Eqs. (12) and (13). The most classic representatives include basis pursuit (BP), conjugate gradient (CG), greedy basis pursuit (GBP), and gradient projection for sparse reconstruction (GPSR) [1,15,16,19,20].
3 Trust Region Method

As shown in Sect. 2, the existing convex optimization algorithms (such as BP, CG, and GPSR) first determine a search direction and then use a line search to obtain the optimal step size. In contrast to these commonly used convex optimization approaches, the trust region method (TRM), which was proposed by Powell in the 1970s and has received considerable attention in nonlinear numerical optimization, first reduces, maintains, or expands the trust region radius by measuring how closely a quadratic model approximates the objective function, and then calculates the optimal displacement and generates a new iterate within the circular trust region [5,14].

3.1 Principle and Steps

Suppose the trust region problem corresponds to an unconstrained optimization problem of the following form:

    min_{z∈Rⁿ} F(z)    (14)
Let z_k denote the k-th iterate, and set F_k = F(z_k), g_k = ∇F(z_k), the Hessian matrix B_k = ∇²F(z_k), and the k-th optimal displacement d_k. Then the actual decrease of the function F(z) at the k-th step can be denoted as

    ΔF_k = F(z_k) − F(z_k + d_k)    (15)

The k-th sub-problem in TRM can be expressed as
    min q_k(d) = c_kᵀd + (1/2) dᵀB_k d
    s.t.  ‖d‖₂ ≤ R_k    (16)
where the solution d represents the optimal displacement from the iterate z_k, R_k is the trust region radius, ‖·‖ denotes the ℓ2-norm, and the parameter c_k and the k-th Hessian matrix B_k are the same as in Eq. (13). The decrease of the quadratic function in Eq. (16) can be written as

    Δq_k = q_k(0) − q_k(d_k)    (17)

From Eqs. (15) and (17), the ratio r_k, which compares the decrease of the objective function with that of the corresponding quadratic function, is defined as

    r_k = ΔF_k / Δq_k    (18)
By analyzing the value of r_k, the trust region radius R_k is adjusted according to the algorithm flow shown in Fig. 1. Corresponding to Fig. 1, the specific steps of TRM are listed as follows:
1. Initialize the parameters: η₁, η₂ must satisfy 0 ≤ η₁ ≤ η₂ < 1 and are used to evaluate the degree of approximation between the objective function and the quadratic function; τ₁, τ₂ must satisfy 0 < τ₁ < 1 < τ₂ and are applied to renew the trust region radius; set the termination threshold 0 ≤ ε ≪ 1 and the iteration number k = 0.
2. Calculate the gradient of the objective function, g_k = ∇F(z_k). If ‖g_k‖ ≤ ε, the algorithm terminates.
3. Solve the sub-problem in Eq. (16) and obtain the iteration displacement d_k using the scheme described in Sect. 3.2.
4. Calculate r_k by Eq. (18); then reduce, maintain, or expand the trust region radius R_k to R_{k+1} based on the following strategy:

    R_{k+1} = { τ₁ R_k,               if r_k ≤ η₁
                R_k,                  if η₁ < r_k < η₂
                min{τ₂ R_k, R_max},   if r_k ≥ η₂ and ‖d_k‖ = R_k }    (19)
r_k ≤ η₁ means that there is no obvious decrease in the objective function, and the trust region radius should be reduced to R_{k+1} = τ₁ R_k; the algorithm then returns to step 3. η₁ < r_k < η₂ denotes that the objective function has declined by a certain amount relative to the corresponding quadratic function, and the trust region radius is kept unchanged, i.e., R_{k+1} = R_k. r_k ≥ η₂ with ‖d_k‖ = R_k means that an obvious amount of descent exists for the objective function, and the trust region radius should be expanded, i.e., R_{k+1} = min{τ₂ R_k, R_max}. Note that in the cases where r_k > η₁, the iterate z_k is correspondingly updated to z_{k+1} = z_k + d_k. After that, the algorithm returns to step 2.
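The radius update rule of Eq. (19) can be sketched as a small helper. The default parameter values are the ones used later in Sect. 4, and gating the expansion on the step reaching the boundary (‖d_k‖ = R_k) is a standard trust-region convention assumed here.

```python
def update_radius(r_k, R_k, d_norm, eta1=0.05, eta2=0.75, tau1=0.5, tau2=2.0, R_max=2.0):
    """Trust region radius update of Eq. (19).

    r_k    : ratio of actual to predicted decrease, Eq. (18)
    R_k    : current trust region radius
    d_norm : norm of the step d_k
    """
    if r_k <= eta1:                        # poor model agreement: shrink the region
        return tau1 * R_k
    if r_k >= eta2 and d_norm == R_k:      # good agreement and step hit the boundary: expand
        return min(tau2 * R_k, R_max)
    return R_k                             # otherwise keep the radius unchanged

print(update_radius(0.01, 1.0, 0.5))  # 0.5  (shrink)
print(update_radius(0.5, 1.0, 0.5))   # 1.0  (keep)
print(update_radius(0.9, 1.0, 1.0))   # 2.0  (expand, capped by R_max)
```

The cap R_max prevents the region from growing without bound when the quadratic model happens to fit well for several consecutive iterations.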
Fig. 1 Algorithm flow of trust region method
3.2 Optimal Solving of the Sub-problem

Obviously, a solution d of the sub-problem described in Eq. (16) exists only when the following conditions are satisfied:

    λ ≥ 0,   R_k² − ‖d‖₂² ≥ 0,   λ(R_k² − ‖d‖₂²) = 0,
    (B_k + λI)d − g_k = 0    (20)
In order to solve Eq. (20), a smooth function φ(μ, a, b) is proposed as follows:

    φ(μ, a, b) = a + b − √((a − b)² + 4μ²)    (21)

Letting μ = 0, the following property may be drawn from Eq. (21):

    φ(0, a, b) = 0  ⇔  a ≥ 0,  b ≥ 0,  ab = 0    (22)
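The smoothing function of Eq. (21) and the complementarity property of Eq. (22) are easy to verify directly: for μ = 0 the function reduces to 2 min{a, b}, which vanishes exactly on complementary nonnegative pairs. A minimal sketch:

```python
import math

def phi(mu, a, b):
    # Eq. (21): a + b - sqrt((a - b)^2 + 4*mu^2); for mu = 0 this equals 2*min(a, b)
    return a + b - math.sqrt((a - b) ** 2 + 4.0 * mu ** 2)

# Eq. (22): phi(0, a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0
print(phi(0.0, 3.0, 0.0))   # 0.0   (complementary pair)
print(phi(0.0, 1.0, 1.0))   # 2.0   (nonzero, since a*b != 0)
print(phi(0.0, -1.0, 0.0))  # -2.0  (nonzero, since a < 0)
```

For μ > 0 the square root is strictly positive everywhere, so φ is differentiable even where min{a, b} has a kink, which is what makes the Newton-type iteration below applicable.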
By comparing Eqs. (20) and (22), it is easy to obtain φ(μ, λ, R_k² − ‖d‖₂²) = 0. Thus, Eq. (20) is equivalent to

    H(z) = H(μ, λ, d) = [ μ
                          φ(μ, λ, R_k² − ‖d‖₂²)
                          (B_k + λI)d − g_k ]  =  0    (23)

where

    φ(μ, λ, R_k² − ‖d‖₂²) = λ + R_k² − ‖d‖₂² − √((λ + ‖d‖₂² − R_k²)² + 4μ²)    (24)
Because H(z) is continuously differentiable, its derivatives with respect to μ, λ, and d can be computed as follows:

    φ_μ(μ, λ, R_k² − ‖d‖₂²) = −4μ / √((λ + ‖d‖₂² − R_k²)² + 4μ²)

    φ_λ(μ, λ, R_k² − ‖d‖₂²) = 1 − (λ + ‖d‖₂² − R_k²) / √((λ + ‖d‖₂² − R_k²)² + 4μ²) := 1 − θ_k

    φ_d(μ, λ, R_k² − ‖d‖₂²) = −2dᵀ (1 + (λ + ‖d‖₂² − R_k²) / √((λ + ‖d‖₂² − R_k²)² + 4μ²)) := −2(1 + θ_k) dᵀ    (25)

Meanwhile, the Jacobian matrix H′(z) (3 × 3 in block form) is obtained and expressed as

    H′(z) = [ 1     0     0
              φ_μ   φ_λ   −2(1 + θ_k) dᵀ
              0     d     B_k + λI ]    (26)
To address Eq. (23), the nonnegative function β(z) is defined as follows:

    β(z) = γ ‖H(z)‖ min{1, ‖H(z)‖}    (27)
Then, the displacement d_j in step 3 of Sect. 3.1 can be obtained by solving the following equation:
    H(z_j) + H′(z_j) Δz_j = β_j z̄    (28)

where z_j = (μ_j, λ_j, d_j), z̄ = (μ₀, 0, 0), H′(z_j) is the Jacobian matrix of H at z_j, and β_j denotes β(z_j). The specific steps to attain the displacement d_j are listed as follows:
I. Initialization: set the step size factor δ ∈ (0, 1) and the termination judgment factor σ ∈ (0, 1); choose the auxiliary parameters μ₀ > 0, λ₀ > 0, the displacement d₀ ∈ Rⁿ, the initial point z₀ = (μ₀, λ₀, d₀), and γ ∈ (0, 1) with γμ₀ < 1 and γ‖H(z₀)‖ < 1; and set the iteration number j = 0.
II. If ‖H(z_j)‖ = 0, the corresponding iterate z_j is the optimal solution of the sub-problem, and the whole algorithm terminates. Otherwise, Eq. (27) is evaluated and the result β_j is used in the next step.
III. By solving Eq. (29), the displacement Δz_j = (Δμ_j, Δλ_j, Δd_j) is obtained:

    H(z_j) + H′(z_j) Δz_j = β_j z̄    (29)

IV. If the smallest nonnegative integer m_j satisfies the following inequality,

    ‖H(z_j + δ^{m_j} Δz_j)‖ ≤ [1 − σ(1 − γμ₀) δ^{m_j}] ‖H(z_j)‖    (30)

then H(z_j) gradually approaches the optimal solution with the displacement δ^{m_j} Δz_j; update the iterate z_{j+1} = z_j + δ^{m_j} Δz_j and return to step II.
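The paper solves the sub-problem through the smoothing Newton iteration above. As a simpler cross-check of the optimality conditions in Eq. (20), the sketch below finds λ ≥ 0 by bisection on ‖(B_k + λI)⁻¹ g_k‖₂ = R_k, following the sign convention of Eq. (20) (d = (B_k + λI)⁻¹ g_k). It assumes B_k is positive definite, which holds for a strictly convex F; this is an illustrative alternative, not the authors' method.

```python
import numpy as np

def tr_subproblem(B, g, R, tol=1e-10):
    """Solve (B + lam*I) d = g subject to ||d||_2 <= R, per the conditions in Eq. (20).

    Assumes B is symmetric positive definite. Illustrative sketch only.
    """
    n = B.shape[0]
    d = np.linalg.solve(B, g)               # try lam = 0 (interior case)
    if np.linalg.norm(d) <= R:
        return d, 0.0
    # Boundary case: grow an upper bracket for lam, then bisect on ||d(lam)|| - R = 0,
    # since ||d(lam)|| decreases monotonically in lam for positive definite B.
    lo, hi = 0.0, 1.0
    while np.linalg.norm(np.linalg.solve(B + hi * np.eye(n), g)) > R:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(np.linalg.solve(B + mid * np.eye(n), g)) > R:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return np.linalg.solve(B + lam * np.eye(n), g), lam

# Tiny check: B = I, g = (3, 4), R = 1  =>  ||g/(1+lam)|| = 1 gives lam = 4, d = g/5
B = np.eye(2)
g = np.array([3.0, 4.0])
d, lam = tr_subproblem(B, g, 1.0)
print(np.round(d, 6), round(lam, 6))   # [0.6 0.8] 4.0
```

The returned pair (d, λ) satisfies all four conditions of Eq. (20): either λ = 0 with ‖d‖ < R, or λ > 0 with ‖d‖ = R.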
4 CS Experiment Based on Trust Region Method

In this section, we introduce the trust region method (TRM) described in Sect. 3 into the CS reconstruction of the 256 × 256 ‘Lena’ image, ‘Pepper’ image, ‘Brain’ MRI medical image, and ‘Beihai’ remote sensing image, using an Intel(R) Core(TM) i3 CPU, 4.00 GB RAM, a 64-bit operating system, and the MATLAB 2011a environment. We perform a comparative analysis with the different reconstruction algorithms (OMP, SAMP, BP, CG) mentioned in Sect. 2. For the TRM described in Sect. 3, the variable τ₁, used to reduce the trust region radius, is set to τ₁ = 0.5; τ₂, used to expand the trust region radius, is set to τ₂ = 2.0; the approximation parameters η₁, η₂ are set to η₁ = 0.05, η₂ = 0.75; the termination threshold ε in step 2 of Sect. 3.1 is set to ε = 1.0e−6; and the maximum radius R_max in Eq. (19) is set to R_max = 2.0. In addition, the step size factor is set to δ = 0.2, which is applied to make the objective function decrease by a certain amount [see Eq. (30)]. The auxiliary parameters in step I of Sect. 3.2 are set to μ₀ = 0.05, λ₀ = 0.05, and γ = 0.05. Using the same Gaussian random measurement matrix and wavelet sparse transform, the experimental images for varied compression ratios and different CS reconstruction methods are presented in Figs. 2, 3, 4, 5 and 6, and the numerical results versus the compression ratios (M/N), images, and reconstruction methods are
Fig. 2 ‘Lena’ image recovered by different reconstruction methods under Gaussian random measurement matrix M = 3N/10, N = 256. a Original image, b TRM, c OMP, d SAMP, e BP, f CG
Fig. 3 ‘Pepper’ image recovered by different reconstruction methods under Gaussian random measurement matrix M = 7N/10, N = 256. a Original image, b TRM, c OMP, d SAMP, e BP, f CG
Fig. 4 ‘Pepper’ image quality PSNR versus the reconstruction method and compression ratio under Gaussian random measurement matrix P = M/N, N = 256
summarized in Tables 1 and 2. The peak signal-to-noise ratio (PSNR) is defined as

    PSNR = 10 log₁₀ ((255 × 255) / D²)    (31)

where D² = (1/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} (H(i, j) − Ĥ(i, j))², and H and Ĥ denote the pixel matrices of the m × n original image and the recovered image, respectively. As illustrated in Figs. 2, 3, 4, 5 and 6 and Table 1, the PSNR of the ‘Lena,’ ‘Pepper,’ ‘Brain,’ and ‘Beihai’ images recovered by the proposed TRM stays near 48 dB, which is much higher than that of the other reconstruction algorithms (OMP, SAMP, BP, CG). In particular, the performance disparity of TRM across different images is smaller than that of the other algorithms, which indicates that TRM is a stable, universal CS reconstruction method that may better meet the special needs of other areas. Additionally, the compression ratio p = M/N has an important effect on the greedy algorithms (OMP, SAMP), because the image PSNR varies greatly with the compression ratio (see p = 0.3, 0.7). The performance of the convex optimization algorithms (BP, CG) is stable and less affected by the compression ratio, as shown in Fig. 4. Finally, the proposed TRM has no advantage over the other methods in reconstruction speed, as verified in Table 2. However, from Tables 1 and 2, we conclude that TRM is the best choice for signal reconstruction with high precision in cases where the time requirement is not strict.
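The PSNR of Eq. (31) can be computed directly; the arrays below are synthetic stand-ins for the test images, chosen so the expected values are exact.

```python
import numpy as np

def psnr(H, H_hat):
    """Eq. (31): PSNR = 10 log10(255^2 / D^2), with D^2 the mean squared pixel error."""
    D2 = np.mean((H.astype(float) - H_hat.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / D2)

H = np.zeros((256, 256))
print(psnr(H, H + 255.0))  # 0.0  : worst case, every pixel off by the full 8-bit range
print(psnr(H, H + 25.5))   # 20.0 : uniform error of 25.5 gray levels
```

On this scale, the roughly 48 dB achieved by TRM in Table 1 corresponds to a root-mean-square pixel error of about 1 gray level.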
Fig. 5 ‘Brain’ image recovered by different reconstruction methods under Gaussian random measurement matrix M = 3N/10, N = 256. a Original image, b TRM, c OMP, d SAMP, e BP, f CG
Fig. 6 ‘Beihai’ image recovered by different reconstruction methods under Gaussian random measurement matrix M = 7N/10, N = 256. a Original image, b TRM, c OMP, d SAMP, e BP, f CG
Table 1 Image quality PSNR versus the reconstruction method and compression ratio (M/N)

Image    M/N    OMP        SAMP       BP         CG         TRM
Pepper   0.3    18.3697    19.4162    16.0820    34.1449    48.6822
Pepper   0.7    30.9663    30.0293    29.5743    34.0846    48.0074
Lena     0.3    18.4075    21.0535    17.3629    34.1466    48.7311
Lena     0.7    31.2072    31.2215    31.3919    34.0095    48.0119
Brain    0.3    16.8305    21.0352    21.6426    34.1214    49.3705
Brain    0.7    33.1401    34.2434    34.8236    33.3115    48.7116
Beihai   0.3    18.0038    19.4339    19.6851    34.1190    48.6695
Beihai   0.7    26.0539    26.6252    28.9588    33.2966    48.0103
Table 2 Reconstruction time versus the reconstruction method and compression ratio (M/N)

Image    M/N    OMP        SAMP       BP        CG        TRM
Pepper   0.3    12.3627     3.7204    2.9826    4.1829    19.6179
Pepper   0.7    33.1352    44.4695    7.6804    4.2962    19.7525
Lena     0.3    12.8819     4.1036    3.1049    4.1425    18.0964
Lena     0.7    33.7350    44.7422    7.7515    4.3040    17.9576
A reason for the above results may lie in the differences among the algorithm principles. As mentioned in Sects. 2 and 3, the greedy matching pursuit algorithms (such as OMP and SAMP) look for similarity between the measurement vector y and the observation matrix and then extend the atomic support set continuously using the least squares method; the convex optimization algorithms (such as BP and CG) first ascertain a search direction and then use a line search to obtain the optimal displacement at each step; whereas the TRM proposed in this paper first determines the circular trust region and then calculates the optimal displacement and generates new iterates in succession. TRM must adjust the trust region radius and repeat the above procedure. Comparatively speaking, TRM is a nonlinear numerical optimization method: by measuring how closely the quadratic function approximates the objective function, it achieves or approaches a globally optimal solution, although it requires additional time to adjust the trust region radius. The traditional methods, including OMP, SAMP, BP, and CG, obtain only locally optimal solutions but bear lighter computational loads over the entire iterative process.
5 Conclusions In the age of big data, it is challenging to manage the acquisition, storage and transmission of massive signals using the traditional Shannon (Nyquist) sampling theorem. To
address this issue, compressed sensing (CS), proposed in the last several years, integrates signal sampling at sub-Nyquist rates and compression into a single stage, and it has attracted the attention of many well-known scholars. Presently, numerous works have sought effective CS reconstruction algorithms, such as the greedy matching pursuit algorithms (OMP, SAMP) and convex optimization algorithms (BP, CG). Nevertheless, other convex optimization methods exist that have rarely been used in the CS field. This paper introduces the trust region method, a nonlinear optimization approach that iteratively updates the trust region radius to determine the optimal displacement, into the CS quadratic programming problem. The simulated reconstruction and comparisons are performed on different two-dimensional images in MATLAB, and the experimental results confirm that the proposed CS trust region method recovers the original images accurately within an acceptable time, even at low compression ratios, and has an obvious advantage in reconstruction accuracy over frequently used schemes.

Acknowledgments This investigation is supported by the National Natural Science Foundation of China under Grant Nos. 61202051 and 41371422 and by the Special Fund for Basic Scientific Research of Central Colleges, China University of Geosciences (Wuhan) under Grant No. CUG130416.
References
1. C. Bilen, G. Puy, R. Gribonval, L. Daudet, Convex optimization approaches for blind sensor calibration using sparsity. IEEE Trans. Inform. Theory 62(18), 4847–4856 (2014)
2. S.S. Cai, J.W. Zhang, D.H. Feng, M. Bao, Improved regularized orthogonal matching pursuit method used in estimation of directions of arrival. Acta Acust. United Ac. 39(1), 35–41 (2014)
3. E.J. Candès, The restricted isometry property and its implications for compressed sensing. C. R. Math. 346(9–10), 589–592 (2008)
4. E.J. Candès, T. Tao, Near optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inform. Theory 52(12), 5406–5425 (2006)
5. J.S. Chen, The semismooth-related properties of a merit function and a descent method for the nonlinear complementarity problem. J. Glob. Optim. 36(4), 565–580 (2006)
6. W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inform. Theory 55(5), 2230–2249 (2009)
7. W. Dan, R.H. Wang, Robustness of orthogonal matching pursuit under restricted isometry property. Sci. China Math. 57(3), 627–634 (2014)
8. D.L. Donoho, Compressed sensing. IEEE Trans. Inform. Theory 52(4), 1289–1306 (2006)
9. H.T. Fabich, M. Benning, A.J. Sederman, D.J. Holland, Ultrashort echo time (UTE) imaging using gradient pre-equalization and compressed sensing. J. Magn. Reson. 245, 116–124 (2014)
10. A.C. Gilbert, S. Muthukrishnan, M.J. Strauss, Improved time bounds for near optimal sparse Fourier representations, in Proceedings of SPIE, Wavelets XI, vol. 5914 (International Society for Optical Engineering, Bellingham, 2005), pp. 1–15
11. M.R. Hestenes, Multiplier and gradient methods. J. Optim. Theory Appl. 4, 303–320 (1969)
12. B.X. Huang, T. Zhou, Recovery of block sparse signals by a block version of StOMP. Signal Process. 106, 231–244 (2015)
13. X.L. Huang, Y.P. Liu, L. Shi, V.H. Sabine, J.A.K. Suykens, Two-level l1 minimization for compressed sensing. Signal Process. 108, 459–475 (2015)
14. C. Kanzow, Some noninterior continuation methods for linear complementarity problems. SIAM J. Matrix Anal. Appl. 17, 851–868 (1996)
15. S.J. Kim, K. Koh, M. Lustig, D. Gorinevsky, A method for large scale regularized least squares. IEEE J. STSP 4(1), 606–617 (2007)
16. P.T. Lauzier, J. Tang, G.H. Chen, Prior image constrained compressed sensing: implementation and performance evaluation. Med. Phys. 39(1), 66–80 (2012)
17. H.F. Li, Y.L. Fu, Q.H. Zhang, R. Rong, A generalized hard thresholding pursuit algorithm. Circuits Syst. Signal Process. 33(4), 1313–1323 (2014)
18. S.L. Li, K. Liu, F. Zhang, L. Zhang, L.L. Xiao, D.P. Han, Innovative remote sensing imaging method based on compressed sensing. Opt. Laser Technol. 63, 83–89 (2014)
19. Y.B. Liu, N. Li, R. Wang, Y.K. Deng, Achieving high quality three dimensional InISAR imageries of maneuvering target via super-resolution ISAR imaging by exploiting sparseness. IEEE Geosci. Remote Sens. 11(4), 828–832 (2014)
20. Y.W. Liu, J.F. Hu, A neural network for l1-l2 minimization based on scaled gradient projection: application to compressed sensing. Neurocomputing 173, 988–993 (2015)
21. W.T. Lv, J.F. Wang, W.X. Yu, Z. Tan, Improvement of semi-random measurement matrix for compressed sensing. IEICE Trans. Fundam. Electron. E97A(6), 1426–1429 (2014)
22. A. Majumdar, R.K. Ward, T. Aboulnasr, Algorithms to approximately solve NP hard row-sparse MMV recovery problem: application to compressive color imaging. JETCAS 2(3), 362–369 (2014)
23. D. Needell, J.A. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Commun. ACM 26(3), 301–321 (2009)
24. T.T. Qiao, W.G. Li, B.Y. Wu, A new algorithm based on linearized Bregman iteration with generalized inverse for compressed sensing. Circuits Syst. Signal Process. 33(5), 1527–1539 (2014)
25. S.K. Mitra, Digital Signal Processing (McGraw-Hill, New York, 2005)
26. D. Sundman, S. Chatterjee, M. Skoglund, Distributed greedy pursuit algorithms. Signal Process. 105, 298–315 (2014)
27. J.G. Yang, T. Jin, X.T. Huang, J. Thompson, Z.M. Zhou, Sparse MIMO array forward-looking GPR imaging based on compressed sensing in clutter environment. IEEE Trans. Geosci. Remote 52(7), 4480–4494 (2014)