
## Sparse recovery with coherent tight frame via analysis Dantzig selector and analysis LASSO (2013)

Citations: 1 (0 self)

### Citations

4200 | Regression shrinkage and selection via the lasso
- Tibshirani
- 1997
Citation Context ...e l0-minimization. Three most renowned recovery algorithms based on convex relaxation proposed in the literature are: the Basis Pursuit (BP) [7], the Dantzig selector (DS) [15], and the LASSO estimator [53] (or Basis Pursuit Denoising [7]):

(BP): min_{f̃ ∈ Rn} ‖f̃‖1 subject to ‖Af̃ − y‖2 ≤ ε,
(DS): min_{f̃ ∈ Rn} ‖f̃‖1 subject to ‖A*(Af̃ − y)‖∞ ≤ λn σ,
(LASSO): min_{f̃ ∈ Rn} (1/2)‖Af̃ − y‖2^2 + µn σ‖f̃‖1,

here ‖·‖...
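All three programs above are convex and easy to prototype. As an illustrative sketch (not from the cited papers), the LASSO form can be minimized by iterative soft-thresholding (ISTA); the tiny matrix `A`, the step size, and all helper names below are invented for the example:

```python
# Minimal ISTA sketch for the LASSO:  min_x (1/2)||A x - y||_2^2 + mu ||x||_1.
# Pure Python on a toy 2x3 problem; purely illustrative.

def matvec(A, x):
    """Compute A x for a dense row-major matrix A."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def matTvec(A, r):
    """Compute A^T r."""
    n = len(A[0])
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return [max(abs(u) - t, 0.0) * (1 if u > 0 else -1) for u in v]

def ista(A, y, mu, step, iters=2000):
    """Gradient step on the quadratic term, then soft-threshold."""
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [ai - yi for ai, yi in zip(matvec(A, x), y)]
        g = matTvec(A, r)  # gradient of (1/2)||A x - y||_2^2
        x = soft([xi - step * gi for xi, gi in zip(x, g)], step * mu)
    return x

# y = A x0 with the 1-sparse signal x0 = (1, 0, 0): 2 measurements, 3 unknowns.
A = [[1.0, 0.3, 0.2],
     [0.1, 1.0, 0.4]]
y = [1.0, 0.1]
x_hat = ista(A, y, mu=0.05, step=0.5)
print([round(v, 3) for v in x_hat])
```

The step size must stay below 1/λmax(A^T A) for ISTA to converge; here 0.5 suffices for the toy matrix, and the recovered vector is 1-sparse with its first entry slightly shrunk toward zero by the penalty.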

3600 | Compressed sensing
- Donoho
- 2006
Citation Context ...1 Introduction. 1.1 Standard compressed sensing. Compressed sensing predicts that sparse signals can be reconstructed from what was previously believed to be incomplete information. The seminal papers [11, 12, 19] have triggered a large research activity in mathematics, engineering and computer science, with a lot of potential applications. ...

2712 | Atomic decomposition by basis pursuit
- Chen, Donoho, et al.
- 1998
Citation Context ...minimization which can be viewed as a convex relaxation of the l0-minimization. Three most renowned recovery algorithms based on convex relaxation proposed in the literature are: the Basis Pursuit (BP) [7], the Dantzig selector (DS) [15], and the LASSO estimator [53] (or Basis Pursuit Denoising [7]):

(BP): min_{f̃ ∈ Rn} ‖f̃‖1 subject to ‖Af̃ − y‖2 ≤ ε,
(DS): min_{f̃ ∈ Rn} ‖f̃‖1 subject to ‖A*(Af̃ − y)‖∞ ≤ λn...

2612 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context ...1 Introduction. 1.1 Standard compressed sensing. Compressed sensing predicts that sparse signals can be reconstructed from what was previously believed to be incomplete information. The seminal papers [11, 12, 19] have triggered a large research activity in mathematics, engineering and computer science, with a lot of potential applications. ...

1666 | Matching pursuits with time-frequency dictionaries
- Mallat, Zhang
- 1993
Citation Context ...vector in the feasible set of possible solutions, which leads to an l0-minimization problem. However, solving the l0-minimization directly is NP-hard in general and thus is computationally infeasible [43, 44]. It is then natural to consider the method of l1-minimization, which can be viewed as a convex relaxation of the l0-minimization. Three most renowned recovery algorithms based on convex relaxation propo...

1504 | Near optimal signal recovery from random projections: Universal encoding strategies
- Candès, Tao
- 2006
Citation Context ...SSO provided that A satisfies a RIP condition δ_cs ≤ δ for some constants c, δ > 0 and that the error bound ‖A*z‖∞ is small [15, 5, 16]. Recall that for an m × n matrix A and s ≤ n, the RIP constant δ_s [11, 14, 20] is defined as the smallest number δ such that for all s-sparse vectors x̃ ∈ Rn,

(1 − δ)‖x̃‖2^2 ≤ ‖Ax̃‖2^2 ≤ (1 + δ)‖x̃‖2^2.

So far, all good constructions of matrices with the RIP use randomness. It is w...

1388 | Decoding by linear programming
- Candes, Tao
- 2005
Citation Context ...rs with small or zero errors provided that the measurement matrix A satisfies a restricted isometry property (RIP) condition δ_cs ≤ δ for some constants c, δ > 0 and that the error bound ‖z‖2 is small [13, 12, 6, 16, 29, 41]. Similar results were obtained for the DS and the LASSO provided that A satisfies a RIP condition δ_cs ≤ δ for some constants c, δ > 0 and that the error bound ‖A*z‖∞ is small [15, 5, 16]. Recall that...

1386 | Stable signal recovery from incomplete and inaccurate measurements
- Candes, Romberg, et al.
- 2006
Citation Context ...1 Introduction. 1.1 Standard compressed sensing. Compressed sensing predicts that sparse signals can be reconstructed from what was previously believed to be incomplete information. The seminal papers [11, 12, 19] have triggered a large research activity in mathematics, engineering and computer science, with a lot of potential applications. ...

868 | The Dantzig selector: Statistical estimation when p ≫ n, Annals of Statistics
- Candès, Tao
- 2007
Citation Context ...d as a convex relaxation of the l0-minimization. Three most renowned recovery algorithms based on convex relaxation proposed in the literature are: the Basis Pursuit (BP) [7], the Dantzig selector (DS) [15], and the LASSO estimator [53] (or Basis Pursuit Denoising [7]):

(BP): min_{f̃ ∈ Rn} ‖f̃‖1 subject to ‖Af̃ − y‖2 ≤ ε,
(DS): min_{f̃ ∈ Rn} ‖f̃‖1 subject to ‖A*(Af̃ − y)‖∞ ≤ λn σ,
(LASSO): min_{f̃ ∈ Rn} (1/2)‖(Af̃...

764 | Cosamp: Iterative signal recovery from incomplete and inaccurate samples
- Needell, Tropp
- 2009
Citation Context ...compressed sensing based on pursuit algorithms in the literature, including Orthogonal Matching Pursuit (OMP) [48, 23], Stagewise OMP [24], Regularized OMP [47], Compressive Sampling Matching Pursuit [46], Iterative Hard Thresholding [2], Subspace Pursuit [22] and many other variants. Refer to [55] for an overview of these pursuit methods. 1.2 l1-synthesis. For signals which are sparse in the standard ...

682 | The restricted isometry property and its implications for compressed sensing, Comptes Rendus de l'Academie des Sciences
- Candes
- 2008
Citation Context ...rs with small or zero errors provided that the measurement matrix A satisfies a restricted isometry property (RIP) condition δ_cs ≤ δ for some constants c, δ > 0 and that the error bound ‖z‖2 is small [13, 12, 6, 16, 29, 41]. Similar results were obtained for the DS and the LASSO provided that A satisfies a RIP condition δ_cs ≤ δ for some constants c, δ > 0 and that the error bound ‖A*z‖∞ is small [15, 5, 16]. Recall that...

632 | Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition
- Pati, Rezaiifar, et al.
- 1993
Citation Context ...therein for more details on sparse noise. There are many other algorithmic approaches to compressed sensing based on pursuit algorithms in the literature, including Orthogonal Matching Pursuit (OMP) [48, 23], Stagewise OMP [24], Regularized OMP [47], Compressive Sampling Matching Pursuit [46], Iterative Hard Thresholding [2], Subspace Pursuit [22] and many other variants. Refer to [55] for an overview of...

625 | A simple proof of the Restricted Isometry Property for random matrices
- Baraniuk, Davenport, et al.
Citation Context ...d as the smallest number δ such that for all s-sparse vectors x̃ ∈ Rn,

(1 − δ)‖x̃‖2^2 ≤ ‖Ax̃‖2^2 ≤ (1 + δ)‖x̃‖2^2.

So far, all good constructions of matrices with the RIP use randomness. It is well known [14, 3, 42, 50] that many types of random measurement matrices, such as Gaussian or subgaussian matrices, have RIP constant δ_s ≤ δ with overwhelming probability provided that m ≥ C δ^{-2} s log(n/s). Up to the...

566 | For most large underdetermined systems of linear equations, the minimal ell-1 norm solution is also the sparsest solution
- Donoho
- 2006
Citation Context ...SSO provided that A satisfies a RIP condition δ_cs ≤ δ for some constants c, δ > 0 and that the error bound ‖A*z‖∞ is small [15, 5, 16]. Recall that for an m × n matrix A and s ≤ n, the RIP constant δ_s [11, 14, 20] is defined as the smallest number δ such that for all s-sparse vectors x̃ ∈ Rn,

(1 − δ)‖x̃‖2^2 ≤ ‖Ax̃‖2^2 ≤ (1 + δ)‖x̃‖2^2.

So far, all good constructions of matrices with the RIP use randomness. It is w...

554 | Sparse approximate solutions to linear systems
- Natarajan
- 1995
Citation Context ...vector in the feasible set of possible solutions, which leads to an l0-minimization problem. However, solving the l0-minimization directly is NP-hard in general and thus is computationally infeasible [43, 44]. It is then natural to consider the method of l1-minimization, which can be viewed as a convex relaxation of the l0-minimization. Three most renowned recovery algorithms based on convex relaxation propo...

471 | Simultaneous analysis of Lasso and Dantzig Selector
- Bickel, Ritov, et al.
Citation Context ...[13, 12, 6, 16, 29, 41]. Similar results were obtained for the DS and the LASSO provided that A satisfies a RIP condition δ_cs ≤ δ for some constants c, δ > 0 and that the error bound ‖A*z‖∞ is small [15, 5, 16]. Recall that for an m × n matrix A and s ≤ n, the RIP constant δ_s [11, 14, 20] is defined as the smallest number δ such that for all s-sparse vectors x̃ ∈ Rn, (1 − δ)‖x̃‖2^2 ≤ ‖Ax̃‖2^2 ≤ (1 + δ)‖x̃‖2^2. S...

425 | From sparse solutions of systems of equations to sparse modeling of signals and images
- Bruckstein, Donoho, et al.
- 2009
Citation Context ...is but in terms of an overcomplete dictionary, which means that our signal f ∈ Rn is now expressed as f = Dx, where D ∈ Rn×d (d ≥ n) is a redundant dictionary and x is (approximately) sparse; see e.g. [7, 4, 8] and the references therein. Examples include signal modeling in array signal processing (oversampled array steering matrix), reflected radar and sonar signals (Gabor frames), and images with curves (C...

325 | Iterative hard thresholding for compressed sensing
- Blumensath, Davies
- 2009
Citation Context ...t algorithms in the literature, including Orthogonal Matching Pursuit (OMP) [48, 23], Stagewise OMP [24], Regularized OMP [47], Compressive Sampling Matching Pursuit [46], Iterative Hard Thresholding [2], Subspace Pursuit [22] and many other variants. Refer to [55] for an overview of these pursuit methods. 1.2 l1-synthesis. For signals which are sparse in the standard coordinate basis or sparse in ter...

288 | Subspace pursuit for compressive sensing signal reconstruction
- Dai, Milenkovic
- 2009
Citation Context ...terature, including Orthogonal Matching Pursuit (OMP) [48, 23], Stagewise OMP [24], Regularized OMP [47], Compressive Sampling Matching Pursuit [46], Iterative Hard Thresholding [2], Subspace Pursuit [22] and many other variants. Refer to [55] for an overview of these pursuit methods. 1.2 l1-synthesis. For signals which are sparse in the standard coordinate basis or sparse in terms of some other orthon...

273 | Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit
- Donoho, Tsaig, et al.
Citation Context ...s on sparse noise. There are many other algorithmic approaches to compressed sensing based on pursuit algorithms in the literature, including Orthogonal Matching Pursuit (OMP) [48, 23], Stagewise OMP [24], Regularized OMP [47], Compressive Sampling Matching Pursuit [46], Iterative Hard Thresholding [2], Subspace Pursuit [22] and many other variants. Refer to [55] for an overview of these pursuit metho...

258 | On sparse reconstruction from Fourier and Gaussian measurements
- Rudelson, Vershynin
Citation Context ...d as the smallest number δ such that for all s-sparse vectors x̃ ∈ Rn,

(1 − δ)‖x̃‖2^2 ≤ ‖Ax̃‖2^2 ≤ (1 + δ)‖x̃‖2^2.

So far, all good constructions of matrices with the RIP use randomness. It is well known [14, 3, 42, 50] that many types of random measurement matrices, such as Gaussian or subgaussian matrices, have RIP constant δ_s ≤ δ with overwhelming probability provided that m ≥ C δ^{-2} s log(n/s). Up to the...

237 | Signal reconstruction from noisy random projections
- Haupt, Nowak
- 2006
Citation Context ...ume that the noise vector z ∼ N(0, σ²I), i.e., z is i.i.d. Gaussian noise, which is of particular interest in signal processing and in statistics. The case of Gaussian noise was first considered in [33], which examined the performance of l0-minimization with noisy measurements. Since Gaussian noise is essentially bounded (e.g. [15, 17]), all stable recovery results mentioned above for bounded er...

189 | Adaptive greedy approximations
- Davis, Mallat, et al.
- 1997
Citation Context ...therein for more details on sparse noise. There are many other algorithmic approaches to compressed sensing based on pursuit algorithms in the literature, including Orthogonal Matching Pursuit (OMP) [48, 23], Stagewise OMP [24], Regularized OMP [47], Compressive Sampling Matching Pursuit [46], Iterative Hard Thresholding [2], Subspace Pursuit [22] and many other variants. Refer to [55] for an overview of...

189 | Sparsest solutions of underdetermined linear system via ℓq-minimization for 0
- Foucart, Lai
Citation Context ...rs with small or zero errors provided that the measurement matrix A satisfies a restricted isometry property (RIP) condition δ_cs ≤ δ for some constants c, δ > 0 and that the error bound ‖z‖2 is small [13, 12, 6, 16, 29, 41]. Similar results were obtained for the DS and the LASSO provided that A satisfies a RIP condition δ_cs ≤ δ for some constants c, δ > 0 and that the error bound ‖A*z‖∞ is small [15, 5, 16]. Recall that...

188 | Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit,”
- Needell, Vershynin
- 2009
Citation Context ...re are many other algorithmic approaches to compressed sensing based on pursuit algorithms in the literature, including Orthogonal Matching Pursuit (OMP) [48, 23], Stagewise OMP [24], Regularized OMP [47], Compressive Sampling Matching Pursuit [46], Iterative Hard Thresholding [2], Subspace Pursuit [22] and many other variants. Refer to [55] for an overview of these pursuit methods. 1.2 l1-synthesis. F...

170 | Stable recovery of sparse overcomplete representations in the presence of noise
- Donoho, Elad, Temlyakov
- 2006
Citation Context ...ondition imposed by the standard compressed sensing assumptions. However, if D is a coherent frame, AD does not generally satisfy the standard RIP [49, 8]. Also, the mutual incoherence property (MIP) [21] may not apply, as it is very hard for AD to satisfy the MIP as well when D is highly correlated. 1.3 l1-analysis. An alternative to l1-synthesis is l1-analysis, which finds the estimator f̂ directly...

167 | Computational methods for sparse solution of linear inverse problems
- Tropp, Wright
- 2010
Citation Context ...Pursuit (OMP) [48, 23], Stagewise OMP [24], Regularized OMP [47], Compressive Sampling Matching Pursuit [46], Iterative Hard Thresholding [2], Subspace Pursuit [22] and many other variants. Refer to [55] for an overview of these pursuit methods. 1.2 l1-synthesis. For signals which are sparse in the standard coordinate basis or sparse in terms of some other orthonormal basis, the techniques above hold....

161 | Compressed sensing with coherent and redundant dictionaries. Applied and Computational Harmonic Analysis
- Candès, Eldar, et al.
- 2010
Citation Context ...is but in terms of an overcomplete dictionary, which means that our signal f ∈ Rn is now expressed as f = Dx, where D ∈ Rn×d (d ≥ n) is a redundant dictionary and x is (approximately) sparse; see e.g. [7, 4, 8] and the references therein. Examples include signal modeling in array signal processing (oversampled array steering matrix), reflected radar and sonar signals (Gabor frames), and images with curves (C...

135 | Compressed sensing and redundant dictionaries
- Rauhut, Schnass, et al.
- 2008
Citation Context ...signal modeling in array signal processing (oversampled array steering matrix), reflected radar and sonar signals (Gabor frames), and images with curves (Curvelet frames), etc. The l1-synthesis approach (e.g. [7, 49, 25]) consists in finding the sparsest possible coefficient vector x̂ by solving an l1-minimization problem (BP or LASSO) with the decoding matrix AD instead of A, and then reconstructing the signal by a synthesis ...

104 | Split Bregman methods and frame based image restoration, Multiscale Modeling and Simulation
- Cai, Osher, et al.

98 | Uniform uncertainty principle for Bernoulli and subgaussian ensembles
- Mendelson, Pajor, et al.
- 2007
Citation Context ...d as the smallest number δ such that for all s-sparse vectors x̃ ∈ Rn,

(1 − δ)‖x̃‖2^2 ≤ ‖Ax̃‖2^2 ≤ (1 + δ)‖x̃‖2^2.

So far, all good constructions of matrices with the RIP use randomness. It is well known [14, 3, 42, 50] that many types of random measurement matrices, such as Gaussian or subgaussian matrices, have RIP constant δ_s ≤ δ with overwhelming probability provided that m ≥ C δ^{-2} s log(n/s). Up to the...

95 | The widths of Euclidean balls
- Garnaev, Gluskin
- 1984
Citation Context ...ssian matrices or subgaussian matrices have RIP constant δ_s ≤ δ with overwhelming probability provided that m ≥ C δ^{-2} s log(n/s). Up to the constant, the lower bounds for Gelfand widths of l1-balls [31, 30] show that this dependence on n and s is optimal. The fast-multiply partial random Fourier matrix has RIP constant δ_s ≤ δ with very high probability provided that m ≥ C δ^{-2} s (log n)^4 [14, 50, 34]. In...

84 | Sparsity and smoothness via the fused
- Tibshirani, Saunders, et al.
- 2005
Citation Context ...directly by solving an l1-minimization problem. There are two most renowned analysis recovery algorithms proposed in the literature: the analysis Basis Pursuit (ABP) [8] and the analysis LASSO (ALASSO) [25, 54]:

(ABP): f̂ = argmin_{f̃ ∈ Rn} ‖D*f̃‖1 subject to ‖Af̃ − y‖2 ≤ ε,  (1.2)
(ALASSO): f̂_AL = argmin_{f̃ ∈ Rn} (1/2)‖Af̃ − y‖2^2 + µ‖D*f̃‖1.  (1.3)

Here µ is a tuning parameter, and ε is a measure of the noise l...
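Since D in the analysis model is a tight frame, a quick sanity check is that DD* equals a multiple of the identity, and the ALASSO objective in (1.3) is then straightforward to evaluate. A hedged pure-Python sketch using the Mercedes-Benz frame in R² (the measurement row `A`, the data `y`, and µ are invented for illustration):

```python
import math

# Mercedes-Benz frame: 3 unit vectors in R^2 at 120-degree spacing; D is 2 x 3.
angles = [math.pi / 2 + k * 2 * math.pi / 3 for k in range(3)]
D = [[math.cos(t) for t in angles],
     [math.sin(t) for t in angles]]

# Tight-frame check: D D^T should equal (3/2) * I.
DDt = [[sum(D[i][k] * D[j][k] for k in range(3)) for j in range(2)]
       for i in range(2)]

def alasso_objective(A, y, D, f, mu):
    """(1/2)||A f - y||_2^2 + mu * ||D^T f||_1  (real case, so D* = D^T)."""
    r = [sum(a * x for a, x in zip(row, f)) - yi for row, yi in zip(A, y)]
    analysis = [sum(D[i][j] * f[i] for i in range(2)) for j in range(3)]
    return 0.5 * sum(v * v for v in r) + mu * sum(abs(v) for v in analysis)

A = [[1.0, 0.0]]   # a single illustrative measurement row
y = [1.0]
print(DDt)
print(alasso_objective(A, y, D, [1.0, 0.0], 0.1))
```

For f = (1, 0) the residual vanishes and the analysis coefficients D^T f are the cosines of the three frame angles, so the objective reduces to µ·√3; note that because the penalty ‖D*f̃‖1 is not separable in f̃, plain coordinate-wise thresholding no longer applies, which is the point of specialized ALASSO solvers such as those cited in [32, 9, 40].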

76 | New and improved Johnson-Lindenstrauss embeddings via the restricted isometry property,”
- Krahmer, Ward
- 2011
Citation Context ...ies (1.6). Recall that an isotropic ψ2 vector a is one that satisfies, for all v, E|〈a, v〉|² = ‖v‖2^2 and inf{t : E exp(〈a, v〉²/t²) ≤ 2} ≤ α‖v‖2, for some constant α [42]. Very recently, Krahmer and Ward [35] showed that randomizing the column signs of any matrix that satisfies the standard RIP results in a matrix which satisfies the Johnson-Lindenstrauss lemma. Therefore, nearly all random matrix constru...

65 | Tight Oracle Bounds for Low-Rank Matrix Recovery from a Minimal Number of Random Measurements
- Candes, Plan
- 2009
Citation Context ...

P( ‖D*A*z‖∞ ≤ σ √(2(1 + α)(1 + δ_1) log d) ) = 1 − P( ‖D*A*z‖∞ > σ √(2(1 + α)(1 + δ_1) log d) ) ≥ 1 − 1/(d^α √((1 + α)π log d)).

5.2 Proof of Theorem 2.1. The proof makes use of the ideas from [8, 15, 6, 10]. Let f and f̂_ADS be as in the theorem, and let T0 = T denote the set of the s largest coefficients of D*f in magnitude. Set h = f̂_ADS − f and observe that by the triangle inequality ‖D*A*Ah‖∞ ≤ ‖D*A...

65 | The cosparse analysis model and algorithms
- Nam, Davies, et al.
- 2013
Citation Context ...argmin_{f̃ ∈ Rn} (1/2)‖Af̃ − y‖2^2 + µ‖D*f̃‖1. (1.3) Here µ is a tuning parameter, and ε is a measure of the noise level. Several works exist in the literature that are related to the analysis model (e.g. [25, 51, 8, 1, 39, 45]). It has been shown that the l1-analysis and l1-synthesis approaches are exactly equivalent when D is orthogonal; otherwise there is a remarkable difference between the two despite their apparent similari...

64 | Stability results for random sampling of sparse trigonometrical polynomials
- Rauhut
Citation Context ...l1-balls [31, 30] show that this dependence on n and s is optimal. The fast-multiply partial random Fourier matrix has RIP constant δ_s ≤ δ with very high probability provided that m ≥ C δ^{-2} s (log n)^4 [14, 50, 34]. In many common settings it is natural to assume that the noise vector z ∼ N(0, σ²I), i.e., z is i.i.d. Gaussian noise, which is of particular interest in signal processing and in statistics. The cas...

62 | Shifting inequality and recovery of sparse signals
- Cai, Wang, et al.
- 2010
Citation Context ...for the standard compressed sensing, we derive a similar result as in [15, Theorem 1.3]. (d) We have not tried to optimize the D-RIP condition. We expect that with a more complicated proof as in [6] or [29, 16, 26, 41], one can still improve this condition. The error bound (2.1) is within a log-like factor of the minimax risk over the class of vectors which are at most s-sparse in terms of D: Theorem 2.5. Let D be ...

60 | The split Bregman algorithm for L1 regularized problems
- Goldstein, Osher
Citation Context ...of the effectiveness of the analysis approach can be found in [25] for signal denoising and in [51] for signal and image restoration. Numerical algorithms have been proposed to solve the ALASSO, e.g. [32, 9, 40]. More recently, Candès et al. [8] showed that the ABP recovers a signal f̂ with an error bound

‖f̂ − f‖2 ≤ C0 ‖D*f − (D*f)_[s]‖1 / √s + C1 ε,  (1.4)

provided that A satisfies a restricted isometry proper...

47 | Signal restoration with overcomplete wavelet transforms: Comparison of analysis and synthesis priors
- Selesnick, Figueiredo
- 2009
Citation Context ...argmin_{f̃ ∈ Rn} (1/2)‖Af̃ − y‖2^2 + µ‖D*f̃‖1. (1.3) Here µ is a tuning parameter, and ε is a measure of the noise level. Several works exist in the literature that are related to the analysis model (e.g. [25, 51, 8, 1, 39, 45]). It has been shown that the l1-analysis and l1-synthesis approaches are exactly equivalent when D is orthogonal; otherwise there is a remarkable difference between the two despite their apparent similari...

32 | New bounds on the restricted isometry constant δ2k
- Li, Mo
- 2011

25 | Recovery of sparsely corrupted signals,”
- Studer, Kuppinger, et al.
- 2012
Citation Context ...o sparse [37]. This can occur in practice due to shot noise, malfunctioning hardware, transmission errors, or narrowband interference. Several recovery techniques have been developed for sparse noise [37, 52, 36]. We refer the readers to [37, 52, 36] and the references therein for more details on sparse noise. There are many other algorithmic approaches to compressed sensing based on pursuit algorithms in the ...

17 | The Gelfand widths of lp-balls for 0 < p ≤ 1
- Foucart, Pajor, et al.
Citation Context ...ssian matrices or subgaussian matrices have RIP constant δ_s ≤ δ with overwhelming probability provided that m ≥ C δ^{-2} s log(n/s). Up to the constant, the lower bounds for Gelfand widths of l1-balls [31, 30] show that this dependence on n and s is optimal. The fast-multiply partial random Fourier matrix has RIP constant δ_s ≤ δ with very high probability provided that m ≥ C δ^{-2} s (log n)^4 [14, 50, 34]. In...

15 | Analysis versus synthesis
- Elad, Milanfar, et al.
Citation Context ...signal modeling in array signal processing (oversampled array steering matrix), reflected radar and sonar signals (Gabor frames), and images with curves (Curvelet frames), etc. The l1-synthesis approach (e.g. [7, 49, 25]) consists in finding the sparsest possible coefficient vector x̂ by solving an l1-minimization problem (BP or LASSO) with the decoding matrix AD instead of A, and then reconstructing the signal by a synthesis ...

15 | On a generalization of the iterative soft-thresholding algorithm for the case of non-separable penalty
- Loris, Verhoeven
Citation Context ...of the effectiveness of the analysis approach can be found in [25] for signal denoising and in [51] for signal and image restoration. Numerical algorithms have been proposed to solve the ALASSO, e.g. [32, 9, 40]. More recently, Candès et al. [8] showed that the ABP recovers a signal f̂ with an error bound

‖f̂ − f‖2 ≤ C0 ‖D*f − (D*f)_[s]‖1 / √s + C1 ε,  (1.4)

provided that A satisfies a restricted isometry proper...

10 | On recovery of sparse signals via l1 minimization
- Cai, Xu, et al.
Citation Context ...statistics. The case of Gaussian noise was first considered in [33], which examined the performance of l0-minimization with noisy measurements. Since Gaussian noise is essentially bounded (e.g. [15, 17]), all stable recovery results mentioned above for bounded error related to the BP, the DS and the LASSO can be extended directly to the Gaussian noise case. While the BP and the DS (or the LASSO) pro...

9 | A note on guaranteed sparse recovery via l1-minimization
- Foucart
- 2010
Citation Context ...for the standard compressed sensing, we derive a similar result as in [15, Theorem 1.3]. (d) We have not tried to optimize the D-RIP condition. We expect that with a more complicated proof as in [6] or [29, 16, 26, 41], one can still improve this condition. The error bound (2.1) is within a log-like factor of the minimax risk over the class of vectors which are at most s-sparse in terms of D: Theorem 2.5. Let D be ...

9 | Compressed sensing with general frames via optimal-dual-based ℓ1-analysis
- Liu, Mi, et al.
Citation Context ...argmin_{f̃ ∈ Rn} (1/2)‖Af̃ − y‖2^2 + µ‖D*f̃‖1. (1.3) Here µ is a tuning parameter, and ε is a measure of the noise level. Several works exist in the literature that are related to the analysis model (e.g. [25, 51, 8, 1, 39, 45]). It has been shown that the l1-analysis and l1-synthesis approaches are exactly equivalent when D is orthogonal; otherwise there is a remarkable difference between the two despite their apparent similari...

7 | Perturbations of measurement matrices and dictionaries in compressed sensing
- Aldroubi, Chen, et al.

6 | Exact signal recovery from corrupted measurements through the pursuit of justice
- Laska, Davenport, et al.
Citation Context ...tioned recovery algorithms provide guarantees only for noise that is bounded or bounded with high probability. However, these algorithms perform suboptimally when the measurement noise is also sparse [37]. This can occur in practice due to shot noise, malfunctioning hardware, transmission errors, or narrowband interference. Several recovery techniques have been developed for sparse noise [37, 52, 36]....

3 | Real versus complex null space properties for sparse vector recovery
- Foucart, Gribonval
- 2010
Citation Context ...t this work to the setting of real-valued signals f ∈ Rn. For perspective, it is known that compressed sensing results ([6]), such as for the BP, are also valid for complex-valued signals f ∈ Cd, e.g., [27]. Note also that we have restricted to the tight frame case and that a signal being sparse in a non-tight frame is also interesting. 1.5 Notation. The following notation is used throughout this paper...

1 | Stability and robustness of ℓ1-minimization with Weibull matrices and redundant dictionaries
- Foucart
- 2013
Citation Context ...ing with general frames. Aldroubi et al. [1] showed that the ABP is robust to measurement noise, and stable with respect to perturbations of the measurement matrix A and the general frames D. Foucart [28] studied the ABP algorithm under the setting that the measurement matrices are Weibull random matrices. Recall that the D-RIP of a measurement matrix A, which first appeared in [8] and is a natural ex...

1 | Democracy in action: Quantization, saturation, and compressive
- Laska, Boufounos, et al.
Citation Context ...o sparse [37]. This can occur in practice due to shot noise, malfunctioning hardware, transmission errors, or narrowband interference. Several recovery techniques have been developed for sparse noise [37, 52, 36]. We refer the readers to [37, 52, 36] and the references therein for more details on sparse noise. There are many other algorithmic approaches to compressed sensing based on pursuit algorithms in the ...

1 | Compressed sensing with coherent tight frame via lq-minimization, Manuscript, Available at http://arxiv.org/abs/1105.3299
- Li, Lin
Citation Context ...(1.4) provided that A satisfies a restricted isometry property adapted to D (D-RIP) condition with δ_2s < 0.08, where D is a tight frame for Rn. Later, the D-RIP condition was improved to δ_2s < 0.493 [38]. Note that we denote x_[s] to be the vector consisting of the s largest coefficients of x ∈ Rd in magnitude, i.e. x_[s] is the best s-sparse approximation to the vector x. Following [8], Liu et al. [39...