## Compressive sensing (2007)

### Download Links

- [omni.isr.ist.utl.pt]
- [www.convexoptimization.com]
- [www.dsp.ece.rice.edu]
- [www-dsp.rice.edu]
- [arxiv.org]
- [www-ece.rice.edu]
- [www.ece.rice.edu]
- [iweb.tntech.edu]
- [www.merl.com]
- [human-robot.sysu.edu.cn]
- DBLP

### Other Repositories/Bibliography

Venue: IEEE Signal Processing Magazine

Citations: 671 (61 self)

### Citations

3542 | Compressed sensing
- Donoho
- 2006
Citation Context: ...se is that from M ≥ cK log(N/K) iid Gaussian measurements we can exactly reconstruct K-sparse vectors and closely approximate compressible vectors stably with high probability via the ℓ1 optimization [1, 2]: ŝ = argmin ‖s′‖1 such that Θs′ = y. (6) This is a convex optimization problem that conveniently reduces to a linear program known as basis pursuit [1, 2] whose computational complexity is about O...
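The ℓ1 optimization (6) quoted above reduces to a linear program via the standard split s = u − v; the sketch below shows that reduction using SciPy's `linprog`. The sizes N, K, M are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Theta, y):
    """Solve min ||s||_1 subject to Theta s = y as a linear program.

    Split s = u - v with u, v >= 0, so ||s||_1 = sum(u) + sum(v).
    """
    _, N = Theta.shape
    c = np.ones(2 * N)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([Theta, -Theta])  # equality constraint: Theta (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v

# Toy problem: recover a K-sparse vector from M ~ cK log(N/K) Gaussian measurements
rng = np.random.default_rng(0)
N, K, M = 64, 3, 24
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Theta = rng.standard_normal((M, N)) / np.sqrt(M)
s_hat = basis_pursuit(Theta, Theta @ s)
```

With these dimensions recovery is exact up to solver tolerance with high probability; the LP has 2N variables and M equality constraints.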

3136 | A Wavelet Tour of Signal Processing
- Mallat
- 1998
Citation Context: ...ere the representation (1) has just a few large coefficients and many small coefficients. Compressible signals are well approximated by K-sparse representations; this is the basis of transform coding [3]. For example, natural images tend to be compressible in the discrete cosine transform (DCT) and wavelet bases [3] on which the JPEG and JPEG-2000 compression standards are based. Audio signals and ma...

2681 | Atomic decomposition by basis pursuit
- Chen, Donoho, et al.
- 1998
Citation Context: ...andom variable. [25] This corresponds to a linear program that can be solved in polynomial time [2, 3]. Adaptations to deal with additive noise in y or x include basis pursuit with denoising (BPDN) [26], complexity-based regularization [27], and the Dantzig Selector [28]. The second approach finds the sparsest x agreeing with the measurements y through an iterative, greedy search. Algorithms such as ...

2559 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context: ...increasing the sampling rate or density beyond the current state-of-the-art is very expensive. In this lecture, we will learn about a new technique that tackles these issues using compressive sensing [1, 2]. We will replace the conventional sampling and reconstruction operations with a more general linear measurement scheme coupled with an optimization in order to acquire certain kinds of signals at a r...

1510 | Embedded image coding using zerotrees of wavelet coefficients
- Shapiro
- 1993
Citation Context: ...oth signal regions will give rise to regions of small wavelet coefficients. This “connected tree” property has been well-exploited in a number of wavelet-based processing [17, 36, 37] and compression [38, 39] algorithms. In this section, we will specialize the theory developed in Sections III and IV to a connected tree model T. A set of wavelet coefficients Ω forms a connected subtree if, whenever a coef...

1476 | Practical signal recovery from random projections - Candès, Romberg - 2005

1402 | An introduction to compressive sampling
- Candes, Wakin
- 2008
Citation Context: ...rank deficient, and hence loses information in general, it can be shown to preserve the information in sparse and compressible signals if it satisfies the so-called restricted isometry property (RIP) [3]. Intriguingly, a large class of random matrices have the RIP with high probability. To recover the signal from the compressive measurements y, we search for the sparsest coefficient vector α that a...

1367 | Decoding by linear programming
- Candes, Tao
- 2005
Citation Context: ...begin from the following concentration of measure for the largest singular value of an M × K submatrix ΦT, |T| = K, of an M × N matrix Φ with i.i.d. subgaussian entries that are properly normalized [25, 45, 46]: P(σmax(ΦT) > 1 + √(K/M) + τ + β) ≤ e^(−Mτ²/2). For large enough M, β ≪ 1; thus we ignore this small constant in the sequel. By letting τ = j(√(1 + ǫK) − 1 − √(K/M)) (with the appropriate value of j ...

856 | The Dantzig selector: statistical estimation when p is much larger than n
- Candès, Tao
- 2007
Citation Context: ...be solved in polynomial time [2, 3]. Adaptations to deal with additive noise in y or x include basis pursuit with denoising (BPDN) [26], complexity-based regularization [27], and the Dantzig Selector [28]. The second approach finds the sparsest x agreeing with the measurements y through an iterative, greedy search. Algorithms such as matching pursuit, orthogonal matching pursuit [29], StOMP [30], iter...

747 | Cosamp: Iterative signal recovery from incomplete and inaccurate samples
- Needell, Tropp
- 2009
Citation Context: ...ssible signals is independent of N. To take practical advantage of this new theory, we demonstrate how to integrate structured sparsity models into two state-of-the-art CS recovery algorithms, CoSaMP [11] and iterative hard thresholding (IHT) [12–16]. The key modification is surprisingly simple: we merely replace the nonlinear sparse approximation step in these greedy algorithms with a structured spar...

732 | An iterative thresholding algorithm for linear inverse problems with a sparsity constraint - Daubechies, Defrise, et al. - 2004

683 | The restricted isometry property and its implications for compressed sensing
- Candès
- 2008
Citation Context: ...y this expression to ‖x − x̂‖2 ≤ C1GK−s√2s + C2GK−s s−1/2 + C3‖n‖2. (9) For the recovery algorithm (6), we obtain a bound very similar to (8), albeit with the ℓ2-norm error component removed [32]. III. STRUCTURED SPARSITY AND COMPRESSIBILITY While many natural and manmade signals and images can be described to first-order as sparse or compressible, the support of their large coefficients ofte...

579 | The Concentration of Measure Phenomenon
- Ledoux
- 2001
Citation Context: ...begin from the following concentration of measure for the largest singular value of an M × K submatrix ΦT, |T| = K, of an M × N matrix Φ with i.i.d. subgaussian entries that are properly normalized [25, 45, 46]: P(σmax(ΦT) > 1 + √(K/M) + τ + β) ≤ e^(−Mτ²/2). For large enough M, β ≪ 1; thus we ignore this small constant in the sequel. By letting τ = j(√(1 + ǫK) − 1 − √(K/M)) (with the appropriate value of j ...

544 | Sparse approximate solutions to linear systems - Natarajan - 1995

509 | Image denoising using scale mixtures of Gaussians in the wavelet domain
- Portilla, Strela, et al.
- 2003
Citation Context: ...to the root. Moreover, smooth signal regions will give rise to regions of small wavelet coefficients. This “connected tree” property has been well-exploited in a number of wavelet-based processing [17, 36, 37] and compression [38, 39] algorithms. In this section, we will specialize the theory developed in Sections III and IV to a connected tree model T. A set of wavelet coefficients Ω forms a connected su...

413 | Wavelet-based statistical signal processing using hidden Markov models
- Crouse, Nowak, et al.
- 1998
Citation Context: ...on experiments. The first structured sparsity model accounts for the fact that the large wavelet coefficients of piecewise smooth signals and images tend to live on a rooted, connected tree structure [17]. Using the fact that the number of such trees is much smaller than (N choose K), the number of K-sparse ... [Fig. 1: (a) test signal, (b) CoSaMP (RMSE = 1.123), (c) ℓ1-optimization (RMSE = 0.751), (d) model-ba...]

357 | Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit
- Tropp, Gilbert, et al.
- 2006
Citation Context: ...ure enables signal recovery from a reduced number of CS measurements, both for the single signal case [7, 8] and the signal ensemble case [9], through the use of specially tailored recovery algorithms [7, 8, 35]. However, the robustness guarantees for such algorithms either are restricted to exactly sparse signals and noiseless measurements, do not have explicit bounds on the number of necessary measurements...

347 | An EM algorithm for wavelet-based image restoration - Figueiredo, Nowak - 2003

344 | Sampling Signals with Finite Rate of Innovation
- Vetterli, Marziliano, et al.
- 2002
Citation Context: ...measure it at a rate below Nyquist. An example of a practical analog compressive sensing system — a so-called “analog-to-information” converter — is given in [7]; there are interesting connections to [8]. [Figure 2: (a) A sparse vector s lies on a K-dimensional hyperplane aligned with the coordinate axes in R^N and thus close to the axes. (b) Compressive sensing recovery via ℓ2 minimizat...]

319 | Iterative hard thresholding for compressed sensing
- Blumensath, Davies
- 2009
Citation Context: ...ng all of the ℓ1 techniques mentioned above and the CoSaMP, SP, and IHT iterative techniques, offer provably stable signal recovery with performance close to optimal K-term approximation (recall (3)) [2, 3, 11, 16]. For a random Φ, all results hold with high probability. For a noise-free, K-sparse signal, these algorithms offer perfect recovery, meaning that the signal x̂ recovered from the compressive measurem...
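The IHT iteration referenced in this excerpt is a one-line update; the sketch below is a generic textbook variant (unit step size, Φ rescaled so its spectral norm is below 1, hypothetical sizes), not necessarily the exact algorithm of the cited paper.

```python
import numpy as np

def iht(Phi, y, K, iters=1000):
    """Iterative hard thresholding: x <- H_K(x + Phi^T (y - Phi x)),
    where H_K keeps only the K largest-magnitude entries."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x + Phi.T @ (y - Phi @ x)       # gradient step on ||y - Phi x||^2 / 2
        small = np.argsort(np.abs(x))[:-K]  # all but the K largest entries
        x[small] = 0.0
    return x

# Toy problem with well-separated nonzero magnitudes
rng = np.random.default_rng(1)
N, K, M = 64, 2, 32
x_true = np.zeros(N)
idx = rng.choice(N, K, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], K) * (1.0 + rng.random(K))
Phi = rng.standard_normal((M, N))
Phi /= 1.01 * np.linalg.norm(Phi, 2)  # spectral norm < 1 keeps the iteration stable
x_hat = iht(Phi, Phi @ x_true, K)
```

Each iteration costs two matrix-vector products plus a partial sort, which is why IHT scales to large N.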

270 | Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit. 2007. available online at http://www.dsp.ece.rice.edu/cs
- Donoho, Tsaig, et al.
Citation Context: ...ector [28]. The second approach finds the sparsest x agreeing with the measurements y through an iterative, greedy search. Algorithms such as matching pursuit, orthogonal matching pursuit [29], StOMP [30], iterative hard thresholding (IHT) [12–16], CoSaMP [11], and Subspace Pursuit (SP) [31] all revolve around a best L-term approximation for the estimated signal, with L varying for each algorithm; typ...

239 | Signal reconstruction from noisy random projections
- Haupt, Nowak
Citation Context: ...ds to a linear program that can be solved in polynomial time [2, 3]. Adaptations to deal with additive noise in y or x include basis pursuit with denoising (BPDN) [26], complexity-based regularization [27], and the Dantzig Selector [28]. The second approach finds the sparsest x agreeing with the measurements y through an iterative, greedy search. Algorithms such as matching pursuit, orthogonal matching...

214 | Robust recovery of signals from a structured union of subspaces
- Eldar, Mishali
- 2009
Citation Context: ...ential performance gains on a tree-compressible, piecewise smooth signal. The second structured sparsity model accounts for the fact that the large coefficients of many sparse signals cluster together [8, 9]. Such a so-called block sparse model is equivalent to a joint sparsity model for an ensemble of J length-N signals [10], where the supports of the signals’ large coefficients are shared across the e...

189 | Signal recovery from partial information via orthogonal matching pursuit
- Tropp, Gilbert
- 2005
Citation Context: ...roach to stability is to ensure that the measurement matrix Φ is incoherent with the sparsifying basis Ψ in the sense that the vectors {φj} cannot sparsely represent the vectors {ψi} and vice versa [1, 2, 4]. The classical example features delta spikes and Fourier sinusoids playing the roles of {φj} and {ψi}; the Fourier uncertainty principle immediately yields the incoherence. So, given a sparsifying ...
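Orthogonal matching pursuit, the algorithm cited here, greedily adds the column most correlated with the residual and re-solves a least-squares fit on the support so far. The sketch below uses hypothetical sizes with a generous measurement count so the greedy choices succeed.

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal matching pursuit: K rounds of greedy atom selection,
    each followed by a least-squares fit on the support built so far."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
N, K, M = 64, 2, 48
x_true = np.zeros(N)
idx = rng.choice(N, K, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], K) * (1.0 + rng.random(K))
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = omp(Phi, Phi @ x_true, K)
```

Because the least-squares step makes the residual orthogonal to every chosen column, no atom is ever selected twice.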

182 | Bayesian tree-structured image modeling using Wavelet-domain hidden Markov models
- Romberg, Choi, et al.
- 2001
Citation Context: ...to the root. Moreover, smooth signal regions will give rise to regions of small wavelet coefficients. This “connected tree” property has been well-exploited in a number of wavelet-based processing [17, 36, 37] and compression [38, 39] algorithms. In this section, we will specialize the theory developed in Sections III and IV to a connected tree model T. A set of wavelet coefficients Ω forms a connected su...

135 | Distributed compressed sensing
- Baron, Wakin, et al.
- 2006
Citation Context: ...space H: ŝ = argmin ‖s′‖0 such that Θs′ = y. (5) It can be shown that with just M = K + 1 iid Gaussian measurements, this optimization will recover a K-sparse signal exactly with high probability [6]. But unfortunately solving (5) is both numerically unstable and an NP-complete problem that requires an exhaustive enumeration of all (N choose K) possible combinations for the locations of the nonzero entries ...

129 | On the reconstruction of block-sparse signals with an optimal number of measurements
- Stojnic, Parvaresh, et al.
- 2009
Citation Context: ...ential performance gains on a tree-compressible, piecewise smooth signal. The second structured sparsity model accounts for the fact that the large coefficients of many sparse signals cluster together [8, 9]. Such a so-called block sparse model is equivalent to a joint sparsity model for an ensemble of J length-N signals [10], where the supports of the signals’ large coefficients are shared across the e...

108 | Sampling theorems for signals from the union of finite-dimensional linear subspaces
- Blumensath, Davies
- 2009
Citation Context: ...tured sparsity models for K-sparse signals and make precise how the structure reduces the number of potential sparse signal supports in α. Then using the model-based restricted isometry property from [6, 7], we prove that such structured sparse signals can be robustly recovered from noisy compressive measurements. Moreover, we quantify the required number of measurements M and show that for some structu...

101 | Compressive radar imaging
- Baraniuk, Steeghs
- 2007
Citation Context: ...coverage it is difficult, but very desirable, to achieve high resolution in a conventional SAR with a single pass observation. The development of compressive sensing (CS) and its application to radar [3], [4] can be used to improve this trade-off. Using CS-based theory and methods, signals can be reconstructed robustly using fewer measurements than their Nyquist sampling rate. In SAR systems, this tr...

96 | Uniform uncertainty principle for Bernoulli and subgaussian ensembles. Constructive Approximation 28
- Mendelson, Pajor, et al.
- 2008
Citation Context: ...subgaussian if there exists c > 0 such that E(e^(Xt)) ≤ e^(c²t²/2) for all t ∈ R. Examples include the Gaussian, Bernoulli, and Rademacher random variables, as well as any bounded random variable. [25] This corresponds to a linear program that can be solved in polynomial time [2, 3]. Adaptations to deal with additive noise in y or x include basis pursuit with denoising (BPDN) [26], complexity-base...

89 | Exploiting structure in Wavelet-based Bayesian compressive sampling - He, Carin - 2009

83 | Subspace pursuit for compressed sensing: Closing the gap between performance and complexity
- Dai, Milenkovic
Citation Context: ...rough an iterative, greedy search. Algorithms such as matching pursuit, orthogonal matching pursuit [29], StOMP [30], iterative hard thresholding (IHT) [12–16], CoSaMP [11], and Subspace Pursuit (SP) [31] all revolve around a best L-term approximation for the estimated signal, with L varying for each algorithm; typically L is O(K). D. Performance bounds on signal recovery: Given M = O(K log(N/K)) com...

76 | and best-ortho-basis: a connection
- Donoho
- 1997
Citation Context: ...e < K and computationally efficient to grow the optimal trees of size > K [33]. The constrained optimization (16) can be rewritten as an unconstrained problem by introducing the Lagrange multiplier λ [40]: min_{x̄ ∈ T̄} ‖x − x̄‖2² + λ(‖ᾱ‖0 − K), where T̄ = ∪_{n=1}^{N} Tn and ᾱ are the wavelet coefficients of x̄. Except for the inconsequential λK term, this optimization coincides with Donoho’s complexity ...

74 | A signal-dependent time-frequency representation: Fast algorithm for optimal kernel design
- Baraniuk, Jones
- 1994
Citation Context: ...N) computations. However, once the CSSA grows the optimal tree of size K, it is trivial to determine the optimal trees of size < K and computationally efficient to grow the optimal trees of size > K [33]. The constrained optimization (16) can be rewritten as an unconstrained problem by introducing the Lagrange multiplier λ [40]: min_{x̄ ∈ T̄} ‖x − x̄‖2² + λ(‖ᾱ‖0 − K), where T̄ = ∪_{n=1}^{N} Tn and ᾱ are...

58 | Analog-to-information conversion via random demodulation
- Kirolos, Laska, et al.
- 2006
Citation Context: ...dom, and we can apply the above theory to measure it at a rate below Nyquist. An example of a practical analog compressive sensing system — a so-called “analog-to-information” converter — is given in [7]; there are interesting connections to [8]. [Figure 2: (a) A sparse vector s lies on a K-dimensional hyperplane aligned with the coordinate axes in R^N and thus close to the axes. (b) Compressive sensing recovery via ℓ2 minimizat...]

58 | Sparse signal recovery using Markov random fields
- Cevher, Duarte, et al.
- 2008
Citation Context: ...ew CS recovery algorithm that promotes structure in the sparse representation by tailoring the recovered signal according to a sparsity-promoting probabilistic model, such as an Ising graphical model [5]. Such probabilistic models favor certain configurations for the magnitudes and indices of the significant coefficients of the signal. In this paper, we expand on this concept by introducing a model-b...

50 | Fast reconstruction of piecewise smooth signals from random projections - Duarte, Wakin, et al. - 2005

50 | Tree approximation and optimal encoding
- Cohen, Dahmen, et al.
- 2001
Citation Context: ...oth signal regions will give rise to regions of small wavelet coefficients. This “connected tree” property has been well-exploited in a number of wavelet-based processing [17, 36, 37] and compression [38, 39] algorithms. In this section, we will specialize the theory developed in Sections III and IV to a connected tree model T. A set of wavelet coefficients Ω forms a connected subtree if, whenever a coef...

49 | Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementations
- Cumming, Wong
- 2005
Citation Context: ...er area at much higher resolution by steering the antenna to always point to and illuminate the same spot in the ground. The trade-off between the two can be adjusted using sliding spotlight-mode SAR [1], [2]. An area larger than in stripmap-mode can be imaged using scan-mode SAR, in which the antenna is periodically steered to scan spots of different swaths, yielding a large multi-swath area at the ...

44 | Robust recovery of signals from a union of subspaces, 2008
- Eldar, Mishali
Citation Context: ...ogramming, and (d) the wavelet tree-based CoSaMP algorithm from Section V. In all figures, root mean-squared error (RMSE) values are normalized with respect to the ℓ2 norm of the signal. ...ter together [7, 8]. Such a so-called block sparse model is equivalent to a joint sparsity model for an ensemble of J length-N signals [9], where the supports of the signals’ large coefficients are shared across the en...

41 | Sparsity and compressed sensing in radar imaging
- Potter, Ertin, et al.
- 2010
Citation Context: ...age it is difficult, but very desirable, to achieve high resolution in a conventional SAR with a single pass observation. The development of compressive sensing (CS) and its application to radar [3], [4] can be used to improve this trade-off. Using CS-based theory and methods, signals can be reconstructed robustly using fewer measurements than their Nyquist sampling rate. In SAR systems, this transla...

36 | Wavelet-domain compressive signal reconstruction using a hidden Markov tree model - Duarte, Wakin, et al. - 2008

31 | Sampling signals from a union of subspaces - Lu, Do - 2008

30 | Recovery of clustered sparse signals from compressive measurements
- Cevher, Indyk, et al.
- 2009
Citation Context: ...embles using fewer measurements than the number required when each signal is recovered independently. Additional structured sparsity models have been developed using our general framework in [43] and [44]. There are many avenues for future work on model-based CS. We have only considered the recovery of signals from models that can be geometrically described as a union of subspaces; possible extensions...

29 | The Johnson-Lindenstrauss lemma meets compressed sensing
- Baraniuk, Davenport, et al.
Citation Context: ...ng Φ as a random matrix. For example, we draw the matrix elements φj,i as independent and identically distributed (iid) random variables from a zero-mean, 1/N-variance Gaussian density (white noise) [1, 2, 5]. Then, the measurements y are merely M different randomly weighted linear combinations of the elements of x (recall Figure 1(a) and note the random structure of Φ). A Gaussian Φ has two interesting a...
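The random Gaussian Φ described in this excerpt acts as a near-isometry on any fixed sparse vector, and the concentration is easy to observe numerically. In the sketch below the entries have variance 1/M rather than the excerpt's 1/N, so that E‖Φx‖² = ‖x‖²; the sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 256, 64, 4

# iid zero-mean Gaussian entries with variance 1/M, so E ||Phi x||^2 = ||x||^2
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# A fixed K-sparse test vector
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# ||Phi x||^2 / ||x||^2 is a chi-square_M / M variable: tightly concentrated near 1
ratio = np.linalg.norm(Phi @ x) / np.linalg.norm(x)
```

The standard deviation of `ratio` is roughly 1/√(2M), so larger M gives a tighter embedding.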

28 | Recovery of jointly sparse signals from few random projections
- Wakin, Sarvotham, et al.
- 2005
Citation Context: ...sly studied in CS applications, including DNA microarrays and magnetoencephalography [8, 9]. An equivalent problem arises in CS for signal ensembles, such as sensor networks and MIMO communication [9, 10, 41]. In this case, several signals share a common coefficient support set. For example, when a frequency-sparse acoustic signal is recorded by an array of microphones, then all of the recorded signals con...

22 | Optimal tree approximation with wavelets
- Baraniuk
- 1999
Citation Context: ...tion classes contain signals whose wavelet coefficients have a loose (and possibly interrupted) decay from coarse to fine scales. These classes have been well-characterized for wavelet-sparse signals [34, 35, 39] and are intrinsically linked with the Besov spaces B^s_q(Lp([0, 1])). Besov spaces contain functions of one or more continuous variables that have (roughly speaking) s derivatives in Lp([0, 1]); the...

22 | On random binary trees
- Brown, Shubert
- 1984
Citation Context: ...2 ≤ 2^(−i)‖x‖2 + 15‖n‖2. This completes the proof of Theorem 4. □ APPENDIX E, PROOF OF PROPOSITION 1: When K < log2 N, the number of subtrees of size K of a binary tree of size N is the Catalan number [47] TK,N = (1/(K + 1))·(2K choose K) ≤ (2e)^K/(K + 1), using Stirling’s approximation. When K > log2 N, we partition this count of subtrees into the numbers of subtrees tK,h of size K and height h, to obtain TK,N ...
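The count quoted in this proof excerpt is the Catalan number C_K = binom(2K, K)/(K + 1); a quick numeric check of the Stirling-based bound (2e)^K/(K + 1):

```python
from math import comb, e

def subtree_count(K):
    """Catalan number C_K: the T_{K,N} subtree count in the excerpt
    (number of size-K subtrees of a binary tree, valid for K < log2 N)."""
    return comb(2 * K, K) // (K + 1)  # exact: (K + 1) divides comb(2K, K)

# Verify the bound T_{K,N} <= (2e)^K / (K + 1) from the excerpt
bound_ok = all(subtree_count(K) <= (2 * e) ** K / (K + 1) for K in range(1, 25))
```

The (2e)^K growth, rather than the binomial (N choose K), is what makes tree-sparse recovery need only M = O(K) measurements.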

20 | Compressive sensing recovery of spike trains using a structured sparsity model
- Hegde, Duarte, Cevher
Citation Context: ...ignal ensembles using fewer measurements than the number required when each signal is recovered independently. Additional structured sparsity models have been developed using our general framework in [43] and [44]. There are many avenues for future work on model-based CS. We have only considered the recovery of signals from models that can be geometrically described as a union of subspaces; possible e...

19 | A compressed sensing camera: New theory and an implementation using digital micromirrors
- Takhar, Bansal, et al.
- 2006
Citation Context: ...e solution s. 7 Practical Example: Consider the “single-pixel” compressive digital camera of Figure 3(a) that directly acquires M random linear measurements without first collecting the N pixel values [9]. The incident lightfield corresponding to the desired image x is not focused onto a CCD or CMOS sampling array but rather reflected off a digital micromirror device (DMD) consisting of an array of N ...

19 | Tree-based orthogonal matching pursuit algorithm for signal reconstruction - La, Do - 2006

17 | Sampling theorems for signals from the union of linear subspaces, submitted to
- Blumensath, Davies
- 2008
Citation Context: ...se signals and make precise how the structure encoded in a signal model reduces the number of potential sparse signal supports in α. Then using the model-based restricted isometry property (RIP) from [5, 6], we prove that such model-sparse signals can be robustly recovered from noisy compressive measurements. Moreover, we quantify the required number of measurements M and show that for some models M is ...

14 | Model-based compressive sensing for signal ensembles - Duarte, Cevher, et al. - 2009

8 | and best-ortho-basis: A connection. The Annals of Statistics 25
- Donoho
- 1997
Citation Context: ...e < K and computationally efficient to grow the optimal trees of size > K [26]. The constrained optimization (16) can be rewritten as an unconstrained problem by introducing the Lagrange multiplier λ [33]: min_{x̄ ∈ T̄} ‖x − x̄‖2² + λ(‖ᾱ‖0 − K), where T̄ = ∪_{n=1}^{N} Tn and ᾱ are the wavelet coefficients of x̄. Except for the inconsequential λK term, this optimization coincides with Donoho’s complexit...

6 | Near best tree approximation
- Baraniuk, DeVore, et al.
- 2002
Citation Context: ...the largest energy and adds the subtree corresponding to the node’s energy to the estimated support as a supernode: a single node that provides a condensed representation of the corresponding subtree [35]. Condensing a large coefficient far down the tree accounts for the potentially large cost (in terms of the total budget of tree nodes K) of growing the tree to that point. Since the first step of the...

4 | High resolution SAR imaging using random pulse timing - Liu, Boufounos - 2011

3 | Random steerable arrays for synthetic aperture imaging
- Liu, Boufounos
- 2013
Citation Context: ...and sliding spotlight modes enables significant increase in the area covered without compromising resolution or, alternatively, significant increase in resolution without compromising range coverage [5]–[7]. In this paper, we extend and generalize our earlier work to scan-mode SAR, aiming to increase the imaging resolution while maintaining its large coverage using CS-based techniques. In [5], [7] w...

3 | Tree-based majorize-minimize algorithm for compressed sensing with sparse-tree prior - Do, La - 2007

2 | Iterative thresholding for sparse approximations - Blumensath, Davies - 2008

2 | Selecting good Fourier measurements for compressed sensing
- Lee, Bresler
- 2008
Citation Context: ...tructured sparse signal recovery algorithms [6–8, 18–23]; however, their approaches have either been ad hoc or focused on a single structured sparsity model. Most previous work on unions of subspaces [6, 7, 24] has focused exclusively on strictly sparse signals and has considered neither compressibility nor feasible recovery algorithms. A related CS modeling framework for structured sparse signals [9] colle...

2 | Synthetic aperture imaging using a randomly steered spotlight
- Liu, Boufounos
- 2013
Citation Context: ...sliding spotlight modes enables significant increase in the area covered without compromising resolution or, alternatively, significant increase in resolution without compromising range coverage [5]–[7]. In this paper, we extend and generalize our earlier work to scan-mode SAR, aiming to increase the imaging resolution while maintaining its large coverage using CS-based techniques. In [5], [7] we co...

1 | Fast reconstruction from incoherent projections - Baraniuk - 2005