Results 1–10 of 192
Compressive Sensing and Structured Random Matrices
Radon Series Comp. Appl. Math., 1–95, De Gruyter, 2011
Cited by 157 (18 self)
These notes give a mathematical introduction to compressive sensing, focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to providing conditions that ensure exact or approximate recovery of sparse vectors using ℓ1-minimization.
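As a rough, runnable illustration of the ℓ1-minimization recovery these notes study (not code from the notes; the dimensions, seed, and the LP formulation via SciPy's `linprog` are my own choices):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 60, 30, 4                      # ambient dimension, measurements, sparsity

# Gaussian measurement matrix and an s-sparse ground truth.
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
b = A @ x_true

# Basis pursuit min ||x||_1 s.t. Ax = b as an LP over z = [x; t]:
# minimize sum(t) subject to -t <= x <= t and Ax = b.
c = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])     # x - t <= 0 and -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
err = np.linalg.norm(x_hat - x_true)     # small when recovery succeeds
```

With m well above the s·log(n/s) scale used here, recovery should succeed with overwhelming probability and `err` sits at solver precision.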
ModifiedCS: Modifying compressive sensing for problems with partially known support
 in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2009
Cited by 126 (33 self)
Abstract—We study the problem of reconstructing a sparse signal from a limited number of its linear projections when a part of its support is known, although the known part may contain some errors. The “known” part of the support may be available from prior knowledge. Alternatively, in a problem of recursively reconstructing time sequences of sparse spatial signals, one may use the support estimate from the previous time instant as the “known” part. The idea of our proposed solution (modifiedCS) is to solve a convex relaxation of the following problem: find the signal that satisfies the data constraint and is sparsest outside of the known part of the support. We obtain sufficient conditions for exact reconstruction using modifiedCS. These are much weaker than those needed for compressive sensing (CS) when the sizes of the unknown part of the support and of errors in the known part are small compared to the support size. An important extension called regularized modifiedCS (RegModCS) is developed which also uses prior signal estimate knowledge. Simulation comparisons for both sparse and compressible signals are shown. Index Terms—Compressive sensing, modifiedCS, partially known support, prior knowledge, sparse reconstruction.
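A sketch of the modified-CS idea under stated assumptions (not the authors' code): reuse the basis-pursuit LP but zero out the ℓ1 cost on a known support set T, so the solver seeks the signal sparsest outside T. Dimensions and the choice of T are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, s = 60, 20, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
b = A @ x_true

T = support[:4]                          # "known" part of the support (4 of the 6 true indices)

# Modified-CS relaxation: min sum_{i not in T} t_i  s.t.  -t <= x <= t,  Ax = b.
w = np.ones(n)
w[T] = 0.0                               # no ell_1 penalty on the known support
c = np.concatenate([np.zeros(n), w])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n)
err = np.linalg.norm(res.x[:n] - x_true)
```

Because only 2 support indices are unknown, far fewer measurements suffice here than plain ℓ1 minimization would need for all 6.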
A Probabilistic and RIPless Theory of Compressed Sensing
, 2010
Cited by 95 (3 self)
This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all models — e.g. Gaussian, frequency measurements — discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) — they make use of a much weaker notion — or a random model for the signal. As an example, the paper shows that a signal with s nonzero entries can be faithfully recovered from about s log n Fourier coefficients that are contaminated with noise.
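The isotropy and incoherence properties the paper assumes can be checked numerically for the Fourier example it mentions. A small sketch of my own (not from the paper), averaging a_k a_k* over all rows of the DFT matrix to compute the expectation for a uniformly random row exactly:

```python
import numpy as np

n = 16
t = np.arange(n)
# Rows of the unnormalized DFT matrix: a_k[j] = exp(-2*pi*i*k*j/n).
F = np.exp(-2j * np.pi * np.outer(t, t) / n)

# Isotropy: with the row index k drawn uniformly, E[a_k a_k^*] equals the
# identity; averaging over all n rows computes this expectation exactly.
M = sum(np.outer(F[k], F[k].conj()) for k in range(n)) / n
isotropic = np.allclose(M, np.eye(n))

# Incoherence: every entry of every sensing vector has modulus 1, i.e. the
# rows are flat rather than spiky.
coherence = np.max(np.abs(F))
```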
Shifting Inequality and Recovery of Sparse Signals
 IEEE Transactions on Signal Processing
Cited by 64 (12 self)
Abstract—In this paper, we present a concise and coherent analysis of the constrained ℓ1 minimization method for stable recovery of high-dimensional sparse signals in both the noiseless and noisy cases. The analysis is surprisingly simple and elementary, while leading to strong results. In particular, it is shown that the sparse recovery problem can be solved via ℓ1 minimization under weaker conditions than what is known in the literature. A key technical tool is an elementary inequality, called the Shifting Inequality, which, for a given nonnegative decreasing sequence, bounds the ℓ2 norm of a subsequence in terms of the ℓ1 norm of another subsequence by shifting the elements to the upper end. Index Terms—ℓ1 minimization, restricted isometry property, shifting inequality, sparse recovery.
Dequantizing compressed sensing: When oversampling and non-Gaussian constraints combine
, 2009
Precise Undersampling Theorems
Cited by 60 (4 self)
Undersampling theorems state that we may gather far fewer samples than the usual sampling theorem requires while exactly reconstructing the object of interest – provided the object in question obeys a sparsity condition, the samples measure appropriate linear combinations of signal values, and we reconstruct with a particular nonlinear procedure. While there are many ways to crudely demonstrate such undersampling phenomena, we know of only one approach which precisely quantifies the true sparsity-undersampling tradeoff curve of standard algorithms and standard compressed sensing matrices. That approach, based on combinatorial geometry, predicts the exact location in the sparsity-undersampling domain where standard algorithms exhibit phase transitions in performance. We review the phase transition approach here and describe the broad range of cases where it applies. We also mention exceptions and state challenge problems for future research. Sample result: one can efficiently reconstruct a k-sparse signal of length N from n measurements, provided n ≳ 2k · log(N/n), for (k, n, N) large, k ≪ N.
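To get a feel for the quoted sample result, the boundary n = 2k·log(N/n) can be solved numerically; a toy sketch of mine (the values and the fixed-point scheme are not from the paper):

```python
import math

def boundary_measurements(k, N, iters=60):
    """Fixed-point iteration for n = 2*k*log(N/n), the boundary of the
    sample result quoted above (toy illustration only)."""
    n = 4.0 * k                          # arbitrary starting guess
    for _ in range(iters):
        n = 2 * k * math.log(N / n)
    return n

k, N = 100, 1_000_000
n = boundary_measurements(k, N)          # vastly fewer measurements than N samples
```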
Circulant and Toeplitz Matrices in Compressed Sensing
Cited by 54 (10 self)
Compressed sensing seeks to recover a sparse vector from a small number of linear and nonadaptive measurements. While most work so far focuses on Gaussian or Bernoulli random measurements, we investigate the use of partial random circulant and Toeplitz matrices in connection with recovery by ℓ1-minimization. In contrast to recent work in this direction, we allow the use of an arbitrary subset of rows of a circulant or Toeplitz matrix. Our recovery result predicts that the number of measurements needed to ensure sparse reconstruction by ℓ1-minimization with random partial circulant or Toeplitz matrices scales linearly in the sparsity up to a log factor in the ambient dimension. This represents a significant improvement over previous recovery results for such matrices. As a main tool for the proofs we use a new version of the noncommutative Khintchine inequality.
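Partial random circulant measurements of the kind analyzed here can be applied in O(n log n) time via the FFT; a minimal sketch of my own, with arbitrary sizes, cross-checked against the explicit matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 32, 12
g = rng.standard_normal(n)               # random generator (first column) of the circulant

def circulant_apply(g, x):
    """Multiply by the circulant matrix with first column g via the FFT:
    C @ x is the circular convolution ifft(fft(g) * fft(x))."""
    return np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)))

# Partial circulant measurement: keep an arbitrary subset of m rows.
rows = rng.choice(n, size=m, replace=False)
x = rng.standard_normal(n)
y = circulant_apply(g, x)[rows]

# Cross-check against the explicit circulant matrix (columns are cyclic shifts of g).
C = np.column_stack([np.roll(g, k) for k in range(n)])
```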
New bounds for restricted isometry constants
 LINGCHEN KONG, LEVENT TUNÇEL, NAIHUA XIU
, 2010
Cited by 54 (7 self)
Abstract—This paper discusses new bounds for restricted isometry constants in compressed sensing. Let Φ be an n × p real matrix and k be a positive integer with k ≤ n. One of the main results of this paper shows that if the restricted isometry constant δ_k of Φ satisfies δ_k < 0.307, then k-sparse signals are guaranteed to be recovered exactly via ℓ1 minimization when no noise is present, and k-sparse signals can be estimated stably in the noisy case. It is also shown that the bound cannot be substantially improved. An explicit example is constructed in which δ_k = (k−1)/(2k−1) < 0.5, but it is impossible to recover certain k-sparse signals. Index Terms—Compressed sensing, ℓ1 minimization, restricted isometry property, sparse signal recovery.
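For intuition about what a restricted isometry constant measures, δ_k can be computed by brute force on tiny matrices; my own sketch (exponential in k, so for illustration only):

```python
import numpy as np
from itertools import combinations

def rip_constant(A, k):
    """Brute-force restricted isometry constant of order k: the smallest
    delta with (1-delta)||x||^2 <= ||Ax||^2 <= (1+delta)||x||^2 for all
    k-sparse x. Checks every size-k column support, so tiny cases only."""
    n = A.shape[1]
    delta = 0.0
    for S in combinations(range(n), k):
        cols = A[:, list(S)]
        eig = np.linalg.eigvalsh(cols.T @ cols)
        delta = max(delta, abs(eig[0] - 1.0), abs(eig[-1] - 1.0))
    return delta

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 10)) / np.sqrt(20)   # columns have unit norm on average
d1, d2 = rip_constant(A, 1), rip_constant(A, 2)   # d1 <= d2: supports nest
```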
Compressed sensing: how sharp is the restricted isometry property?
, 2009
Cited by 51 (7 self)
Compressed sensing is a recent technique by which signals can be measured at a rate proportional to their information content, combining the important task of compression directly into the measurement process. Since its introduction in 2004 there have been hundreds of manuscripts on compressed sensing, a large fraction of which have focused on the design and analysis of algorithms to recover a signal from its compressed measurements. The Restricted Isometry Property (RIP) has become a ubiquitous property assumed in their analysis. We present the best known bounds on the RIP, and in the process illustrate the way in which the combinatorial nature of compressed sensing is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners.
Sparse unmixing of hyperspectral data
 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
, 2011
Cited by 51 (15 self)
Linear spectral unmixing is a popular tool in remotely sensed hyperspectral data interpretation. It aims at estimating the fractional abundances of pure spectral signatures (also called endmembers) in each mixed pixel collected by an imaging spectrometer. In many situations, the identification of the endmember signatures in the original data set may be challenging due to insufficient spatial resolution, mixtures happening at different scales, and the unavailability of completely pure spectral signatures in the scene. However, the unmixing problem can also be approached in a semisupervised fashion, i.e., by assuming that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance (e.g., spectra collected on the ground by a field spectroradiometer). Unmixing then amounts to finding the optimal subset of signatures in a (potentially very large) spectral library that can best model each mixed pixel in the scene.
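A toy version of this semisupervised setting — a pixel expressed against a known spectral library — using nonnegative least squares as a stand-in solver (my sketch with synthetic data; the paper's actual algorithms are sparsity-promoting):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
bands, lib_size = 50, 20
# Spectral library: columns are synthetic endmember signatures.
library = np.abs(rng.standard_normal((bands, lib_size)))

# A mixed pixel: nonnegative abundances over 3 library members, summing to 1.
abund_true = np.zeros(lib_size)
abund_true[[2, 7, 11]] = [0.5, 0.3, 0.2]
pixel = library @ abund_true

# Nonnegative least squares recovers the sparse abundance vector exactly in
# this noiseless, full-column-rank setting.
abund_hat, resid = nnls(library, pixel)
```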