Results 1 - 4 of 4
Robust Uncertainty Principles: Exact Signal Reconstruction From Highly Incomplete Frequency Information
, 2006
Abstract

Cited by 2632 (50 self)
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ) δ(t − τ) obeying |T| ≤ C_M · (log N)^(−1) · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem min_g Σ_{t=0}^{N−1} |g(t)| s.t. ĝ(ω) = f̂(ω) for all ω ∈ Ω.
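As an illustrative numerical sketch (not the paper's code): to keep the linear program real-valued, randomly chosen rows of an orthonormal DCT matrix stand in below for the random Fourier samples, and the ℓ1 problem is posed as a linear program via the standard splitting x = u − v with u, v ≥ 0. All dimensions and the random seed are arbitrary choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, S, M = 64, 4, 32  # signal length, number of spikes, number of samples

# Sparse spike train: S spikes at unknown locations with unknown amplitudes.
x0 = np.zeros(N)
x0[rng.choice(N, S, replace=False)] = rng.standard_normal(S)

# M randomly chosen rows of an orthonormal DCT-II matrix stand in for the
# paper's randomly chosen Fourier samples (real-valued analogue).
k = np.arange(N)
dct = np.sqrt(2.0 / N) * np.cos(np.pi * k[:, None] * (k[None, :] + 0.5) / N)
dct[0, :] /= np.sqrt(2.0)  # scale row k = 0 so the matrix is orthonormal
A = dct[rng.choice(N, M, replace=False)]
b = A @ x0

# Basis pursuit  min ||x||_1  s.t.  Ax = b  as a linear program:
# write x = u - v with u, v >= 0 and minimize 1^T (u + v).
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print(np.max(np.abs(x_hat - x0)))  # reconstruction error
```

With S much smaller than M, the ℓ1 solution typically coincides with the original spike train up to solver tolerance, mirroring the exact-recovery statement above.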
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
Abstract

Cited by 1513 (20 self)
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power-law (or if the coefficient sequence of f in a fixed basis decays like a power-law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|(1) ≥ |f|(2) ≥ ... ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^(−1/p). We take measurements 〈f, Xk〉, k = 1, ..., K, where the Xk are N-dimensional Gaussian vectors with independent standard normal entries.
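A minimal numerical sketch of this setup, assuming only NumPy and SciPy: a compressible vector whose sorted entries obey the power decay law is measured against K Gaussian vectors and reconstructed by ℓ1 minimization, posed as a linear program. The exponent p, dimensions, and K below are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, K, p = 128, 48, 0.5  # ambient dimension, measurements, decay exponent

# Compressible signal: |f|(n) <= n^(-1/p), with random order and signs.
mags = np.arange(1, N + 1) ** (-1.0 / p)
f = rng.permutation(mags * rng.choice([-1, 1], size=N))

# K random measurements y_k = <f, X_k>, X_k with i.i.d. N(0, 1) entries.
X = rng.standard_normal((K, N))
y = X @ f

# Reconstruct via  min ||g||_1  s.t.  Xg = y  (LP with g = u - v, u, v >= 0).
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([X, -X]), b_eq=y, bounds=(0, None))
f_sharp = res.x[:N] - res.x[N:]

err = np.linalg.norm(f_sharp - f)
# Benchmark: the error of the best K-term approximation (oracle that keeps
# the K largest entries), which no method using K pieces of data can beat
# by much.
oracle = np.linalg.norm(np.sort(np.abs(f))[::-1][K:])
print(err, oracle)
```

The reconstruction error is typically within a modest factor of the oracle's best-K-term error, which is the "near optimal" behavior the title refers to.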
Nearly optimal signal recovery from random projections: Universal encoding strategies?
 IEEE TRANS. INFO. THEORY
, 2006
Abstract

Cited by 1 (0 self)
Suppose we are given a vector f in a class F, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|(n) ≤ R · n^(−1/p), where R > 0 and p > 0. Suppose that we take measurements yk = 〈f, Xk〉, k = 1, ..., K, where the Xk are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f♯, defined as the solution to the constraints
Stable Signal Recovery from Incomplete and Inaccurate Measurements
, 2005
Abstract
Suppose we wish to recover a vector x0 ∈ R^m (e.g. a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n by m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? To recover x0, we consider the solution x♯ to the ℓ1-regularization problem min ‖x‖ℓ1 subject to ‖Ax − y‖ℓ2 ≤ ɛ, where ɛ is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x0 is sufficiently sparse, then the solution is within the noise level: ‖x♯ − x0‖ℓ2 ≤ C · ɛ. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A's provided that the number of nonzeros of x0 is of about the same order as the number of observations. As a second instance, suppose one observes few Fourier samples of x0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/[log m]^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights on the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can very nearly recover approximately sparse signals.
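The constrained program above is a second-order cone program rather than an LP. As an illustrative stand-in, not the authors' method, the sketch below solves the closely related Lagrangian (LASSO) form, min ½‖Ax − y‖²ℓ2 + λ‖x‖ℓ1, with a plain ISTA (iterative soft-thresholding) loop in NumPy; the dimensions, noise level, and λ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, s = 80, 256, 8  # n << m: far fewer observations than unknowns
A = rng.standard_normal((n, m)) / np.sqrt(n)  # columns roughly unit-normed

# Sparse ground truth and contaminated observations y = A x0 + e.
x0 = np.zeros(m)
x0[rng.choice(m, s, replace=False)] = rng.standard_normal(s)
e = 0.01 * rng.standard_normal(n)
y = A @ x0 + e

# ISTA: gradient step on 0.5 ||Ax - y||_2^2, then soft-thresholding,
# which is the proximal operator of lam * ||x||_1.
lam = 0.05
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
x = np.zeros(m)
for _ in range(3000):
    z = x - A.T @ (A @ x - y) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print(np.linalg.norm(x - x0))  # stable: error on the order of the noise
```

Consistent with the stability bound ‖x♯ − x0‖ℓ2 ≤ C · ɛ, the recovery error stays comparable to the perturbation rather than blowing up, and shrinks to zero as the noise vanishes.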