Results 1–10 of 58
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Abstract

Cited by 1513 (20 self)
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power law (or if the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|(1) ≥ |f|(2) ≥ … ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^(−1/p). We take measurements ⟨f, Xk⟩, k = 1, …, K, where the Xk are N-dimensional Gaussian vectors with independent standard normal entries.
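The setup in this result is easy to simulate. Below is a minimal sketch (my illustration, not code from the paper), assuming numpy and cvxpy: a signal whose reordered entries obey the stated power decay is measured against K Gaussian vectors Xk and reconstructed by ℓ1 minimization.

```python
# Minimal sketch: recover a compressible signal from K random Gaussian
# measurements y_k = <f, X_k> via l1 minimization (assumes numpy + cvxpy).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, K, p = 512, 200, 0.5

# Compressible signal: reordered entry magnitudes obey |f|_(n) <= C * n^(-1/p).
magnitudes = np.arange(1, N + 1) ** (-1.0 / p)
f = rng.permutation(magnitudes) * rng.choice([-1.0, 1.0], size=N)

X = rng.standard_normal((K, N))   # rows are the Gaussian vectors X_k
y = X @ f                         # the K linear measurements <f, X_k>

g = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm(g, 1)), [X @ g == y]).solve()

print("relative l2 error:", np.linalg.norm(g.value - f) / np.linalg.norm(f))
```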
Compressive sampling
, 2006
"... Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired res ..."
Abstract

Cited by 1441 (15 self)
Conventional wisdom and common practice in the acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, i.e. the number of pixels in the image. This paper surveys an emerging theory which goes by the name of “compressive sampling” or “compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately, and sometimes even exactly, from a number of samples which is far smaller than the desired resolution of the image/signal, e.g. the number of pixels in the image. Compressive sampling is believed to have far-reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than was previously considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously. In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.
Decoding by Linear Programming
, 2004
"... This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ Rn from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to rec ..."
Abstract

Cited by 1399 (16 self)
This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem (with ‖x‖ℓ1 := Σi |xi|)

min_{g ∈ R^n} ‖y − Ag‖ℓ1

provided that the support of the vector of errors is not too large: ‖e‖ℓ0 := |{i : ei ≠ 0}| ≤ ρ · m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work [5]. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.
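As a rough illustration of the decoding program min_{g} ‖y − Ag‖ℓ1, here is a hedged sketch (numpy and cvxpy are my choices, not the paper's): a tall random coding matrix, a sparse arbitrary corruption, and recovery of f by the ℓ1 decoder.

```python
# Minimal sketch of the l1 decoder min_g ||y - A g||_1 for y = A f + e
# with a sparse, arbitrary error vector e (assumes numpy + cvxpy).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n = 512, 128                                    # m x n coding matrix, m > n
A = rng.standard_normal((m, n))
f = rng.standard_normal(n)

e = np.zeros(m)
bad = rng.choice(m, size=m // 10, replace=False)   # corrupt ~10% of entries
e[bad] = 100.0 * rng.standard_normal(bad.size)     # arbitrary gross errors

y = A @ f + e
g = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(y - A @ g, 1))).solve()

print("max |g - f|:", np.abs(g.value - f).max())   # ~0: exact recovery
```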
Stable signal recovery from incomplete and inaccurate measurements
 Comm. Pure Appl. Math.
, 2006
"... Abstract Suppose we wish to recover a vector x 0 ∈ R m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax 0 + e; A is an n × m matrix with far fewer rows than columns (n m) and e is an error term. Is it possible to recover x 0 accurately based on the data y? To r ..."
Abstract

Cited by 1397 (38 self)
Suppose we wish to recover a vector x0 ∈ R^m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? To recover x0, we consider the solution x* to the ℓ1-regularization problem

min ‖x‖ℓ1 subject to ‖Ax − y‖ℓ2 ≤ ɛ,

where ɛ is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x0 is sufficiently sparse, then the solution is within the noise level: ‖x* − x0‖ℓ2 ≤ C · ɛ. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A's provided that the number of nonzeros of x0 is of about the same order as the number of observations. As a second instance, suppose one observes few Fourier samples of x0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/(log m)^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals.
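The ℓ1-regularization program above is a small convex problem; here is a hedged numerical sketch (numpy and cvxpy are assumptions on my part, as is the noise bound ɛ used) that solves min ‖x‖ℓ1 subject to ‖Ax − y‖ℓ2 ≤ ɛ for a sparse x0 observed through a Gaussian A.

```python
# Minimal sketch: stable recovery of a sparse x0 from noisy y = A x0 + e
# by solving min ||x||_1 s.t. ||A x - y||_2 <= eps (assumes numpy + cvxpy).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, m, k = 128, 512, 10                         # n observations, m unknowns, k nonzeros
A = rng.standard_normal((n, m)) / np.sqrt(n)   # columns roughly unit-normed

x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)

sigma = 0.01
y = A @ x0 + sigma * rng.standard_normal(n)
eps = 1.1 * sigma * np.sqrt(n)                 # an assumed bound on ||e||_2

x = cp.Variable(m)
cp.Problem(cp.Minimize(cp.norm(x, 1)), [cp.norm(A @ x - y, 2) <= eps]).solve()

print("||x - x0||_2 =", np.linalg.norm(x.value - x0), "vs eps =", eps)
```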
The Dantzig selector: statistical estimation when p is much larger than n
, 2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ R p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ..."
Abstract

Cited by 879 (14 self)
In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ R^p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ p, and the zi's are i.i.d. N(0, σ²). Is it possible to estimate x reliably based on the noisy data y? To estimate x, we introduce a new estimator, the Dantzig selector, which is the solution to the ℓ1-regularization problem

min_{x̃ ∈ R^p} ‖x̃‖ℓ1 subject to ‖A^T r‖ℓ∞ ≤ (1 + t^(−1)) √(2 log p) · σ,

where r is the residual vector y − Ax̃ and t is a positive scalar. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector x is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability

‖x̂ − x‖²ℓ2 ≤ C² · 2 log p · (σ² + Σi min(xi², σ²)).

Our results are nonasymptotic and we give values for the constant C. In short, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle which would supply perfect information about which coordinates are nonzero, and which were above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible nearly to select the best subset of variables by solving a very simple convex program, which in fact can easily be recast as a convenient linear program (LP).
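The Dantzig selector is itself a linear program in disguise; the sketch below (a hedged illustration with numpy and cvxpy, not the authors' code) solves the stated ℓ1 program with the residual-correlation constraint.

```python
# Minimal sketch of the Dantzig selector: min ||x||_1 subject to
# ||A^T (y - A x)||_inf <= (1 + 1/t) sqrt(2 log p) sigma  (numpy + cvxpy).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, p, k = 72, 256, 8
A = rng.standard_normal((n, p)) / np.sqrt(n)   # roughly unit-normed columns

x0 = np.zeros(p)
x0[rng.choice(p, size=k, replace=False)] = 5.0 * rng.standard_normal(k)

sigma, t = 0.05, 2.0
y = A @ x0 + sigma * rng.standard_normal(n)
lam = (1 + 1 / t) * np.sqrt(2 * np.log(p)) * sigma

x = cp.Variable(p)
cp.Problem(cp.Minimize(cp.norm(x, 1)),
           [cp.norm(A.T @ (y - A @ x), "inf") <= lam]).solve()

print("squared l2 loss:", float(np.sum((x.value - x0) ** 2)))
```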
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
, 2007
"... A fullrank matrix A ∈ IR n×m with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combin ..."
Abstract

Cited by 427 (36 self)
A full-rank matrix A ∈ R^(n×m) with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical …
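One family of the "concrete, effective computational methods" this abstract alludes to is greedy pursuit. The sketch below implements orthogonal matching pursuit in plain numpy as one such illustration (my choice of method for the example; the review also covers ℓ1 minimization, among others).

```python
# Minimal sketch: orthogonal matching pursuit, one concrete method for
# seeking a sparse solution of an underdetermined system A x = b (numpy).
import numpy as np

def omp(A, b, k):
    """Greedily pick k columns of A, refitting by least squares each step."""
    m = A.shape[1]
    residual, support = b.astype(float), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(m)
    x[support] = coef
    return x

rng = np.random.default_rng(4)
n, m, k = 64, 256, 5
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
print("exact recovery:", np.allclose(omp(A, A @ x0, k), x0, atol=1e-8))
```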
Robust Recovery of Signals From a Structured Union of Subspaces
, 2008
"... Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structu ..."
Abstract

Cited by 221 (47 self)
Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modelled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose nonzero elements appear in fixed blocks. We then propose a mixed ℓ2/ℓ1 program for block-sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of the block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.
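A hedged sketch of the mixed ℓ2/ℓ1 program follows (numpy and cvxpy assumed, equal block sizes chosen for simplicity): minimize the sum of the blocks' ℓ2 norms subject to consistency with the samples.

```python
# Minimal sketch of mixed l2/l1 block-sparse recovery: minimize the sum of
# per-block l2 norms subject to A x = y (numpy + cvxpy, equal-size blocks).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
d, m, k, n = 4, 32, 3, 48               # m blocks of length d, k of them active
N = d * m
A = rng.standard_normal((n, N)) / np.sqrt(n)

x0 = np.zeros(N)
for b in rng.choice(m, size=k, replace=False):
    x0[b * d:(b + 1) * d] = rng.standard_normal(d)   # dense within the block
y = A @ x0

x = cp.Variable(N)
objective = sum(cp.norm(x[b * d:(b + 1) * d], 2) for b in range(m))
cp.Problem(cp.Minimize(objective), [A @ x == y]).solve()

print("recovery error:", np.linalg.norm(x.value - x0))
```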
For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution
 Comm. Pure Appl. Math.
, 2004
"... We consider inexact linear equations y ≈ Φα where y is a given vector in R n, Φ is a given n by m matrix, and we wish to find an α0,ɛ which is sparse and gives an approximate solution, obeying �y − Φα0,ɛ�2 ≤ ɛ. In general this requires combinatorial optimization and so is considered intractable. On ..."
Abstract

Cited by 122 (1 self)
We consider inexact linear equations y ≈ Φα where y is a given vector in R^n, Φ is a given n by m matrix, and we wish to find an α0,ɛ which is sparse and gives an approximate solution, obeying ‖y − Φα0,ɛ‖2 ≤ ɛ. In general this requires combinatorial optimization and so is considered intractable. On the other hand, the ℓ1-minimization problem min ‖α‖1 subject to ‖y − Φα‖2 ≤ ɛ is convex, and is considered tractable. We show that for most Φ the solution α̂1,ɛ = α̂1,ɛ(y, Φ) of this problem is quite generally a good approximation for α0,ɛ. We suppose that the columns of Φ are normalized to unit ℓ2 norm and we place uniform measure on such Φ. We study the underdetermined case where m ∼ An, A > 1, and prove the existence of ρ = ρ(A) and C > 0 so that for large n, and for all Φ's except a negligible fraction, the following approximate sparse solution property of Φ holds: for every y having an approximation ‖y − Φα0‖2 ≤ ɛ by a coefficient vector α0 ∈ R^m with fewer than ρ · n nonzeros, we have ‖α̂1,ɛ − α0‖2 ≤ C · ɛ. This has two implications. First: for most Φ, whenever the combinatorial optimization result α0,ɛ would be very sparse, α̂1,ɛ is a good approximation to α0,ɛ. Second: suppose we are given noisy data obeying y = Φα0 + z where the unknown α0 is known to be sparse and the noise obeys ‖z‖2 ≤ ɛ. For most Φ, noise-tolerant ℓ1-minimization will stably recover α0 from y in the presence of noise z. We also study the barely determined case m = n and reach parallel conclusions by slightly different arguments. The techniques include the use of almost-spherical sections in Banach space theory and concentration of measure for eigenvalues of random matrices.
Error Correction via Linear Programming
, 2005
"... Suppose we wish to transmit a vector f ∈ Rn reliably. A frequently discussed approach consists in encoding f with an m by n coding matrix A. Assume now that a fraction of the entries of Af are corrupted in a completely arbitrary fashion. We do not know which entries are affected nor do we know how t ..."
Abstract

Cited by 107 (7 self)
Suppose we wish to transmit a vector f ∈ R^n reliably. A frequently discussed approach consists in encoding f with an m by n coding matrix A. Assume now that a fraction of the entries of Af are corrupted in a completely arbitrary fashion. We do not know which entries are affected nor do we know how they are affected. Is it possible to recover f exactly from the corrupted m-dimensional vector y? This paper proves that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem (with ‖x‖ℓ1 := Σi |xi|)

min_{g ∈ R^n} ‖y − Ag‖ℓ1

provided that the fraction of corrupted entries is not too large, i.e. does not exceed some strictly positive constant ρ* (numerical values for ρ* are given). In other words, f can be recovered exactly by solving a simple convex optimization problem; in fact, a linear program. We report on numerical experiments suggesting that ℓ1-minimization is amazingly effective; f is recovered exactly even in situations where a very significant fraction of the output is corrupted.
The Littlewood–Offord problem and invertibility of random matrices
 Adv. Math.
, 2008
"... Abstract We prove two basic conjectures on the distribution of the smallest singular value of random n×n matrices with independent entries. Under minimal moment assumptions, we show that the smallest singular value is of order n −1/2 , which is optimal for Gaussian matrices. Moreover, we give a opt ..."
Abstract

Cited by 105 (18 self)
We prove two basic conjectures on the distribution of the smallest singular value of random n×n matrices with independent entries. Under minimal moment assumptions, we show that the smallest singular value is of order n^(−1/2), which is optimal for Gaussian matrices. Moreover, we give an optimal estimate on the tail probability. This comes as a consequence of a new and essentially sharp estimate in the Littlewood–Offord problem: for i.i.d. random variables Xk and real numbers ak, determine the probability p that the sum Σk ak Xk lies near some number v. For arbitrary coefficients ak of the same order of magnitude, we show that they essentially lie in an arithmetic progression of length 1/p.
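The n^(−1/2) scaling is easy to probe empirically. The following sketch (plain numpy, my illustration rather than anything from the paper) estimates the median smallest singular value of Gaussian n×n matrices and checks that √n times it stays roughly constant.

```python
# Minimal empirical check of the n^(-1/2) scaling of the smallest singular
# value of an n x n matrix with i.i.d. standard normal entries (numpy).
import numpy as np

rng = np.random.default_rng(6)
for n in (100, 400, 1600):
    smin = [np.linalg.svd(rng.standard_normal((n, n)), compute_uv=False)[-1]
            for _ in range(20)]
    # If s_min ~ c / sqrt(n), the printed ratio is roughly constant in n.
    print(n, np.median(smin) * np.sqrt(n))
```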