Results 1-10 of 59
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
Abstract

Cited by 1513 (20 self)
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ε? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power law (or if the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|_(1) ≥ |f|_(2) ≥ ... ≥ |f|_(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|_(n) ≤ C · n^(−1/p). We take measurements ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian
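The recovery procedure this abstract describes, reconstructing a sparse f from a few random Gaussian measurements by ℓ1 minimization, can be sketched numerically. The following is a minimal illustration (not the paper's own code): the dimensions, sparsity level, and random seed are arbitrary choices, and the ℓ1 problem is recast as a linear program for scipy's solver.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

N, K, s = 50, 25, 3   # ambient dimension, number of measurements, sparsity (arbitrary)
f = np.zeros(N)
f[rng.choice(N, s, replace=False)] = rng.standard_normal(s)  # sparse signal

X = rng.standard_normal((K, N))   # rows play the role of the Gaussian vectors X_k
y = X @ f                         # measurements <f, X_k>

# min ||g||_1 subject to X g = y, written as an LP over variables [g; u]:
# minimize sum(u) subject to -u <= g <= u and X g = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[np.eye(N), -np.eye(N)],     #  g - u <= 0
                 [-np.eye(N), -np.eye(N)]])   # -g - u <= 0
b_ub = np.zeros(2 * N)
A_eq = np.hstack([X, np.zeros((K, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
f_hat = res.x[:N]
print(np.max(np.abs(f_hat - f)))  # recovery error
```

With these (generous) parameter choices the ℓ1 minimizer typically coincides with f, so the printed error is at the solver's numerical precision.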
Decoding by Linear Programming
, 2004
Abstract

Cited by 1399 (16 self)
This paper considers the classical error-correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem min_{g ∈ R^n} ‖y − Ag‖_ℓ1 (where ‖x‖_ℓ1 := Σ_i |x_i|), provided that the support of the vector of errors is not too large: ‖e‖_ℓ0 := |{i : e_i ≠ 0}| ≤ ρ · m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work [5]. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.
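The linear-program recast mentioned in the abstract is standard: min_{g} ‖y − Ag‖_ℓ1 becomes min Σ t_i subject to −t ≤ y − Ag ≤ t. A minimal numerical sketch (my own illustration, with arbitrary sizes and corruption fraction, not the paper's experiments):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

n, m, k = 10, 40, 4               # input size, codeword size, number of corruptions
A = rng.standard_normal((m, n))   # Gaussian coding matrix
f = rng.standard_normal(n)

e = np.zeros(m)
e[rng.choice(m, k, replace=False)] = 10 * rng.standard_normal(k)  # gross, arbitrary errors
y = A @ f + e

# min sum(t) over variables [g; t] subject to -t <= y - A g <= t,
# an LP equivalent of min ||y - A g||_1.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)],     #  A g - t <=  y
                 [-A, -np.eye(m)]])   # -A g - t <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * m)
f_hat = res.x[:n]
print(np.max(np.abs(f_hat - f)))  # decoding error
```

Here 4 of 40 codeword entries are corrupted (ρ = 0.1), comfortably inside the regime where the abstract's result predicts exact recovery, and the decoder indeed returns f up to solver precision.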
Error Correction via Linear Programming
, 2005
Abstract

Cited by 107 (7 self)
Suppose we wish to transmit a vector f ∈ R^n reliably. A frequently discussed approach consists in encoding f with an m by n coding matrix A. Assume now that a fraction of the entries of Af are corrupted in a completely arbitrary fashion. We do not know which entries are affected nor do we know how they are affected. Is it possible to recover f exactly from the corrupted m-dimensional vector y? This paper proves that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem min_{g ∈ R^n} ‖y − Ag‖_ℓ1 (where ‖x‖_ℓ1 := Σ_i |x_i|), provided that the fraction of corrupted entries is not too large, i.e. does not exceed some strictly positive constant ρ* (numerical values for ρ* are given). In other words, f can be recovered exactly by solving a simple convex optimization problem; in fact, a linear program. We report on numerical experiments suggesting that ℓ1-minimization is amazingly effective; f is recovered exactly even in situations where a very significant fraction of the output is corrupted.
Concentration of the Spectral Measure for Large Matrices
 ELECTRONIC COMMUNICATIONS IN PROBABILITY
, 2000
Abstract

Cited by 101 (13 self)
We derive concentration inequalities for functions of the empirical measure of eigenvalues for large, random, self-adjoint matrices with not necessarily Gaussian entries. The results presented apply in particular to non-Gaussian Wigner and Wishart matrices. We also provide concentration bounds for non-commutative functionals of random matrices.
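The phenomenon this abstract concerns is easy to observe numerically. As a quick sketch (my own arbitrary choices of matrix size, test function, and non-Gaussian entry distribution, not the paper's setup): a linear statistic of the empirical spectral measure of a Rademacher Wigner matrix, i.e. the average of a Lipschitz function over the eigenvalues, fluctuates far less across draws than a naive 1/√n central-limit scaling would suggest.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 200, 20

def linear_statistic(n):
    # Symmetric matrix with +/-1 (Rademacher, hence non-Gaussian) entries,
    # normalized by sqrt(n) as usual for Wigner matrices.
    M = rng.choice([-1.0, 1.0], size=(n, n))
    W = np.triu(M) + np.triu(M, 1).T
    eigs = np.linalg.eigvalsh(W / np.sqrt(n))
    return np.mean(np.tanh(eigs))   # integral of a Lipschitz test function
                                    # against the empirical spectral measure

vals = np.array([linear_statistic(n) for _ in range(trials)])
print(vals.std())   # fluctuations are tiny relative to a 1/sqrt(n) scale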
Smallest singular value of random matrices and geometry of random polytopes
 Adv. Math
, 2005
Non-asymptotic theory of random matrices: extreme singular values
 PROCEEDINGS OF THE INTERNATIONAL CONGRESS OF MATHEMATICIANS
, 2010
Random matrices: The distribution of the smallest singular values
, 2009
Abstract

Cited by 47 (8 self)
Let ξ be a real-valued random variable of mean zero and variance 1. Let Mn(ξ) denote the n × n random matrix whose entries are iid copies of ξ and σn(Mn(ξ)) denote the least singular value of Mn(ξ). The quantity σn(Mn(ξ))² is thus the least eigenvalue of the Wishart matrix Mn Mn*. We show that (under a finite moment assumption) the probability distribution of n·σn(Mn(ξ))² is universal in the sense that it does not depend on the distribution of ξ. In particular, it converges to the same limiting distribution as in the special case when ξ is real Gaussian. (The limiting distribution was computed explicitly in this case by Edelman.) We also prove a similar result for complex-valued random variables of mean zero, with real and imaginary parts having variance 1/2 and covariance zero. Similar results are also obtained for the joint distribution of the bottom k singular values of Mn(ξ) for any fixed k (or even for k growing as a small power of n) and for rectangular matrices. Our approach is motivated by the general idea of "property testing" from combinatorics and theoretical computer science. This seems to be a new approach in the study of spectra of random matrices and combines tools from various areas of mathematics.
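The Wishart connection stated in the abstract, that σn(Mn)² is the least eigenvalue of Mn Mn*, is easy to verify numerically; a minimal sketch (the dimension and the Gaussian choice of ξ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
M = rng.standard_normal((n, n))   # iid entries; Gaussian here, but universality
                                  # says any mean-0, variance-1 xi gives the same
                                  # limiting law for n * sigma_n^2

sigma_min = np.linalg.svd(M, compute_uv=False).min()   # least singular value sigma_n(M)
lam_min = np.linalg.eigvalsh(M @ M.T).min()            # least eigenvalue of the Wishart matrix M M*

# sigma_n(M)^2 equals lam_min; n * sigma_n^2 is the quantity whose
# distribution the abstract shows to be universal.
print(n * sigma_min**2)
```

Repeating this over many draws of M (and over different entry distributions) and histogramming n·σn² is the natural way to see the universal limiting distribution emerge.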
Random covariance matrices: Universality of local statistics of eigenvalues
Large deviation upper bounds and central limit theorems for band matrices and non-commutative functionals of Gaussian large random matrices
, 2002
Abstract

Cited by 36 (7 self)
We obtain large deviation upper bounds and central limit theorems for non-commutative functionals of large Gaussian band matrices and deterministic diagonal matrices with converging spectral measure. As a consequence, we derive results of this type for the spectral measure of Gaussian band matrices and Gaussian sample covariance matrices. AMS classification: 60F10; 15A52; 60F05