
## For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution (2004)

Venue: Comm. Pure Appl. Math

Citations: 567 (10 self)

### Citations

2718 | Atomic decomposition by basis pursuit
- Chen, Donoho, et al.
- 1999
Citation Context: ...al orthonormal bases by Coifman and collaborators [4, 5], combinations of several frames in Mallat and Zhang's work on Matching Pursuit [19], and the work of Chen, Donoho, and Saunders in the mid-1990s [3]. A theoretical perspective showing that there is a sound mathematical basis for overcomplete representation has come together rapidly in recent years; see [7, 8, 12, 14, 16, 26, 27]. An early resul...
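The ℓ1 principle behind basis pursuit [3] can be sketched as a linear program: minimize ‖x‖₁ subject to Φx = y by splitting x = u − v with u, v ≥ 0. A minimal illustration only; the sizes, seed, and solver choice below are ours, not from the cited paper.

```python
# Minimal basis-pursuit sketch: solve min ||x||_1 subject to Phi x = y
# as a linear program via the standard split x = u - v, u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 30, 60, 3                       # n equations, m unknowns, k nonzeros
Phi = rng.standard_normal((n, m)) / np.sqrt(n)
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x0

c = np.ones(2 * m)                        # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([Phi, -Phi])             # constraint: Phi (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * m))
x_hat = res.x[:m] - res.x[m:]
print(np.abs(x_hat).sum(), np.abs(x0).sum())
```

Since x0 is itself feasible, the ℓ1 norm of the solver's output can never exceed ‖x0‖₁.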

2621 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Candes, Romberg, et al.
- 2006
Citation Context: ...ut n/5 nonzeros in a 2-fold overcomplete representation. Hence, empirically, even a mildly sparse representation could be exactly recovered by ℓ1 optimization. Very recently, Candès, Romberg and Tao [2] showed that for partial Fourier systems, formed by taking n rows at random from an m-by-m standard Fourier matrix, the resulting n-by-m matrix with overwhelming probability allowed exact equivalence ...
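The partial Fourier ensemble described in this context is easy to construct: take n rows at random from the m-by-m (unitary) DFT matrix. The sizes and seed below are illustrative choices of ours.

```python
# Build the n-by-m partial Fourier system of [2] by random row sampling.
import numpy as np

rng = np.random.default_rng(1)
m, n = 64, 16
F = np.fft.fft(np.eye(m)) / np.sqrt(m)    # unitary m-by-m DFT matrix
rows = rng.choice(m, size=n, replace=False)
A = F[rows, :]                            # the n-by-m partial Fourier matrix

# Measuring a sparse x amounts to observing n of its Fourier coefficients.
x = np.zeros(m)
x[[3, 17, 40]] = [1.0, -2.0, 0.5]
y = A @ x
print(A.shape, y.shape)
```

Because the rows come from a unitary matrix, they remain orthonormal: A A* = I_n.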

1668 | Matching pursuits with time-frequency dictionaries
- Mallat, Zhang
- 1993
Citation Context: ...ained in the early 1990s, e.g. combinations of several orthonormal bases by Coifman and collaborators [4, 5] and combinations of several frames in Mallat and Zhang's work on Matching Pursuit [19], and by Chen, Donoho, and Saunders in the mid-1990s [3]. A theoretical perspective showing that there is a sound mathematical basis for overcomplete representation has come together rapidly in recen...
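The Mallat–Zhang algorithm cited here can be sketched as a bare-bones greedy loop: repeatedly select the atom most correlated with the residual and subtract its contribution. This is our own sketch, with a synthetic dictionary and signal, not the authors' code.

```python
# Minimal Matching Pursuit in the spirit of [19].
import numpy as np

def matching_pursuit(Phi, y, n_iter):
    """Phi: n-by-m dictionary with unit-norm columns; y: signal in R^n."""
    coef = np.zeros(Phi.shape[1])
    r = y.copy()
    for _ in range(n_iter):
        corr = Phi.T @ r                  # correlations with all atoms
        j = int(np.argmax(np.abs(corr)))  # best-matching atom
        coef[j] += corr[j]
        r = r - corr[j] * Phi[:, j]       # peel off its contribution
    return coef, r                        # invariant: Phi @ coef + r == y

rng = np.random.default_rng(2)
n, m = 32, 64
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm atoms
y = 2.0 * Phi[:, 5] - 1.0 * Phi[:, 40]    # a 2-atom signal
coef, r = matching_pursuit(Phi, y, n_iter=50)
print(np.linalg.norm(r))
```

Each iteration maintains the decomposition y = Φ·coef + r while strictly shrinking the residual norm.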

911 | Greed is good: Algorithmic results for sparse approximation
- Tropp
Citation Context: ...asurements, store less data, or investigate fewer genes. The search for sparse solutions can transform the problem completely, in many cases making a unique solution possible (Lemma 2.1 below; see also [7, 8, 16, 14, 26, 27]). Unfortunately, this only seems to change the problem from an impossible one to an intractable one! Finding the sparsest solution to a general underdetermined system of equations is NP-hard [21]; m...
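The intractability referred to in this context comes from the combinatorial search over supports: the naive exact method tries every candidate support, smallest first, at a cost growing like C(m, k). A toy sketch under our own naming and data (assumes y ≠ 0); only feasible at these tiny sizes.

```python
# Brute-force search for the sparsest solution of Phi x = y (the NP-hard
# problem [21]), enumerating supports in order of increasing size.
import itertools
import numpy as np

def sparsest_solution(Phi, y, tol=1e-10):
    n, m = Phi.shape
    for k in range(1, m + 1):                      # smallest supports first
        for support in itertools.combinations(range(m), k):
            S = list(support)
            coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
            if np.linalg.norm(Phi[:, S] @ coef - y) < tol:
                x = np.zeros(m)
                x[S] = coef
                return x
    return None

rng = np.random.default_rng(3)
Phi = rng.standard_normal((4, 8))
x0 = np.zeros(8)
x0[[1, 6]] = [1.0, -1.0]
x = sparsest_solution(Phi, Phi @ x0)
print(np.nonzero(x)[0])
```

For a generic Gaussian Φ the 2-sparse generator is the unique sparsest solution, so the search stops at support size 2.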

594 |
The concentration of measure phenomenon,
- Ledoux
- 2001
Citation Context: ...bles, Lipschitz with respect to the standard Euclidean metric, with Lipschitz constant 1/√n. Moreover, max_i R_i is itself such a Lipschitz function. By concentration of measure for Gaussian variables [18], (3.3) follows. The proof of Lemma 3.4 depends on the observation (see Szarek [25], Davidson-Szarek [6] or El Karoui [13]) that the singular values of Gaussian matrices obey concentration of measur...
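The concentration phenomenon invoked here [18] is easy to see empirically: a function of n iid Gaussians that is Lipschitz with constant 1/√n, such as f(g) = ‖g‖₂/√n, has fluctuations that shrink as n grows. Sample counts and seed below are ours.

```python
# Empirical Gaussian concentration of measure for the (1/sqrt(n))-Lipschitz
# function f(g) = ||g||_2 / sqrt(n): its spread collapses as n increases.
import numpy as np

rng = np.random.default_rng(4)
means, stds = {}, {}
for n in (100, 10_000):
    f = np.linalg.norm(rng.standard_normal((1000, n)), axis=1) / np.sqrt(n)
    means[n], stds[n] = f.mean(), f.std()
    print(n, round(means[n], 4), round(stds[n], 5))
```

The mean stays pinned near 1 while the standard deviation drops by roughly the predicted factor of 1/√n ratio, here about 10×.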

556 |
Sparse approximate solutions to linear systems
- Natarajan
- 1995
Citation Context: ...26, 27]). Unfortunately, this only seems to change the problem from an impossible one to an intractable one! Finding the sparsest solution to a general underdetermined system of equations is NP-hard [21]; many classic combinatorial optimization problems can be cast in that form. In this paper we will see that for 'most' underdetermined systems of equations, when a sufficiently sparse solution exists,...

440 |
Eigenvalues and condition numbers of random matrices
- Edelman
- 1989
Citation Context: ...G_I = Φ_I^T Φ_I with smallest eigenvalue λ_min. This eigenvector will be a random uniform point on S^{k−1}, and so ‖δ_I‖₁ = √(2/π)·√|I|·‖δ_I‖₂·(1 + o_p(1)). It generates v_I = Φ_I δ_I with ‖v_I‖₂ = λ_min^{1/2}·‖δ_I‖₂. Letting ρ = |I|/n, we have [11] λ_min = (1 − ρ^{1/2})²·(1 + o_p(1)). Now v_I is a random point on S^{n−1}, independent of φ_i for i ∈ I^c. Considering the program min ‖δ_{I^c}‖₁ subject to Φ_{I^c} δ_{I^c} = −v_I, we see th...
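The random-matrix fact cited from [11] can be checked numerically: for an n-by-k matrix with iid N(0, 1/n) entries and ρ = k/n, the smallest eigenvalue of the Gram matrix Φ_I^T Φ_I sits near (1 − √ρ)². Sizes and seed are our choices.

```python
# Numerical check: smallest Gram eigenvalue approaches (1 - sqrt(rho))^2.
import numpy as np

rng = np.random.default_rng(5)
n, k = 4000, 1000                          # rho = 0.25
Phi_I = rng.standard_normal((n, k)) / np.sqrt(n)
lam_min = np.linalg.eigvalsh(Phi_I.T @ Phi_I)[0]   # eigenvalues ascending
rho = k / n
print(lam_min, (1 - np.sqrt(rho)) ** 2)
```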

438 |
The volume of convex bodies and Banach space geometry.
- Pisier
- 1989
Citation Context: ...we have for n > n₀ that log P(Ω^c_{n,m,ρ₁,λ}) ≤ A·n·H(ρ₁/A)·(1 + o(1)) − βn, which implies an n₁ so that P(Ω^c_{n,m,ρ,λ}) ≤ exp(−β₁n) for n > n₁(ρ, λ). QED. (Section 4: Almost-Spherical Sections.) Dvoretsky's theorem [10, 22] says that every infinite-dimensional Banach space contains very high-dimensional subspaces on which the Banach norm is nearly proportional to the Euclidean norm. This is called the spherical sections...
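The spherical-sections property quoted here can be illustrated in finite dimensions: on a random k-dimensional subspace of R^n, the ratio ‖v‖₁/‖v‖₂ is nearly the same for every v, close to √(2n/π). Dimensions and seed below are our toy choices.

```python
# On a random low-dimensional subspace, the l1/l2 norm ratio is nearly
# constant across the subspace (a finite-dimensional Dvoretsky effect).
import numpy as np

rng = np.random.default_rng(6)
n, k = 2000, 100
B = np.linalg.qr(rng.standard_normal((n, k)))[0]   # orthonormal basis
V = B @ rng.standard_normal((k, 500))              # 500 vectors in the subspace
ratios = np.abs(V).sum(axis=0) / np.linalg.norm(V, axis=0)
print(ratios.min(), ratios.max(), np.sqrt(2 * n / np.pi))
```

All 500 ratios land within a narrow band around √(2n/π) ≈ 35.7, even though over all of R^n the ratio can range from 1 to √n.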

336 |
Sparse representations in unions of bases
- Gribonval, Nielsen
- 2003
Citation Context: ...asurements, store less data, or investigate fewer genes. The search for sparse solutions can transform the problem completely, in many cases making a unique solution possible (Lemma 2.1 below; see also [7, 8, 16, 14, 26, 27]). Unfortunately, this only seems to change the problem from an impossible one to an intractable one! Finding the sparsest solution to a general underdetermined system of equations is NP-hard [21]; m...

331 | On projection algorithms for solving convex feasibility problems.
- Bauschke, Borwein
- 1996
Citation Context: ...to create a dual feasible point y starting from a nearby almost-feasible point y₀. It is an instance of the successive projection method for finding feasible points for systems of linear inequalities [1]. Let I₀ be the collection of indices 1 ≤ i ≤ m with |⟨φ_i, y₀⟩| > 1/2, and then set y₁ = y₀ − P_{I₀} y₀, where P_{I₀} denotes the least-squares projector Φ_{I₀}(Φ_{I₀}^T Φ_{I₀})^{−1} Φ_{I₀}^T. In effect, we identify the ind...
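The single cleanup step quoted here can be sketched directly: collect the indices where |⟨φ_i, y₀⟩| exceeds 1/2, then remove the component of y₀ lying in the span of those columns via the least-squares projector. The random data and scaling below are ours, chosen so that only a handful of indices offend.

```python
# One successive-projection step [1]: project y0 onto the orthogonal
# complement of the columns where dual feasibility is nearly violated.
import numpy as np

rng = np.random.default_rng(7)
n, m = 50, 100
Phi = rng.standard_normal((n, m)) / np.sqrt(n)
y0 = 0.3 * rng.standard_normal(n)               # a nearby almost-feasible point

I0 = np.where(np.abs(Phi.T @ y0) > 0.5)[0]      # offending indices
Phi_I0 = Phi[:, I0]
P = Phi_I0 @ np.linalg.solve(Phi_I0.T @ Phi_I0, Phi_I0.T)  # LS projector
y1 = y0 - P @ y0                                # now <phi_i, y1> = 0 on I0
print(len(I0), np.abs(Phi_I0.T @ y1).max())
```

After the projection, the inner products on I₀ vanish exactly; iterating this step over the new offending set is the successive projection method.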

252 | On sparse representations in arbitrary redundant bases
- Fuchs
Citation Context: ...asurements, store less data, or investigate fewer genes. The search for sparse solutions can transform the problem completely, in many cases making a unique solution possible (Lemma 2.1 below; see also [7, 8, 16, 14, 26, 27]). Unfortunately, this only seems to change the problem from an impossible one to an intractable one! Finding the sparsest solution to a general underdetermined system of equations is NP-hard [21]; m...

215 |
Asymptotic theory of finite-dimensional normed spaces. With an appendix by M
- Milman, Schechtman
- 1986
Citation Context: ...I for |I| < ρn affords a spherical section of the ℓ¹_n ball. The basic argument we use derives from refinements of Dvoretsky's theorem in Banach space theory, going back to work of Milman and others [15, 24, 20]. Definition 4.1: Let |I| = k. We say that Φ_I offers an ɛ-isometry between ℓ²(I) and ℓ¹_n if (1 − ɛ)·‖α‖₂ ≤ √(π/(2n))·‖Φ_I α‖₁ ≤ (1 + ɛ)·‖α‖₂ for all α ∈ R^k. (4.1) Remarks: 1. The scale factor √(π/(2n)) embed...
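The √(π/(2n)) scale factor in (4.1) can be sanity-checked by simulation: when Φ_I has iid N(0, 1/n) entries, each entry of Φ_I α is N(0, ‖α‖₂²/n), so E √(π/(2n))·‖Φ_I α‖₁ = ‖α‖₂. Dimensions, trial count, and seed are ours.

```python
# Check that sqrt(pi/(2n)) * ||Phi_I alpha||_1 has mean ||alpha||_2.
import numpy as np

rng = np.random.default_rng(8)
n, k, trials = 500, 20, 400
alpha = rng.standard_normal(k)
Phi = rng.standard_normal((trials, n, k)) / np.sqrt(n)   # 400 draws of Phi_I
vals = np.sqrt(np.pi / (2 * n)) * np.abs(Phi @ alpha).sum(axis=1)
print(vals.mean(), np.linalg.norm(alpha))
```

The scale factor uses E|N(0, σ²)| = σ·√(2/π): summing n such half-normal means gives √(2n/π)·‖α‖₂, which the prefactor cancels.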

208 |
Local operator theory, random matrices and Banach spaces.
- Davidson, Szarek
- 2001
Citation Context: ...x_i R_i itself is such a Lipschitz function. By concentration of measure for Gaussian variables [18], (3.3) follows. The proof of Lemma 3.4 depends on the observation (see Szarek [25], Davidson-Szarek [6] or El Karoui [13]) that the singular values of Gaussian matrices obey concentration of measure: Lemma 3.5: Let X be an n-by-k matrix of iid N(0, 1/n) Gaussians, k < n. Let s_ℓ(X) denote the ℓ-th larg...

205 | Empirical Processes: Theory and Applications - Pollard - 1990 |

133 |
Entropy based algorithms for best basis selection.
- Coifman, Wickerhauser
- 1992
Citation Context: ...or this viewpoint were first obtained empirically, where representations of signals were obtained in the early 1990s, e.g. combinations of several orthonormal bases by Coifman and collaborators [4, 5] and combinations of several frames in Mallat and Zhang's work on Matching Pursuit [19], and by Chen, Donoho, and Saunders in the mid-1990s [3]. A theoretical perspective showing that there is a s...

103 | Just relax : Convex programming methods for subset selection and sparse approximation,”
- Tropp
- 2006
Citation Context: ...e stepwise regression; the same procedure is called Orthogonal Matching Pursuit in signal analysis and greedy approximation in the approximation theory literature. For further discussion, see [9, 26, 27]. Under sufficiently strong conditions, both methods can work. Theorem 3 (Tropp [26]): Suppose that the dictionary Φ has coherence M = max_{i≠j} |⟨φ_i, φ_j⟩|. Suppose that α₀ has k ≤ M^{−1}/2 nonzeros, and ...
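Orthogonal Matching Pursuit, as cited in Theorem 3, differs from plain matching pursuit by re-fitting the coefficients on the current support by least squares at every step; the coherence M = max_{i≠j} |⟨φ_i, φ_j⟩| is the quantity in Tropp's condition k ≤ M⁻¹/2. The sketch and synthetic data below are ours.

```python
# Orthogonal Matching Pursuit plus the coherence M of the dictionary.
import numpy as np

def omp(Phi, y, k):
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))  # greedy pick
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef        # residual after LS re-fit
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(9)
n, m = 200, 400
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm atoms
G = np.abs(Phi.T @ Phi)
np.fill_diagonal(G, 0)
M = G.max()                                   # dictionary coherence
x0 = np.zeros(m)
x0[[7, 70]] = [3.0, -2.0]
x_hat = omp(Phi, Phi @ x0, k=2)
print(round(M, 3), np.allclose(x_hat, x0, atol=1e-8))
```

The least-squares re-fit makes the residual orthogonal to every selected atom, so an atom is never selected twice.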

26 | Optimally sparse representation from overcomplete dictionaries via ℓ1-norm minimization - Donoho, Elad - 2003 |

13 |
Random embeddings of Euclidean spaces in sequence spaces
- Schechtman
- 1981
Citation Context: ...I for |I| < ρn affords a spherical section of the ℓ¹_n ball. The basic argument we use derives from refinements of Dvoretsky's theorem in Banach space theory, going back to work of Milman and others [15, 24, 20]. Definition 4.1: Let |I| = k. We say that Φ_I offers an ɛ-isometry between ℓ²(I) and ℓ¹_n if (1 − ɛ)·‖α‖₂ ≤ √(π/(2n))·‖Φ_I α‖₁ ≤ (1 + ɛ)·‖α‖₂ for all α ∈ R^k. (4.1) Remarks: 1. The scale factor √(π/(2n)) embed...

8 |
Spaces with large distance to ℓ_∞^n and random matrices
- Szarek
- 1990
Citation Context: ...nt 1/√n. Moreover, max_i R_i itself is such a Lipschitz function. By concentration of measure for Gaussian variables [18], (3.3) follows. The proof of Lemma 3.4 depends on the observation (see Szarek [25], Davidson-Szarek [6] or El Karoui [13]) that the singular values of Gaussian matrices obey concentration of measure: Lemma 3.5: Let X be an n-by-k matrix of iid N(0, 1/n) Gaussians, k < n. Let s_ℓ(X)...

7 |
Uncertainty Principles and Ideal Atomic Decomposition
- Donoho, Huo
- 2001

7 |
Some results on convex bodies and Banach spaces
- Dvoretsky
- 1961
Citation Context: ...we have for n > n₀ that log P(Ω^c_{n,m,ρ₁,λ}) ≤ A·n·H(ρ₁/A)·(1 + o(1)) − βn, which implies an n₁ so that P(Ω^c_{n,m,ρ,λ}) ≤ exp(−β₁n) for n > n₁(ρ, λ). QED. (Section 4: Almost-Spherical Sections.) Dvoretsky's theorem [10, 22] says that every infinite-dimensional Banach space contains very high-dimensional subspaces on which the Banach norm is nearly proportional to the Euclidean norm. This is called the spherical sections...

6 |
A generalized uncertainty principle and sparse representations in pairs of bases
- Elad, Bruckstein
- 2002
Citation Context: ..., Donoho, and Saunders in the mid-1990s [3]. A theoretical perspective showing that there is a sound mathematical basis for overcomplete representation has come together rapidly in recent years; see [7, 8, 12, 14, 16, 26, 27]. An early result was the following: suppose that Φ is the concatenation of two orthobases, so that m = 2n. Suppose that the coherence (the maximal inner product between any pair of columns of Φ) ...
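The two-orthobasis setting of the early result quoted here is concrete: for the concatenation of the identity basis ("spikes") and the unitary DFT ("sinusoids"), every cross inner product has magnitude exactly 1/√n, the smallest possible coherence for a pair of orthobases. The dimension below is an arbitrary choice of ours.

```python
# Coherence of the spikes-and-sinusoids dictionary [I, F]: exactly 1/sqrt(n).
import numpy as np

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)       # unitary DFT basis
Phi = np.hstack([np.eye(n), F])              # n-by-2n concatenation, m = 2n
coherence = np.abs(np.eye(n).conj().T @ F).max()
print(coherence, 1 / np.sqrt(n))
```

Each DFT entry is a unit-modulus phase divided by √n, so all n² cross inner products share the same magnitude.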

3 |
The dimension of almost-spherical sections of convex bodies
- Figiel, Lindenstrauss, et al.
- 1977
Citation Context: ...I for |I| < ρn affords a spherical section of the ℓ¹_n ball. The basic argument we use derives from refinements of Dvoretsky's theorem in Banach space theory, going back to work of Milman and others [15, 24, 20]. Definition 4.1: Let |I| = k. We say that Φ_I offers an ɛ-isometry between ℓ²(I) and ℓ¹_n if (1 − ɛ)·‖α‖₂ ≤ √(π/(2n))·‖Φ_I α‖₁ ≤ (1 + ɛ)·‖α‖₂ for all α ∈ R^k. (4.1) Remarks: 1. The scale factor √(π/(2n)) embed...

2 |
New Results About Random Covariance Matrices and Statistical Applications
- El Karoui
- 2004
Citation Context: ...uch a Lipschitz function. By concentration of measure for Gaussian variables [18], (3.3) follows. The proof of Lemma 3.4 depends on the observation (see Szarek [25], Davidson-Szarek [6] or El Karoui [13]) that the singular values of Gaussian matrices obey concentration of measure: Lemma 3.5: Let X be an n-by-k matrix of iid N(0, 1/n) Gaussians, k < n. Let s_ℓ(X) denote the ℓ-th largest singular value...

1 |
Embedding ℓ_p^m into ℓ_1^n
- Johnson, Schechtman
- 1982
Citation Context: ...neous Isometry. Our approach is based on a result for individual I, which will later be extended to get a result for every I. This individual result is well known in Banach space theory, going back to [24, 17, 15]. For our proof, we repackage key elements from the proof of Theorem 4.4 in Pisier's book [22]. Pisier's argument shows that for one specific I, there is a positive probability that Φ_I offers an ɛ-iso...