Results 1–10 of 13
For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution
 Comm. Pure Appl. Math
, 2004
Abstract

Cited by 568 (10 self)
We consider linear equations y = Φα where y is a given vector in R^n, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ R^m. We suppose that the columns of Φ are normalized to unit ℓ_2 norm and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that for large n, and for all Φ's except a negligible fraction, the following property holds: for every y having a representation y = Φα_0 by a coefficient vector α_0 ∈ R^m with fewer than ρ · n nonzeros, the solution α_1 of the ℓ_1 minimization problem min ‖α‖_1 subject to Φα = y is unique and equal to α_0. In contrast, heuristic attempts to sparsely solve such systems (greedy algorithms and thresholding) perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues of random Wishart matrices.
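The recovery phenomenon stated in this abstract can be tried numerically: min ‖α‖_1 subject to Φα = y is a linear program under the standard splitting α = u − v with u, v ≥ 0. The sketch below (not from the paper; the dimensions n = 40, m = 100 and sparsity k = 5 are arbitrary illustrative choices) uses SciPy's LP solver on a random Φ with unit-norm columns, as in the abstract.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 40, 100, 5  # underdetermined: n < m; k-sparse ground truth

# Random matrix with columns normalized to unit l2 norm.
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)

# Sparse coefficient vector alpha0 with k nonzeros, and its image y.
alpha0 = np.zeros(m)
alpha0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ alpha0

# Basis pursuit as an LP: write alpha = u - v with u, v >= 0,
# then minimize sum(u + v) subject to Phi (u - v) = y.
c = np.ones(2 * m)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
alpha1 = res.x[:m] - res.x[m:]

print(np.max(np.abs(alpha1 - alpha0)))  # recovery error, tiny w.h.p.
```

With this few nonzeros relative to n, the ℓ_1 minimizer coincides with the sparse generator up to solver tolerance, matching the theorem's regime.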
Stable Distributions, Pseudorandom Generators, Embeddings and Data Stream Computation
, 2000
Abstract

Cited by 324 (13 self)
In this paper we show several results obtained by combining the use of stable distributions with pseudorandom generators for bounded space. In particular:
- we show how to maintain (using only O(log n/ε^2) words of storage) a sketch C(p) of a point p ∈ ℓ_1^n under dynamic updates of its coordinates, such that given sketches C(p) and C(q) one can estimate |p − q|_1 up to a factor of (1 + ε) with large probability. This solves the main open problem of [10].
- we obtain another sketch function C′ which maps ℓ_1^n into a normed space ℓ_1^m (as opposed to C), such that m = m(n) is much smaller than n; to our knowledge this is the first dimensionality reduction lemma for the ℓ_1 norm.
- we give an explicit embedding of ℓ_2^n into ℓ_1^{n^{O(log n)}} with distortion (1 + 1/n^{Θ(1)}) and a nonconstructive embedding of ℓ_2^n into ℓ_1^{O(n)} with distortion (1 + ε) such that the embedding can be represented using only O(n log^2 n) bits (as opposed to at least...
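The first bullet rests on 1-stability: if the X_i are standard Cauchy, then Σ a_i X_i is distributed as |a|_1 · X. A minimal sketch of that idea (not the paper's derandomized, bounded-space construction; the dimension n = 1000 and sketch size d = 1000 are illustrative) estimates |p − q|_1 from the median of absolute sketch coordinates:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 1000  # point dimension, number of sketch coordinates

# Cauchy (1-stable) projection: each sketch coordinate of p - q is
# distributed as |p - q|_1 times a standard Cauchy variable.
A = rng.standard_cauchy((d, n))

def sketch(p):
    return A @ p

p = rng.standard_normal(n)
q = rng.standard_normal(n)

# The median of |Cauchy| is 1, so the median absolute coordinate of
# the sketch difference estimates |p - q|_1.
est = np.median(np.abs(sketch(p) - sketch(q)))
true = np.sum(np.abs(p - q))
print(est / true)  # close to 1
```

Note this toy version stores the full random matrix A; the point of the paper is that a pseudorandom generator for bounded space lets one avoid exactly that storage cost.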
Non-asymptotic theory of random matrices: extreme singular values
 PROCEEDINGS OF THE INTERNATIONAL CONGRESS OF MATHEMATICIANS
, 2010
Uncertainty principles and vector quantization
Abstract

Cited by 20 (0 self)
An abstract form of the Uncertainty Principle set forth by Candes and Tao has found remarkable applications in sparse approximation theory. This paper demonstrates a new connection between the Uncertainty Principle and vector quantization theory. We show that for frames in C^n that satisfy the Uncertainty Principle, one can quickly convert every frame representation into a more regular Kashin representation, whose coefficients all have the same magnitude O(1/√n). Information tends to spread evenly among these coefficients. As a consequence, Kashin representations have great power for reduction of errors in their coefficients. In particular, scalar quantization of Kashin representations yields robust vector quantizers in C^n.
Random Spaces Generated by Vertices of the Cube
Abstract

Cited by 6 (1 self)
Let E be the discrete cube in R^n. For every N ≥ n we consider the class of convex bodies K_N = co{x_1, ..., x_N} which are generated by N random points x_1, ..., x_N chosen independently and uniformly from E. We show that if n ≥ n_0 and N ≥ n(log n) then, for a random K_N, the inradius, the volume radius, the mean width and the size of the maximal inscribed cube can be determined up to an absolute constant as functions of n and N. This geometric description of K_N leads to sharp estimates for several asymptotic parameters of the corresponding n-dimensional normed space X_N.
Geometric applications of Chernoff-type estimates and a ZigZag approximation for balls
Abstract

Cited by 5 (2 self)
In this paper we show that the Euclidean ball of radius 1 in R^n can be approximated up to ε > 0, in the Hausdorff distance, by a set defined by N = C(ε)n linear inequalities. We call this set a ZigZag set, and it is defined to be all points in space satisfying 50% or more of the inequalities. The constant we get is C(ε) = C ln(1/ε)/ε^2, where C is some universal constant. This should be compared with the result of Barron and Cheang (2000), who obtained N = Cn^2/ε^2. The main ingredient in our proof is the use of Chernoff's inequality in a geometric context. After proving the theorem, we describe several other results which can be obtained using similar methods. The aim of this paper is to demonstrate how the well-known Chernoff estimates from probability theory can be used in a geometric context for a very broad spectrum of problems, and how they lead to new and improved results. We will briefly describe Chernoff bounds, and then present the motivation for, and the proof of, the following theorem:
Special orthogonal splittings of L_1^{2k}
 Israel J. Math
Abstract

Cited by 1 (0 self)
We show that for each positive integer k there is a k × k matrix B with ±1 entries such that, putting E to be the span of the rows of the k × 2k matrix [√k I_k, B], then E, E⊥ is a Kashin splitting: the L_1^{2k} and L_2^{2k} norms are universally equivalent on both E and E⊥. Moreover, the probability that a random ±1 matrix satisfies the above is exponentially close to 1.
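The norm equivalence in this abstract can be probed empirically. For any v ∈ R^{2k}, Cauchy-Schwarz gives ‖v‖_1 ≤ √(2k)·‖v‖_2; a Kashin splitting says the reverse inequality holds, up to a universal constant, on E. The sketch below (a random spot check, not a proof; k = 64 and the sample count are illustrative) samples random vectors of E for a random ±1 matrix B and checks that the ratio stays bounded away from 0:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 64

# Random +-1 matrix B; E is the row span of the k x 2k matrix [sqrt(k) I_k, B].
B = rng.choice([-1.0, 1.0], size=(k, k))
M = np.hstack([np.sqrt(k) * np.eye(k), B])

# Ratio ||v||_1 / (sqrt(2k) ||v||_2) is always <= 1 by Cauchy-Schwarz;
# on a Kashin subspace it should also stay above a universal constant.
ratios = []
for _ in range(2000):
    c = rng.standard_normal(k)
    v = c @ M  # a random vector in E
    ratios.append(np.linalg.norm(v, 1) / (np.sqrt(2 * k) * np.linalg.norm(v, 2)))

print(min(ratios))  # bounded away from 0 on random samples
```

A finite sample of directions cannot certify the splitting (that is what the paper proves), but it illustrates the mass-spreading behavior: vectors of E have their ℓ_1 norm comparable to the maximal possible √(2k)·‖v‖_2.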
Logarithmic reduction of the level of randomness in some probabilistic geometric constructions
, 2005
Abstract
S. Artstein-Avidan, V.D. Milman