Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
2004
Cited by 1513 (20 self)
Abstract
Suppose we are given a vector f in ℝ^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ε? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power law (or if the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|_(1) ≥ |f|_(2) ≥ … ≥ |f|_(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|_(n) ≤ C · n^(−1/p). We take measurements ⟨f, X_k⟩, k = 1, …, K, where the X_k are N-dimensional Gaussian
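As a toy illustration of the recovery setting this abstract describes, the sketch below takes K random Gaussian measurements of a sparse vector f and reconstructs it by ℓ1-minimization (basis pursuit), solved as a linear program. The dimensions, sparsity, and the use of `scipy.optimize.linprog` are our own illustrative choices, not the paper's construction.

```python
# Sketch: recover a sparse f in R^N from K << N random Gaussian measurements
# <f, X_k> via basis pursuit:  min ||x||_1  subject to  X x = y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

N, K, s = 50, 25, 3            # ambient dimension, measurements, sparsity
f = np.zeros(N)
f[rng.choice(N, s, replace=False)] = rng.standard_normal(s)

X = rng.standard_normal((K, N))  # rows are the Gaussian measurement vectors X_k
y = X @ f                        # the K linear measurements <f, X_k>

# LP formulation of basis pursuit: split x = u - v with u, v >= 0,
# so ||x||_1 becomes sum(u) + sum(v) and X x = y becomes [X, -X][u; v] = y.
c = np.ones(2 * N)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]

print(np.linalg.norm(x_hat - f))  # typically near zero when s << K < N
```

With s much smaller than K, exact recovery holds with overwhelming probability for Gaussian measurements, which is the phenomenon the paper quantifies for compressible (power-law) signals.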
Nonlinear Methods of Approximation
2002
Cited by 71 (9 self)
Abstract
Our main interest in this paper is nonlinear approximation. The basic idea behind nonlinear approximation is that the elements used in the approximation do not come from a fixed linear space but are allowed to depend on the function being approximated. While the scope of this paper is mostly theoretical, we should note that this form of approximation appears in many numerical applications such as adaptive PDE solvers, compression of images and signals, statistical classification, and so on. The standard problem in this regard is the problem of m-term approximation where one fixes a basis and looks to approximate a target function by a linear combination of m terms of the basis. When the basis is a wavelet basis or a basis of other waveforms, then this type of approximation is the starting point for compression algorithms. We are interested in the quantitative aspects of this type of approximation. Namely, we want to understand the properties (usually smoothness) of the function which govern its rate of approximation in some given norm (or metric). We are also interested in stable algorithms for finding good or near-best approximations using m terms. Some of our earlier work has introduced and analyzed such algorithms. More recently, there has emerged another more complicated form of nonlinear approximation which we call highly nonlinear approximation. It takes many forms but has the basic ingredient that a basis is replaced by a larger system of functions that is usually redundant. Some types of approximation that fall into this general category are mathematical frames, adaptive pursuit (or greedy algorithms), and adaptive basis selection. Redundancy on the one hand offers much promise for greater efficiency in terms of approximation rate, but on the other hand gives rise to h...
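A minimal concrete instance of m-term approximation in a fixed orthonormal basis: keep the m largest-magnitude coefficients and discard the rest. By Parseval, the ℓ2 error then equals the energy of the discarded tail. The choice of basis below (an orthonormal DCT rather than a wavelet basis) and all parameters are ours, for brevity.

```python
# m-term approximation in a fixed orthonormal basis (DCT-II, norm='ortho'):
# keep the m largest coefficients, zero the rest, transform back.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
n, m = 256, 16

t = np.linspace(0, 1, n)
signal = np.exp(-5 * t) + 0.1 * rng.standard_normal(n)

coeffs = dct(signal, norm='ortho')        # orthonormal transform
keep = np.argsort(np.abs(coeffs))[-m:]    # indices of the m largest terms
approx_coeffs = np.zeros(n)
approx_coeffs[keep] = coeffs[keep]
approx = idct(approx_coeffs, norm='ortho')

# By Parseval, the l2 error equals the l2 norm of the discarded coefficients.
err = np.linalg.norm(signal - approx)
tail = np.linalg.norm(np.delete(coeffs, keep))
print(err, tail)
```

The approximation is nonlinear in exactly the sense of the abstract: which m basis elements are used depends on the target function.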
Metric entropy of convex hulls in Banach spaces
J. London Math. Soc.
2000
Cited by 30 (1 self)
Abstract
The paper presents diverse methods for estimating the covering number of a precompact subset of a Banach space when the entropy of the set of its extremal points is already known. In the case of a Hilbert space, the Gelfand diameters of the subset are also estimated. The Krein–Milman theorem is a powerful tool in analysis. The aim of this paper is to quantify this theorem in terms of entropy numbers. More precisely, if we have information about the entropy of a precompact set in a Banach or a Hilbert space, then what can be said about the entropy of its convex hull? We give sharp estimates
Entropy and approximation numbers of embeddings of function spaces with Muckenhoupt weights I
Rev. Mat. Complut.
Cited by 22 (4 self)
Abstract
We study compact embeddings for weighted spaces of Besov and Triebel-Lizorkin type where the weight belongs to some Muckenhoupt Ap class. For weights of purely polynomial growth, both near some singular point and at infinity, we obtain sharp asymptotic estimates for the entropy numbers and approximation numbers of this embedding. The main tool is a discretization in terms of wavelet bases. Key words: wavelet bases, Muckenhoupt weighted function spaces, compact embeddings, entropy numbers, approximation numbers.
Entropy numbers in weighted function spaces and eigenvalue distributions of some degenerate pseudodifferential operators II
, 1994
"... This paper is the continuation of [17]. We investigate mapping and spectral properties of pseudodifferential operators of type \Psi 1;fl with 2 IR and 0 fl 1 in the weighted function spaces B s p;q (IR n ; w(x)) and F s p;q (IR n ; w(x)) treated in [17]. Furthermore we study the distribu ..."
Abstract

Cited by 18 (7 self)
 Add to MetaCart
This paper is the continuation of [17]. We investigate mapping and spectral properties of pseudodifferential operators of type Ψ^κ_{1,γ} with κ ∈ ℝ and 0 ≤ γ ≤ 1 in the weighted function spaces B^s_{p,q}(ℝ^n, w(x)) and F^s_{p,q}(ℝ^n, w(x)) treated in [17]. Furthermore we study the distribution of eigenvalues and the behaviour of corresponding root spaces for degenerate pseudodifferential operators, preferably of type b₂(x) b(x, D) b₁(x), where b₁(x) and b₂(x) are appropriate functions and b(x, D) ∈ Ψ^κ_{1,γ}. Finally, on the basis of the Birman-Schwinger principle, we deal with the "negative spectrum" (bound states) of related symmetric operators in L₂.
The Gelfand widths of ℓp-balls for 0 < p ≤ 1
 J. Complexity
J. Complexity
Cited by 18 (9 self)
Abstract
We provide sharp lower and upper bounds for the Gelfand widths of ℓp-balls in the N-dimensional space ℓ_q^N for 0 < p ≤ 1 and p < q ≤ 2. Such estimates are highly relevant to the novel theory of compressive sensing, and our proofs rely on methods from this area.
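For orientation, the sharp two-sided bound in this line of work has, up to constants depending on p and q, the following shape (stated here from memory, so the exact placement of the logarithm should be checked against the paper):

```latex
c_m\bigl(B_p^N, \ell_q^N\bigr) \;\asymp\;
\min\Bigl\{1,\; \frac{\ln(eN/m)}{m}\Bigr\}^{1/p - 1/q},
\qquad 0 < p \le 1,\quad p < q \le 2,
```

where c_m denotes the m-th Gelfand width and B_p^N the unit ball of ℓ_p^N.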
Small ball probabilities of fractional Brownian sheets via fractional integration operators
 J. Theor. Probab
J. Theor. Probab.
Cited by 17 (4 self)
Abstract
We investigate the small ball problem for d-dimensional fractional Brownian sheets by functional analytic methods. For this reason we show that integration operators of Riemann–Liouville and Weyl type are very close in the sense of their approximation properties, i.e., the Kolmogorov and entropy numbers of their difference tend to zero exponentially. This allows us to carry over properties of the Weyl operator to the Riemann–Liouville one, leading to sharp small ball estimates for some fractional Brownian sheets. In particular, we extend Talagrand's estimate for the 2-dimensional Brownian sheet to the fractional case. When passing from dimension 1 to dimension d ≥ 2, we use a quite general estimate for the Kolmogorov numbers of the tensor products of linear operators.
Mathematical methods for supervised learning
 Found. Comput. Math
Found. Comput. Math.
2004
Cited by 17 (4 self)
Abstract
In honor of Steve Smale's 75th birthday with the warmest regards of the authors. Let ρ be an unknown Borel measure defined on the space Z := X × Y with X ⊂ ℝ^d and Y = [−M, M]. Given a set z of m samples z_i = (x_i, y_i) drawn according to ρ, the problem of estimating a regression function f_ρ using these samples is considered. The main focus is to understand the rate of approximation, measured either in expectation or in probability, that can be obtained under a given prior f_ρ ∈ Θ, i.e. under the assumption that f_ρ is in the set Θ, and what are possible algorithms for obtaining optimal or semi-optimal (up to logarithms) results. The optimal rate of decay in terms of m is established for many priors given either in terms of smoothness of f_ρ or its rate of approximation measured in one of several ways. This optimal rate is determined by two types of results. Upper bounds are established using various tools in approximation such as entropy, widths, and linear and nonlinear approximation. Lower bounds are proved using Kullback-Leibler information together with Fano inequalities and a certain type of entropy. A distinction is drawn between algorithms which employ knowledge of the prior in the construction of the estimator and those that do not. Algorithms of the second type which are universally optimal for a certain range of priors are given.
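To make the sampling setup concrete, here is a minimal sketch of one standard estimator in this framework: a piecewise-constant (regressogram) fit obtained by averaging the responses y_i within each bin of X. The regression function, noise level, and bin count are our own illustrative choices, not from the paper.

```python
# Toy regression estimation: samples z_i = (x_i, y_i) drawn from an unknown
# measure rho on X x Y, estimated by a piecewise-constant (regressogram)
# least-squares fit on k equal bins of X = [0, 1].
import numpy as np

rng = np.random.default_rng(2)
m, k = 5000, 50

x = rng.uniform(0, 1, m)
f_rho = lambda u: np.sin(2 * np.pi * u)      # the (here known) regression function
y = f_rho(x) + 0.1 * rng.standard_normal(m)  # bounded-mean-noise observations

# Regressogram: in each bin, the least-squares constant is the mean of the y_i
# whose x_i fall in that bin.
bins = np.minimum((x * k).astype(int), k - 1)
est = np.array([y[bins == j].mean() if np.any(bins == j) else 0.0
                for j in range(k)])

# Empirical L2 error of the estimator against f_rho on a fine grid.
grid = np.linspace(0, 1, 2000, endpoint=False)
gbins = np.minimum((grid * k).astype(int), k - 1)
err = np.sqrt(np.mean((est[gbins] - f_rho(grid)) ** 2))
print(err)   # small for this smooth f_rho and m >> k
```

The bias-variance trade-off visible here (bin width versus samples per bin) is a simple instance of the rate-of-approximation questions the abstract studies under smoothness priors.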