Results 1–10 of 47
De-Noising by Soft-Thresholding
, 1992
"... Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0; 1] from noisy data di = f(ti)+ zi, iid i =0;:::;n 1, ti = i=n, zi N(0; 1). The reconstruction fn ^ is de ned in the wavelet domain by translating all the empirical wavelet coe cients of d towards 0 by an a ..."
Abstract

Cited by 1249 (14 self)
Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0, 1] from noisy data d_i = f(t_i) + z_i, i = 0, …, n − 1, t_i = i/n, z_i iid N(0, 1). The reconstruction f̂_n is defined in the wavelet domain by translating all the empirical wavelet coefficients of d towards 0 by an amount √(2 log n)/√n. We prove two results about that estimator. [Smooth]: With high probability f̂_n is at least as smooth as f, in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.
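The shrinkage rule described in this abstract can be sketched in a few lines of NumPy. This is an illustrative toy only, not the authors' implementation: the Haar transform, the test signal, and the noise level sigma are all assumptions made for the example.

```python
import numpy as np

def soft_threshold(x, t):
    """Move each coefficient towards 0 by t; coefficients within t of 0 become 0."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_transform(x):
    """Full orthonormal Haar decomposition of a signal of length 2**J."""
    details, a = [], x.copy()
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # smooth part
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail part
        details.append(d)
        a = s
    return details, a

def haar_inverse(details, a):
    """Invert haar_transform."""
    for d in reversed(details):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n
f = np.sin(4 * np.pi * t)                  # hypothetical smooth target function
sigma = 0.3                                # assumed known noise level
d_noisy = f + sigma * rng.standard_normal(n)

details, approx = haar_transform(d_noisy)
thresh = sigma * np.sqrt(2 * np.log(n))    # universal threshold in coefficient space
denoised = haar_inverse([soft_threshold(d, thresh) for d in details], approx)
```

Because the transform is orthonormal, the noise keeps standard deviation sigma in every coefficient, so a single threshold of sigma·√(2 log n) suppresses essentially all pure-noise coefficients while leaving large signal coefficients nearly intact.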
Wavelet shrinkage: Asymptopia?
 Journal of the Royal Statistical Society, Ser. B
, 1995
"... Considerable e ort has been directed recently to develop asymptotically minimax methods in problems of recovering in nitedimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators bein ..."
Abstract

Cited by 297 (36 self)
Considerable effort has been directed recently to develop asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons: sometimes similarity to known methods, sometimes computational intractability, and sometimes lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount √(2 log n)/√n. The method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions (e.g. pointwise error, global error measured in L^p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
Unconditional bases are optimal bases for data compression and for statistical estimation
 Applied and Computational Harmonic Analysis
, 1993
"... An orthogonal basis of L 2 which is also an unconditional basis of a functional space F is a kind of optimal basis for compressing, estimating, and recovering functions in F. Simple thresholding operations, applied in the unconditional basis, work essentially better for compressing, estimating, and ..."
Abstract

Cited by 172 (21 self)
An orthogonal basis of L^2 which is also an unconditional basis of a functional space F is a kind of optimal basis for compressing, estimating, and recovering functions in F. Simple thresholding operations, applied in the unconditional basis, work essentially better for compressing, estimating, and recovering than they do in any other orthogonal basis. In fact, simple thresholding in an unconditional basis works essentially better for recovery and estimation than other methods, period. (Performance is measured in an asymptotic minimax sense.) As an application, we formalize and prove Mallat's Heuristic, which says that wavelet bases are optimal for representing functions containing singularities, when there may be an arbitrary number of singularities, arbitrarily distributed.
The mathematics of learning: Dealing with data
 Notices of the American Mathematical Society
, 2003
"... Draft for the Notices of the AMS Learning is key to developing systems tailored to a broad range of data analysis and information extraction tasks. We outline the mathematical foundations of learning theory and describe a key algorithm of it. 1 ..."
Abstract

Cited by 167 (18 self)
Learning is key to developing systems tailored to a broad range of data analysis and information extraction tasks. We outline the mathematical foundations of learning theory and describe one of its key algorithms.
Explicit Cost Bounds of Algorithms for Multivariate Tensor Product Problems
 J. Complexity
, 1994
"... We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an "approximation to the solution. The cost bounds are of the for ..."
Abstract

Cited by 69 (10 self)
We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an ε-approximation to the solution. The cost bounds are of the form

(c(d) + 2) β₁ (β₂ + β₃ ln(1/ε)/(d − 1))^{β₄(d−1)} (1/ε)^{β₅}.

Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the βᵢ do not depend on d; they are determined by the properties of the problem for d = 1. For certain tensor product problems, these cost bounds do not exceed c(d) K ε^{−p} for some numbers K and p, both independent of d. We apply these general estimates to certain integration and approximation problems in the worst and average case settings. We also obtain an upper bound, which is independent of d, for the number, n(ε, d), of points for which discrepancy (with unequal weights) is at most ε: n(ε, d) ≤ 7.26 ...
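To get a feel for how the cost bound in this abstract scales, one can evaluate it numerically. The β values and the choice c(d) = d below are made up purely for illustration; the paper determines them from the d = 1 problem.

```python
import math

def cost_bound(eps, d, c_d, b1, b2, b3, b4, b5):
    """Evaluate (c(d)+2) * b1 * (b2 + b3*ln(1/eps)/(d-1))**(b4*(d-1)) * (1/eps)**b5.

    Requires d >= 2 and 0 < eps < 1. All b_i are hypothetical parameters here.
    """
    return ((c_d + 2) * b1
            * (b2 + b3 * math.log(1 / eps) / (d - 1)) ** (b4 * (d - 1))
            * (1 / eps) ** b5)

# Hypothetical parameters: c(d) = d, beta_i chosen arbitrarily for illustration.
for d in (2, 5, 10):
    print(d, cost_bound(1e-2, d, c_d=d, b1=1.0, b2=1.0, b3=0.5, b4=1.0, b5=2.0))
```

The (1/ε)^{β₅} factor dominates as ε shrinks; the middle factor captures the (at most polynomial in ln(1/ε)) growth in the dimension d.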
Random Approximation in Numerical Analysis
 Proceedings of the Conference "Functional Analysis", Essen
, 1994
"... this paper is twofold. In the first part (sections 2  6) I want to give a survey on recent developments of Monte Carlo complexity. This will include techniques to derive sharp lower bounds as well as the construction of concrete numerical methods which attain these optimal bounds. The field covered ..."
Abstract

Cited by 33 (22 self)
this paper is twofold. In the first part (sections 2–6) I want to give a survey on recent developments of Monte Carlo complexity. This will include techniques to derive sharp lower bounds as well as the construction of concrete numerical methods which attain these optimal bounds. The field covered here lies at the frontiers of several disciplines, among them theoretical computer science, numerical analysis, probability theory, approximation theory and, to a large extent, functional analysis. I want to stress the latter aspect and show how new techniques from Banach space and operator theory can be applied to Monte Carlo complexity. In the second part I want to present new results: the solution to a problem concerning the Monte Carlo complexity of Fredholm integral equations. This will demonstrate in detail the general approach outlined in part one. We develop a new, fast algorithm: it is a combination of Monte Carlo methods with the Galerkin technique, an approach which seems to be new to this field. The basis functions used for the Galerkin discretization are orthogonal splines of minimal smoothness. They lead to an implementable procedure of minimal computational cost. The paper is organized as follows. In section 2, the main notions of information-based complexity theory are explained. We cover both the deterministic and the stochastic setting in detail, also for the sake of later comparisons. Some relations to s-number theory are presented in section 3. The role of the average case in proofs of lower bounds for Monte Carlo methods is explained in section 4. In the following three sections, we analyse the complexity of basic numerical problems: section 5 deals with numerical integration and contains classical results on the complexity of Monte Carlo quadrature, toge...
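The classical Monte Carlo quadrature results surveyed here rest on the n^{-1/2} error rate of plain Monte Carlo, which is easy to observe empirically. The integrand and sample sizes below are toy choices for illustration only.

```python
import numpy as np

def mc_error(f, true_value, n, trials, rng):
    """Mean absolute error of plain Monte Carlo integration over [0, 1],
    averaged over several independent trials."""
    errs = [abs(f(rng.uniform(size=n)).mean() - true_value) for _ in range(trials)]
    return float(np.mean(errs))

rng = np.random.default_rng(0)
f = lambda x: x ** 2                       # integral over [0, 1] is 1/3
err_small_n = mc_error(f, 1 / 3, n=100, trials=50, rng=rng)
err_large_n = mc_error(f, 1 / 3, n=10_000, trials=50, rng=rng)
# multiplying n by 100 should shrink the error by roughly a factor of 10
```

The rate is dimension-independent, which is exactly what makes Monte Carlo methods central to the complexity questions this survey addresses.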
Comparison of Radial Basis Function Interpolants
 In Multivariate Approximation. From CAGD to Wavelets
, 1995
"... This paper compares radial basis function interpolants on different spaces. The spaces are generated by other radial basis functions, and comparison is done via an explicit representation of the norm of the error functional. The results pose some new questions for further research. x1. Introduction ..."
Abstract

Cited by 30 (8 self)
This paper compares radial basis function interpolants on different spaces. The spaces are generated by other radial basis functions, and comparison is done via an explicit representation of the norm of the error functional. The results pose some new questions for further research. §1. Introduction. We consider interpolation of real-valued functions f defined on a set Ω ⊆ ℝ^d, d ≥ 1. These functions are evaluated on a set X := {x_1, …, x_{N_X}} of N_X ≥ 1 pairwise distinct points x_1, …, x_{N_X} in Ω. If N ≥ 2, d ≥ 2 and Ω ⊆ ℝ^d are given with Ω containing at least an interior point, it is well known that there is no N-dimensional space of continuous functions on Ω that contains a unique interpolant for every f and every set X = {x_1, …, x_{N_X}} ⊂ Ω consisting of N = N_X data points. Thus the family of interpolants must necessarily depend on X. This can easily be achieved by using translates Φ(x − x_j) of a single continu...
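A bare-bones sketch of the kind of interpolant discussed above: an interpolant built from translates Φ(x − x_j) of a single radial function. The Gaussian kernel, shape parameter, node count, and test function are arbitrary choices for illustration, not the ones compared in the paper.

```python
import numpy as np

def rbf_interpolant(X, y, phi):
    """Return s(x) = sum_j c_j * phi(||x - x_j||) with s(x_j) = y_j for all nodes."""
    A = phi(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    c = np.linalg.solve(A, y)              # interpolation system A c = y
    return lambda x: phi(np.linalg.norm(x[None, :] - X, axis=-1)) @ c

phi = lambda r: np.exp(-(r / 0.3) ** 2)    # Gaussian RBF, shape parameter 0.3
rng = np.random.default_rng(1)
X = rng.uniform(size=(25, 2))              # scattered nodes in the unit square
y = np.sin(2 * np.pi * X[:, 0]) * X[:, 1]  # sampled test function
s = rbf_interpolant(X, y, phi)
```

Because the interpolation matrix A depends on the node set X, the interpolant adapts to the data sites, which is exactly the X-dependence the introduction argues is unavoidable.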
On The Structure Of Function Spaces In Optimal Recovery Of Point Functionals For ENO-Schemes By Radial Basis Functions
 Numer. Math
, 1996
"... . Radial basis functions are used in the recovery step of finite volume methods for the numerical solution of conservation laws. Being conditionally positive definite such functions generate optimal recovery splines in the sense of Micchelli and Rivlin in associated native spaces. We analyse the ..."
Abstract

Cited by 19 (2 self)
Radial basis functions are used in the recovery step of finite volume methods for the numerical solution of conservation laws. Being conditionally positive definite, such functions generate optimal recovery splines in the sense of Micchelli and Rivlin in associated native spaces. We analyse the solvability of the recovery problem of point functionals from cell average values with radial basis functions. Furthermore, we characterise the corresponding native function spaces and provide error estimates of the recovery scheme. Finally, we explicitly list the native spaces for a selection of radial basis functions, thin plate splines included, before we provide some numerical examples of our method. Contents: 1. Introduction; 2. Finite volume approximations (2.1 The governing equations; 2.2 Finite volume approximations on triangulations; 2.3 Node sets and ENO methods); 3. Recovery splines (3.1 Radial recovery; 3.2 Well-posedness of the recovery problem); 4. Error estimates a...
Average Case Complexity Of Linear Multivariate Problems Part I: Theory
 J. Complexity
, 1991
"... We study the average case complexity of linear multivariate problems, that is, the approximation of continuous linear operators on functions of d variables. The function spaces are equipped with Gaussian measures. We consider two classes of information. The first class std consists of function va ..."
Abstract

Cited by 18 (4 self)
We study the average case complexity of linear multivariate problems, that is, the approximation of continuous linear operators on functions of d variables. The function spaces are equipped with Gaussian measures. We consider two classes of information. The first class, Λ^std, consists of function values, and the second class, Λ^all, consists of all continuous linear functionals. Tractability of a linear multivariate problem means that the average case complexity of computing an ε-approximation is O((1/ε)^p) with p independent of d. The smallest such p is called the exponent of the problem. Under mild assumptions, we prove that tractability in Λ^all is equivalent to tractability in Λ^std, and that the difference of the exponents is at most 2. The proof of this result is not constructive. We provide a simple condition to check tractability in Λ^all. We also address the issue of how to construct optimal (or nearly optimal) sample points for linear multivariate problems. We use rela...
Analysis And Design Of Minimax-Optimal Interpolators
 IEEE Trans. Signal Proc
, 1998
"... We consider a class of interpolation algorithms, including the leastsquares optimal Yen interpolator, and we derive a closedform expression for the interpolation error for interpolators of this type. The error depends on the eigenvalue distribution of a matrix which is specified for each set of sa ..."
Abstract

Cited by 17 (3 self)
We consider a class of interpolation algorithms, including the least-squares optimal Yen interpolator, and we derive a closed-form expression for the interpolation error for interpolators of this type. The error depends on the eigenvalue distribution of a matrix which is specified for each set of sampling points. The error expression can be used to prove that the Yen interpolator is optimal. The implementation of the Yen algorithm suffers from numerical ill-conditioning, forcing the use of a regularized, approximate solution. We suggest a new, approximate solution, consisting of a sinc-kernel interpolator with specially chosen weighting coefficients. The newly designed sinc-kernel interpolator is compared with the usual sinc interpolator using Jacobian (area) weighting, through numerical simulations. We show that the sinc interpolator with Jacobian weighting works well only when the sampling is nearly uniform. The newly designed sinc-kernel interpolator is shown to perform better than ...
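The baseline method in this comparison, the sinc interpolator with Jacobian (area) weighting, can be sketched as follows. The signal, sampling grid, and jitter level are made-up illustration choices, and this is the conventional weighted-sinc baseline, not the authors' newly designed kernel.

```python
import numpy as np

def sinc_interp(t_s, x_s, fs, t):
    """Jacobian-weighted sinc reconstruction from nonuniform samples:
    s(t) = sum_j w_j * x(t_j) * sinc((t - t_j) * fs),
    where w_j ~ (t_{j+1} - t_{j-1}) * fs / 2 weights each sample by its local area."""
    w = np.gradient(t_s) * fs              # central-difference area weights
    return np.sum(w * x_s * np.sinc((t[:, None] - t_s[None, :]) * fs), axis=1)

fs = 20.0                                  # nominal sampling rate
t_s = np.arange(40) / fs                   # nearly uniform sample times
t_s[1:-1] += 0.002 * np.sin(7 * t_s[1:-1])  # small deterministic jitter
x = lambda t: np.cos(2 * np.pi * 3 * t)    # 3 Hz tone, well below Nyquist
t_query = np.linspace(0.5, 1.5, 101)       # interior points, away from the edges
err = np.max(np.abs(sinc_interp(t_s, x(t_s), fs, t_query) - x(t_query)))
```

With nearly uniform sampling the reconstruction error at interior points stays small, consistent with the abstract's observation that Jacobian weighting works well only in that regime; larger jitter degrades it.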