Results 1 - 7 of 7
Wavelet Shrinkage: Asymptopia?
Journal of the Royal Statistical Society, Ser. B, 1995
Abstract

Cited by 297 (36 self)
Considerable effort has been directed recently to develop asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons: sometimes, similarity to known methods; sometimes, computational intractability; and sometimes, lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount sqrt(2 log n)/sqrt(n). The method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions (e.g. pointwise error, global error measured in L_p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
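The shrinkage rule this abstract describes (translating empirical wavelet coefficients toward the origin by sqrt(2 log n)/sqrt(n)) is soft thresholding. A minimal NumPy sketch of the rule itself; the wavelet transform step is omitted and the noise level sigma is assumed known, both simplifications of this illustration:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Translate each coefficient toward the origin by t; those within t of 0 become 0."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# Hypothetical setup: n empirical coefficients, each with noise level sigma/sqrt(n).
n = 1024
sigma = 1.0
t = np.sqrt(2.0 * np.log(n)) * sigma / np.sqrt(n)  # the threshold from the abstract

rng = np.random.default_rng(0)
noise_coeffs = rng.normal(0.0, sigma / np.sqrt(n), size=n)  # pure-noise coefficients
shrunk = soft_threshold(noise_coeffs, t)
# The sqrt(2 log n) factor is calibrated so that pure noise is typically sent to zero.
```

The point of the sqrt(2 log n) factor is that the maximum of n standard normal draws concentrates near sqrt(2 log n), so coefficients that are pure noise rarely survive the threshold.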
Adaptive hypothesis testing using wavelets
Annals of Statistics, 1996
Abstract

Cited by 91 (10 self)
Let a function f be observed with noise. We wish to test the null hypothesis that the function is identically zero against a composite nonparametric alternative: functions from the alternative set are separated away from zero in an integral (e.g. L_2) norm and also possess some smoothness properties. The minimax rate of testing for this problem was evaluated in earlier papers by Ingster and by Lepski and Spokoiny under different kinds of smoothness assumptions. It was shown that both the optimal rate of testing and the structure of optimal (in rate) tests depend on smoothness parameters which are usually unknown in practical applications. In this paper the problem of adaptive (assumption-free) testing is considered. It is shown that adaptive testing without loss of efficiency is impossible. An extra log log factor is an inessential but unavoidable payment for the adaptation. A simple adaptive test based on wavelet technique is constructed which is nearly minimax for a wide range of Besov classes.
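To convey the flavor of such a test (this is an illustrative sketch, not the paper's exact construction), one can form a chi-square-type statistic per resolution level of wavelet coefficients and take the maximum over levels; under the null f = 0, each centered, normalized level statistic is approximately standard normal. Function name and normalization are assumptions of this sketch:

```python
import numpy as np

def max_level_stat(coeffs_by_level, sigma):
    """For each level, center the sum of squared (noise-normalized) coefficients
    at its null mean m and divide by its null std sqrt(2m); return the maximum.
    Each per-level statistic is approximately N(0, 1) when f = 0."""
    stats = []
    for c in coeffs_by_level:
        c = np.asarray(c, dtype=float)
        m = len(c)
        stats.append((np.sum((c / sigma) ** 2) - m) / np.sqrt(2.0 * m))
    return max(stats)

# The adaptation cost mentioned in the abstract appears in the critical value:
# controlling the maximum over many levels forces a log log-type inflation.
```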
Optimal pointwise adaptive methods in nonparametric estimation
Annals of Statistics, 1997
Abstract

Cited by 67 (11 self)
The problem of optimal adaptive estimation of a function at a given point from noisy data is considered. Two procedures are proved to be asymptotically optimal for different settings. First we study the problem of bandwidth selection for nonparametric pointwise kernel estimation with a given kernel. We propose a bandwidth selection procedure and prove its optimality in the asymptotic sense. Moreover, this optimality is stated not only among kernel estimators with a variable bandwidth: the resulting estimator is asymptotically optimal among all feasible estimators. The important feature of this procedure is that it is fully adaptive and it "works" for a very wide class of functions obeying a mild regularity restriction. With this procedure the attainable accuracy of estimation depends on the function itself and is expressed in terms of the "ideal adaptive bandwidth" corresponding to this function and a given kernel. The second procedure can be considered as a specialization of the first one under the qualitative assumption that the function to be estimated belongs to some Hölder class Σ(β, L) with unknown parameters β, L. This assumption allows us to choose a family of kernels in an optimal way and the resulting procedure appears to be asymptotically optimal in the adaptive sense in any range of adaptation with β ≤ 2.
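The bandwidth-selection idea can be sketched as a Lepski-type rule: scan bandwidths from small to large and keep enlarging as long as the new estimate stays within a noise-calibrated band of every estimate computed at a smaller bandwidth. This is a simplified illustration, not the paper's exact procedure; the Gaussian kernel, the constant kappa, and the crude variance proxy are all assumptions:

```python
import numpy as np

def kernel_estimate(x0, x, y, h):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel (illustrative choice)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def lepski_bandwidth(x0, x, y, bandwidths, sigma, kappa=2.0):
    """Largest bandwidth whose estimate agrees, up to the noise level,
    with every estimate computed at a smaller bandwidth (Lepski-type rule)."""
    bandwidths = sorted(bandwidths)
    est = {h: kernel_estimate(x0, x, y, h) for h in bandwidths}

    def noise(g):
        # Crude stand-in for the stochastic error of the estimate at bandwidth g.
        return kappa * sigma / np.sqrt(len(x) * g)

    chosen = bandwidths[0]
    for i, h in enumerate(bandwidths):
        if all(abs(est[h] - est[g]) <= noise(g) for g in bandwidths[:i]):
            chosen = h  # still consistent with all smaller bandwidths: accept
        else:
            break  # bias has become visible above the noise: stop enlarging
    return chosen
```

The trade-off the rule exploits: enlarging the bandwidth lowers variance but raises bias, and the first bandwidth at which the estimate drifts outside the noise band of a smaller one signals that bias has taken over.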
Neoclassical minimax problems, thresholding and adaptive function estimation
Bernoulli, 1996
Abstract

Cited by 22 (1 self)
We study the problem of estimating θ from data Y ~ N(θ, σ²) under squared-error loss. We define three new scalar minimax problems in which the risk is weighted by the size of θ. Simple thresholding gives asymptotically minimax estimates in all three problems. We indicate the relationships of the new problems to each other and to two other neoclassical problems: the problems of the bounded normal mean and of the risk-constrained normal mean. Via the wavelet transform, these results have implications for adaptive function estimation: (1) estimating functions of unknown type and degree of smoothness in a global ℓ_2 norm; (2) estimating a function of unknown degree of local Hölder smoothness at a fixed point. In setting (2), the scalar minimax results imply: (a) that it is not possible to fully adapt to an unknown degree of smoothness (adaptation imposes a performance cost); and (b) that simple thresholding of the empirical wavelet transform gives an estimate of a function at a fixed point which is, to within constants, optimally adaptive to unknown degree of smoothness.
On Estimating A Dynamic Function Of A Stochastic System With Averaging
1997
Abstract

Cited by 7 (4 self)
We consider a two-scale diffusion system in which the drift and diffusion parameters of the "slow" component are contaminated by the "fast" unobserved component. The goal is to estimate the dynamic function which is defined by averaging the drift coefficient of the "slow" component with respect to the stationary distribution of the "fast" one. We apply a locally linear smoother with a data-driven bandwidth choice. The procedure is fully adaptive and nearly optimal up to a log log factor.
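The estimator named here, a locally linear smoother, fits a weighted least-squares line around the target point and reads off the intercept; because the fit includes the linear term, it reproduces straight lines exactly, which reduces boundary bias relative to a local constant fit. A minimal sketch with a Gaussian kernel and a fixed bandwidth (both assumptions of this illustration; the paper's data-driven bandwidth rule is omitted):

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate at x0: weighted least-squares line in (x - x0)
    with Gaussian kernel weights; the intercept is the fitted value at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)        # kernel weights around x0
    X = np.column_stack([np.ones_like(x), x - x0])  # design: intercept + slope
    A = X.T @ (w[:, None] * X)                    # weighted normal equations
    b = X.T @ (w * y)
    beta = np.linalg.solve(A, b)
    return beta[0]                                 # intercept = estimate at x0
```

Usage: on exactly linear data the estimator is exact regardless of bandwidth, e.g. `local_linear(0.5, x, 2.0 + 3.0 * x, 0.1)` returns 3.5 up to floating-point error.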
Deviation Bounds for Wavelet Shrinkage
2001
Abstract

Cited by 1 (0 self)
We analyse the wavelet shrinkage algorithm of Donoho and Johnstone in order to assess the quality of the reconstruction of a signal obtained from noisy samples. We prove deviation bounds for the maximum of the squares of the error, and for the average of the squares of the error, under the assumption that the signal comes from a Hölder class and the noise samples are independent, of zero mean, and bounded. Our main technique is Talagrand's isoperimetric theorem. Our bounds refine the known bounds on the expectation of the average of the squares of the error. Second author's research supported in part by NSF grant DMS-9970471.