Results 1 – 6 of 6
Wavelet shrinkage: Asymptopia?
Journal of the Royal Statistical Society, Ser. B, 1995
Abstract

Cited by 297 (36 self)
Considerable effort has been directed recently to developing asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons: sometimes similarity to known methods, sometimes computational intractability, and sometimes lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount √(2 log n)/√n. The method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions (e.g. pointwise error, global error measured in L_p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
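The shrinkage rule described in the abstract can be sketched in a few lines: each empirical wavelet coefficient is pulled toward the origin by the stated amount √(2 log n)/√n, with coefficients smaller than that set to zero (soft thresholding). This is a minimal illustration; the function name and the unit noise scale are assumptions, not from the paper.

```python
import numpy as np

def soft_threshold(coeffs, n, sigma=1.0):
    """Shrink empirical wavelet coefficients toward the origin by
    t = sigma * sqrt(2 log n) / sqrt(n), zeroing those smaller than t."""
    t = sigma * np.sqrt(2.0 * np.log(n)) / np.sqrt(n)
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

In practice the rule is applied to the wavelet coefficients of the noisy data, and the curve estimate is obtained by inverting the wavelet transform.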
Variable kernel estimates: On the impossibility of tuning the parameters
in: High-Dimensional Probability II (edited by …), 2000
Abstract

Cited by 9 (1 self)
For the standard kernel density estimate, it is known that one can tune the bandwidth such that the expected L1 error is within a constant factor of the optimal L1 error (obtained when one is allowed to choose the bandwidth with knowledge of the density). In this paper, we pose the same problem for variable-bandwidth kernel estimates, where the bandwidths are allowed to depend upon the location. We show in particular that for positive kernels on the real line, for any data-based bandwidth, there exists a density for which the ratio of expected L1 error over optimal L1 error tends to infinity. Thus, the problem of tuning the variable bandwidth in an optimal manner is “too hard”. Moreover, from the class of counterexamples exhibited in the paper, it appears that placing conditions on the densities (monotonicity, convexity, smoothness) does not help.
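A variable-bandwidth estimate of the kind considered in this abstract attaches a separate bandwidth h_i to each data point, so smoothing can vary with location. A minimal sketch with a Gaussian kernel (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def variable_kde(x, data, bandwidths):
    """Variable kernel density estimate: each data point X_i carries its
    own bandwidth h_i, so the amount of smoothing depends on location."""
    h = np.asarray(bandwidths, dtype=float)
    u = (x[:, None] - data[None, :]) / h[None, :]
    kern = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel
    return (kern / h[None, :]).mean(axis=1)
```

The impossibility result says that no data-based choice of these h_i can keep the expected L1 error within a constant factor of the optimum for every density.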
Inequalities for a New Data-Based Method for Selecting . . .
in: M.L. Puri (editor), Festschrift in Honour of George Roussas, VSP International Science Publishers, 1998
Abstract
We continue the development of a method for the selection of a bandwidth or a number of design parameters in density estimation. We provide explicit non-asymptotic, density-free inequalities that relate the L1 error of the selected estimate to that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assure that, for all densities, the L1 error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel, plus a constant times √(log n/n), where n is the sample size and the constant only depends on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
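The selection problem posed in this abstract (pick a bandwidth with guarantees on the resulting L1 error) can be illustrated with a much cruder stand-in: split the sample, build estimates on each half, and pick the candidate bandwidth whose two half-sample estimates are closest in L1. This is only a hedged sketch; it is not the inequality-backed method of the paper, and the function names and the splitting scheme are illustrative.

```python
import numpy as np

def gauss_kde(x, data, h):
    """Standard Gaussian kernel density estimate with bandwidth h."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

def select_bandwidth(data, candidates, grid, seed=0):
    """Crude split-sample surrogate for data-based bandwidth selection:
    minimize the L1 distance between estimates built on the two halves.
    (Illustrative only; NOT the method developed in the paper.)"""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    half = len(data) // 2
    a, b = data[idx[:half]], data[idx[half:]]
    dx = grid[1] - grid[0]
    def l1(h):
        return np.abs(gauss_kde(grid, a, h) - gauss_kde(grid, b, h)).sum() * dx
    return min(candidates, key=l1)
```

The paper's point is stronger: its selected estimate provably comes within roughly a factor of three of the best estimate in the class, plus the √(log n/n) term, for every density.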
Adaptive estimation on anisotropic Hölder spaces, Part II: Partially adaptive case
, 2006
Abstract
In this paper, we consider a particular case of adaptation. Let us recall that, in the first paper (the “fully adaptive case”), a large collection of anisotropic Hölder spaces is fixed and the goal is to construct an adaptive estimator with respect to the completely unknown smoothness parameter. Here the problem is quite different: an additional piece of information is known, namely the effective smoothness of the signal. We prove a minimax result which demonstrates that knowledge of this type is useful, because the rate of convergence is better than that obtained without knowledge of the effective smoothness. Moreover, we link this problem with maxiset theory.
Almost Sure Testability of Classes of Densities
Abstract
Let a class F of densities be given. We draw an i.i.d. sample from a density f which may or may not be in F. After every n, one must make a guess whether f ∈ F or not. A class is almost surely testable if there exists a testing sequence such that, for any f, we make finitely many errors almost surely. In this paper, several results are given that allow one to decide whether a class is almost surely testable. For example, continuity and square integrability are not testable, but unimodality, log-concavity, and boundedness by a given constant are. Keywords and phrases: density estimation, kernel estimate, convergence, testing, asymptotic optimality, minimax rate, minimum distance estimation, total boundedness. 1991 Mathematics Subject Classifications: Primary 62G05. The first author's work was supported by NSERC Grant A3456 and by FCAR Grant 90ER0291. The second author's work was supported by DIGES Grant PB960300.
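One concrete instance of a testing sequence, for the testable class "densities bounded by a given constant", can be sketched by comparing the sup of a kernel estimate against the bound plus a shrinking margin. The bandwidth and margin rates below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def guess_bounded(data, bound):
    """One step of an illustrative testing sequence for the class of
    densities bounded by `bound`: guess membership iff the sup of a
    Gaussian kernel estimate stays below bound plus a margin that
    shrinks with n (both rates here are illustrative choices)."""
    n = len(data)
    h = n ** (-0.2)        # illustrative bandwidth rate
    margin = n ** (-0.25)  # illustrative shrinking margin
    grid = np.linspace(data.min() - 1.0, data.max() + 1.0, 512)
    u = (grid[:, None] - data[None, :]) / h
    est = np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))
    return est.max() <= bound + margin
```

Running such a guess after every n yields a testing sequence; the paper's results characterize for which classes F a sequence of this kind can make only finitely many errors almost surely.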