Results 1-10 of 34
Wavelet shrinkage: asymptopia
Journal of the Royal Statistical Society, Ser. B, 1995
Cited by 297 (36 self)
Abstract:
Considerable effort has been directed recently to develop asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons: sometimes similarity to known methods, sometimes computational intractability, and sometimes lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount √(2 log n)/√n. The method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions (e.g. pointwise error, global error measured in L^p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
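The shrinkage rule the abstract describes is soft thresholding of empirical wavelet coefficients. Below is a minimal pure-Python sketch using an orthonormal Haar transform; the function names and the user-supplied noise scale `sigma` are my own illustration (the paper's calibration √(2 log n)/√n corresponds to noise level 1/√n), not the authors' code.

```python
import math

def haar_transform(x):
    """Full orthonormal Haar decomposition of a length-2^k signal."""
    detail_levels = []
    a = list(x)
    while len(a) > 1:
        s = [(a[2*i] + a[2*i+1]) / math.sqrt(2) for i in range(len(a) // 2)]
        d = [(a[2*i] - a[2*i+1]) / math.sqrt(2) for i in range(len(a) // 2)]
        detail_levels.append(d)            # finest level first
        a = s
    return a[0], detail_levels

def haar_inverse(s0, detail_levels):
    a = [s0]
    for d in reversed(detail_levels):      # coarsest level first
        a = [v for s, di in zip(a, d)
             for v in ((s + di) / math.sqrt(2), (s - di) / math.sqrt(2))]
    return a

def soft_threshold(c, t):
    """Translate a coefficient towards the origin by t, clipping at zero."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def wavelet_shrink(y, sigma):
    """Shrink all detail coefficients by the universal threshold."""
    n = len(y)
    t = sigma * math.sqrt(2 * math.log(n))
    s0, levels = haar_transform(y)
    levels = [[soft_threshold(c, t) for c in level] for level in levels]
    return haar_inverse(s0, levels)
```

With a flat signal the detail coefficients are exactly zero, so shrinkage reproduces the signal; with noise added, small detail coefficients are annihilated, which is the source of the spatial adaptivity the abstract mentions.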
Alpha-Divergence for Classification, Indexing and Retrieval
UNIVERSITY OF MICHIGAN, 2001
Cited by 52 (5 self)
Abstract:
Motivated by Chernoff's bound on the asymptotic probability of error, we propose the alpha-divergence measure and a surrogate, the alpha-Jensen difference, for feature classification, indexing and retrieval in image and other databases. ...
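One common form of the alpha-divergence (the Rényi form, which recovers the Kullback-Leibler divergence as alpha → 1) and a Jensen-type surrogate can be sketched for discrete distributions as follows. The exact definitions and normalizations used in the paper may differ; these names are illustrative only.

```python
import math

def alpha_divergence(p, q, alpha):
    """Rényi-type alpha-divergence between two discrete distributions."""
    assert 0 < alpha != 1
    return math.log(sum(pi**alpha * qi**(1 - alpha)
                        for pi, qi in zip(p, q))) / (alpha - 1)

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha of a discrete distribution."""
    return math.log(sum(pi**alpha for pi in p)) / (1 - alpha)

def alpha_jensen_difference(p, q, alpha):
    """Entropy of the equal-weight mixture minus the mean of the entropies;
    nonnegative for alpha in (0, 1) by concavity of the Rényi entropy."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return renyi_entropy(m, alpha) - (renyi_entropy(p, alpha) +
                                      renyi_entropy(q, alpha)) / 2
```

The Jensen difference needs only entropies of the individual distributions and their mixture, which is what makes it a cheap surrogate for the divergence in indexing applications.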
Estimation and confidence sets for sparse normal mixtures
2006
Cited by 35 (17 self)
Inverse Problems as Statistics
INVERSE PROBLEMS, 2002
Cited by 20 (3 self)
Abstract:
What mathematicians, scientists, engineers, and statisticians mean by "inverse problem" differs. For a statistician, an inverse problem is an inference or estimation problem...
Root-n Consistent Density Estimators for Sums of Independent Random Variables
Cited by 19 (9 self)
Abstract:
The density of a sum of independent random variables can be estimated by the convolution of kernel estimators for the marginal densities. We show under mild conditions that the resulting estimator is root-n consistent and converges in distribution in the spaces C_0(R) and L_1 to a centered Gaussian process.
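With a Gaussian kernel the convolution of the two marginal kernel estimators has a closed form: it is itself a Gaussian kernel estimator centered at the pairwise sums X_i + Y_j, with bandwidth h√2. A small sketch under that assumption (variable names are illustrative, not from the paper):

```python
import math

def gaussian(x, s):
    """Density of N(0, s^2) at x."""
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def convolution_density(z, xs, ys, h):
    """Convolution of Gaussian-kernel density estimates of X and Y,
    evaluated at z: an average of N(X_i + Y_j, 2h^2) densities."""
    s = h * math.sqrt(2)
    return sum(gaussian(z - xi - yj, s)
               for xi in xs for yj in ys) / (len(xs) * len(ys))
```

Each observation enters many of the n·m pairwise sums, and this extra averaging is the intuition behind the parametric root-n rate reported in the abstract, despite the problem being nonparametric.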
Multivariate Log-Concave Distributions as a Nearly Parametric Model
2009
Cited by 12 (0 self)
Abstract:
In this paper we show that the family P_d of probability distributions on R^d with log-concave densities satisfies a strong continuity condition. In particular, it turns out that weak convergence within this family entails (i) convergence in total variation distance, (ii) convergence of arbitrary moments, and (iii) pointwise convergence of Laplace transforms. Hence the nonparametric model P_d has properties similar to those of parametric models such as, for instance, the family of all d-variate Gaussian distributions.
On the Testability of Identification in Some Nonparametric Models with Endogeneity
2013
Cited by 11 (1 self)
Abstract:
This paper examines three distinct hypothesis testing problems that arise in the context of identification of some nonparametric models with endogeneity. The first hypothesis testing problem we study concerns testing necessary conditions for identification in some nonparametric models with endogeneity involving mean independence restrictions. These conditions are typically referred to as completeness conditions. The second and third hypothesis testing problems we examine concern testing for identification directly in some nonparametric models with endogeneity involving quantile independence restrictions. For each of these hypothesis testing problems, we provide conditions under which any test will have power no greater than size against any alternative. In this sense, we conclude that no nontrivial tests for these hypothesis testing problems exist.
Application of Local Rank Tests to Nonparametric Regression
1999
Cited by 10 (3 self)
Abstract:
Let Y_i = f(x_i) + E_i (1 ≤ i ≤ n) with given covariates x_1 < x_2 < ... < x_n, an unknown regression function f and independent random errors E_i with median zero. It is shown how to apply several linear rank test statistics simultaneously in order to test monotonicity of f in various regions and to identify its local extrema.

Keywords and phrases: exponential inequality, linear rank statistic, modality, monotonicity, multiscale testing, quadratic complexity.

1 Introduction. Suppose that one observes (x_1, Y_1), (x_2, Y_2), ..., (x_n, Y_n), where x_1 < x_2 < ... < x_n are given real numbers, and the Y_i are independent random variables with continuous distribution functions G_i(·) := P{Y_i ≤ ·}. With G := ((x_i, G_i))_{1 ≤ i ≤ n} we call G increasing on an interval J ⊂ R if G_i ≤_st G_j whenever x_i, x_j ∈ J and x_i ≤ x_j. Here G_i ≤_st G_j means that G_i is stochastically smaller than G_j, that is, G_i ≥ G_j pointwise. Analogously we call G decre...
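A single standardized linear rank statistic of the kind the paper combines across intervals can be sketched as follows. This uses centered linear scores and the permutation variance under the null hypothesis of exchangeability; the multiscale combination over many regions is omitted, and the names are my own.

```python
import math

def ranks(values):
    """Rank of each entry (1 = smallest), assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def linear_rank_statistic(y):
    """Standardized linear rank statistic with centered linear scores.
    Large positive values indicate an increasing trend in y,
    large negative values a decreasing trend."""
    n = len(y)
    r = ranks(y)
    c = [i - (n - 1) / 2 for i in range(n)]          # centered scores, sum 0
    t = sum(ci * ri for ci, ri in zip(c, r))
    # Var(t) when the ranks are a uniformly random permutation of 1..n
    var = sum(ci * ci for ci in c) * n * (n + 1) / 12
    return t / math.sqrt(var)
```

Applying this statistic to the responses restricted to many subintervals of the covariate range, and comparing each value to a common critical level, is the multiscale idea behind testing monotonicity locally and locating extrema.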
Estimating the number of classes
Annals of Statistics, 2007
Cited by 3 (1 self)
Abstract:
Estimating the unknown number of classes in a population has numerous important applications. In a Poisson mixture model, the problem is reduced to estimating the odds that a class is undetected in a sample. The discontinuity of the odds prevents the existence of locally unbiased and informative estimators and restricts confidence intervals to be one-sided. Confidence intervals for the number of classes are also necessarily one-sided. A sequence of lower bounds to the odds is developed and used to define pseudo maximum likelihood estimators for the number of classes.