Results 1–10 of 25
Asymptotic equivalence of spectral density estimation and Gaussian white noise
, 2009
Cited by 25 (8 self)
We consider the statistical experiment given by a sample y(1), ..., y(n) of a stationary Gaussian process with an unknown smooth spectral density f. Asymptotic equivalence, in the sense of Le Cam's deficiency Δ-distance, to two Gaussian experiments with simpler structure is established. The first is given by independent zero-mean Gaussians with variance approximately f(ω_i), where ω_i is a uniform grid of points in (−π, π) (nonparametric Gaussian scale regression). This approximation is closely related to well-known asymptotic independence results for the periodogram and the corresponding inference methods. The second asymptotic equivalence is to a Gaussian white noise model where the drift function is the log-spectral density. This represents the step from a Gaussian scale model to a location model, and also has a counterpart in established inference methods, namely log-periodogram regression. The problem of simple explicit equivalence maps (Markov kernels), which would allow inference to be carried over directly, arises in this context but is not solved here.
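The periodogram approximations referred to in this abstract can be illustrated numerically. The sketch below is a hedged example, not the paper's construction: an AR(1) process is chosen only because its spectral density is known in closed form, and it checks that the log-periodogram residuals center near minus the Euler–Mascheroni constant, as in log-periodogram regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
phi = 0.5  # illustrative AR(1) coefficient

# AR(1): X_t = phi*X_{t-1} + e_t, with unit-variance innovations,
# has spectral density f(w) = 1 / (2*pi*|1 - phi*exp(-iw)|^2).
m = n + 200  # extra samples as burn-in
e = rng.standard_normal(m)
x = np.zeros(m)
for t in range(1, m):
    x[t] = phi * x[t - 1] + e[t]
x = x[200:]  # drop burn-in

# Periodogram at the Fourier frequencies w_j = 2*pi*j/n, j = 1..n/2-1
j = np.arange(1, n // 2)
w = 2 * np.pi * j / n
I = np.abs(np.fft.fft(x)[j]) ** 2 / (2 * np.pi * n)

# True spectral density of the AR(1) process
f = 1.0 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * w)) ** 2)

# Log-periodogram regression residuals: log I_j - log f(w_j) behaves
# like log(chi^2_2 / 2), whose mean is minus the Euler-Mascheroni constant.
gamma = 0.5772156649015329
resid = np.log(I) - np.log(f)
print(resid.mean())  # close to -gamma
```

The near-independence and exponential-type distribution of the periodogram ordinates is exactly what the first asymptotic equivalence in the abstract formalizes.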
Estimating the intensity of a random measure by histogram type estimators
, 2006
Cited by 18 (1 self)
The purpose of this paper is to estimate the intensity of some random measure N on a set X by a piecewise-constant function on a finite partition of X. Given a (possibly large) family M of candidate partitions, we build a piecewise-constant estimator (histogram) on each of them and then use the data to select one estimator from the family. Choosing the square of a Hellinger-type distance as our loss function, we show that each estimator built on a given partition satisfies an analogue of the classical squared-bias-plus-variance risk bound. Moreover, the selection procedure leads to a final estimator satisfying an oracle-type inequality, with, as usual, a possible loss corresponding to the complexity of the family M. When this complexity is not too high, the selected estimator has a risk bounded, up to a universal constant, by the smallest risk bound obtained for the estimators in the family. For suitable choices of the family of partitions, we deduce uniform risk bounds over various classes of intensities. Our approach applies to the estimation of the intensity of an inhomogeneous Poisson process, among other counting processes, and to the estimation of the mean of a random vector with nonnegative components.
The root–unroot algorithm for density estimation as implemented via wavelet block thresholding
, 2010
ADAPTIVE NONPARAMETRIC CONFIDENCE SETS
, 2006
Cited by 15 (0 self)
We construct honest confidence regions for a Hilbert space-valued parameter in various statistical models. The confidence sets can be centered at arbitrary adaptive estimators, and have a diameter that adapts optimally to a given selection of models. The latter adaptation is necessarily limited in scope. We review the notion of adaptive confidence regions, and relate the optimal rates of the diameter of adaptive confidence regions to the minimax rates for testing and estimation. Applications include the finite normal mean model, the white noise model, density estimation, and regression with random design.
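As a baseline for the adaptive constructions this abstract discusses, a non-adaptive confidence ball in the finite normal mean model is easy to verify by simulation. This is a sketch of the textbook chi-square ball only, not the paper's adaptive procedure; dimension, noise level, and level are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
d, sigma, alpha = 50, 1.0, 0.05
theta = np.zeros(d)  # true mean in the finite normal mean model

# Fixed-radius confidence ball centered at the observation y:
# {t : ||y - t||^2 <= sigma^2 * chi2_{d, 1-alpha}} has exact coverage 1-alpha,
# but its diameter does not adapt to the unknown smoothness of theta.
radius2 = sigma**2 * stats.chi2.ppf(1 - alpha, df=d)

reps, hits = 2000, 0
for _ in range(reps):
    y = theta + sigma * rng.standard_normal(d)
    hits += np.sum((y - theta) ** 2) <= radius2
print(hits / reps)  # empirical coverage near 0.95
```

The paper's point is precisely that one can keep this honesty while letting the radius shrink on nicer parameter sets, within the limits tied to minimax testing rates.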
Tight conditions for consistent variable selection in high dimensional nonparametric regression
Tusnády inequality revisited
Ann. Statist.
, 2004
Cited by 7 (1 self)
Tusnády’s inequality is the key ingredient in the KMT/Hungarian coupling of the empirical distribution function with a Brownian bridge. We present an elementary proof of a result that sharpens Tusnády’s inequality, modulo constants. Our method uses the beta integral representation of binomial tails, a simple Taylor expansion, and some novel bounds for the ratios of normal tail probabilities.
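The beta integral representation of binomial tails mentioned in the abstract is the classical identity P(Bin(n, p) ≥ k) = I_p(k, n − k + 1), with I the regularized incomplete beta function. A small numerical check, with illustrative parameters, using SciPy:

```python
from math import comb

from scipy.special import betainc

n, p, k = 30, 0.4, 17

# Beta integral representation of the binomial upper tail:
# P(Bin(n, p) >= k) = I_p(k, n - k + 1), the regularized incomplete beta.
tail_beta = betainc(k, n - k + 1, p)

# Direct summation of the binomial tail for comparison.
tail_sum = sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))

print(tail_beta, tail_sum)
```

Writing the tail as an integral is what makes Taylor expansion of the integrand tractable in proofs of this kind.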
Asymptotic equivalence of nonparametric autoregression and nonparametric regression
, 2006
Cited by 7 (0 self)
It is proved that nonparametric autoregression is asymptotically equivalent, in the sense of Le Cam’s deficiency distance, to nonparametric regression with random design as well as with regular nonrandom design. The setting assumes observations X_0, ..., X_n from a stationary autoregressive process (X_i)_{i=0,...,n} obeying the model equation X_i = f(X_{i−1}) + ε_i.
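The regression counterpart of this model can be illustrated by simulating the autoregression and fitting a kernel smoother to the pairs (X_{i−1}, X_i), exactly as one would in regression with random design. The choice of f, noise level, and bandwidth below are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
f = np.tanh  # an illustrative smooth, contracting autoregression function

# Simulate X_i = f(X_{i-1}) + eps_i with Gaussian innovations.
n = 5000
x = np.zeros(n + 1)
for i in range(1, n + 1):
    x[i] = f(x[i - 1]) + 0.5 * rng.standard_normal()

# Treat (X_{i-1}, X_i) as random-design regression pairs and estimate f
# with a Nadaraya-Watson kernel smoother (bandwidth chosen by eye).
X, Y = x[:-1], x[1:]

def nw(t, h=0.2):
    w = np.exp(-0.5 * ((t - X) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

grid = np.linspace(-1, 1, 5)
err = max(abs(nw(t) - f(t)) for t in grid)
print(err)  # small at this sample size
```

That a regression method applied to the lagged pairs recovers f is the practical face of the asymptotic equivalence the paper proves.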
Adaptive estimation of and oracle inequalities for probability densities
, 2004
Cited by 6 (1 self)
The theory of adaptive estimation and oracle inequalities for the case of Gaussian-shift, finite-interval experiments has made significant progress in recent years. In particular, sharp-minimax adaptive estimators and exact exponential-type oracle inequalities have been suggested for a vast set of functions, including analytic and Sobolev functions with any positive index, as well as for Efromovich–Pinsker and Stein blockwise-shrinkage estimators. Is it possible to obtain similar results for the more interesting applied problem of density estimation and/or the dual problem of characteristic function estimation? The answer is “yes.” In particular, the results obtained include exact exponential-type oracle inequalities which allow one to consider, for the first time in the literature, simultaneous sharp-minimax estimation of Sobolev densities with any positive index (not necessarily larger than 1/2), infinitely differentiable densities (including analytic, entire, and stable), as well as not absolutely integrable characteristic functions. The same adaptive estimator is also rate-minimax over a familiar class of distributions with bounded spectrum, where the density and the characteristic function can be estimated at the parametric rate.
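A minimal sketch of Stein-type blockwise shrinkage in a Gaussian sequence model, used here as a stand-in for the Gaussian-shift experiment the abstract mentions. The signal, noise level, and dyadic blocks are illustrative assumptions, not the estimators constructed in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
# Gaussian sequence model y_k = theta_k + eps * z_k; theta has
# Sobolev-type decay, eps = 1/sqrt(n) as in a size-n experiment.
n = 1024
eps = 1.0 / np.sqrt(n)
k = np.arange(1, n + 1)
theta = k**-1.5  # illustrative smooth signal
y = theta + eps * rng.standard_normal(n)

# Blockwise (Stein-type) shrinkage over dyadic blocks: within each block,
# multiply by max(0, 1 - B*eps^2 / ||y_block||^2), B the block length.
theta_hat = np.zeros(n)
start, size = 0, 1
while start < n:
    stop = min(start + size, n)
    block = y[start:stop]
    B = stop - start
    norm2 = np.sum(block**2)
    shrink = max(0.0, 1.0 - B * eps**2 / norm2) if norm2 > 0 else 0.0
    theta_hat[start:stop] = shrink * block
    start, size = stop, size * 2

risk = np.sum((theta_hat - theta) ** 2)
naive = np.sum((y - theta) ** 2)
print(risk, naive)  # shrinkage beats the raw observations
```

Blocks dominated by noise are shrunk toward zero while signal-bearing blocks are nearly kept, which is the mechanism behind the oracle inequalities for blockwise-shrinkage estimators.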
A note on quantile coupling inequalities and their applications
 Submitted. Available from www.stat.yale.edu/˜hz68
, 2006
Cited by 3 (2 self)
A relationship between large deviations and quantile coupling is studied. We apply this relationship to the coupling of the sum of n i.i.d. symmetric random variables with a normal random variable, improving the classical quantile coupling inequalities (the key part in the celebrated KMT constructions) by a rate of 1/√n for random variables with continuous distributions, or the same rate modulo constants in the general case. Applications to asymptotic equivalence theory and nonparametric function estimation are discussed.
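The quantile coupling construction itself is short: set X = F⁻¹(Φ(Z)) for Z standard normal, so X and a matched normal variable live on the same probability space. A sketch for the symmetric binomial case, with illustrative parameters:

```python
import numpy as np
from scipy import stats

# Quantile coupling: X = F_n^{-1}(Phi(Z)) with Z ~ N(0,1) couples a
# Bin(n, 1/2) variable with a normal of matching mean and variance.
n = 400
binom = stats.binom(n, 0.5)
norm = stats.norm(n / 2, np.sqrt(n / 4))

rng = np.random.default_rng(5)
z = rng.standard_normal(20000)
u = stats.norm.cdf(z)
x = binom.ppf(u)  # coupled binomial variable
y = norm.ppf(u)   # matched normal variable (same uniform)

gap = np.max(np.abs(x - y))
print(gap)  # the coupling keeps |X - Y| uniformly small over the sample
```

Quantile coupling inequalities of the kind the abstract improves are bounds on exactly this gap, uniformly over the range of Z.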
A COMPLEMENT TO LE CAM’S THEOREM
Cited by 1 (0 self)
This paper examines asymptotic equivalence in the sense of Le Cam between density estimation experiments and the accompanying Poisson experiments. The significance of asymptotic equivalence is that all asymptotically optimal statistical procedures can be carried over from one experiment to the other. The equivalence given here is established under a weak assumption on the parameter space F. In particular, a sharp Besov smoothness condition on F is given which is sufficient for Poissonization: namely, F lies in a Besov ball B^α_{p,q}(M) with αp > 1/2. Examples show Poissonization is not possible whenever αp < 1/2. In addition, asymptotic equivalence of the density estimation model and the accompanying Poisson experiment is established for all compact subsets of C([0,1]^m), a condition which includes all Hölder balls with smoothness α > 0.
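Poissonization can be checked empirically: when the sample size is drawn as N ~ Poisson(n), the counts in disjoint bins become independent Poisson variables with means n·∫_bin f, which is the accompanying Poisson experiment. A sketch under the uniform density, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(6)
# Draw N ~ Poisson(n) i.i.d. points from f = uniform on [0,1] and record
# the counts in 4 equal bins; each count is then Poisson(n/4), and counts
# in disjoint bins are independent.
n, bins, reps = 200, 4, 20000
counts = np.empty((reps, bins))
for r in range(reps):
    N = rng.poisson(n)
    x = rng.uniform(0, 1, N)
    counts[r], _ = np.histogram(x, bins=bins, range=(0.0, 1.0))

means = counts.mean(axis=0)  # each near n/bins = 50
covs = np.cov(counts.T)      # near-diagonal: Poisson counts decouple
print(means)
print(np.round(covs, 1))
```

With a fixed sample size the bin counts would be negatively correlated (multinomial); the randomized sample size removes that dependence, which is what makes the Poisson experiment tractable.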