Results 1–10 of 21
Wavelet shrinkage: Asymptopia?
Journal of the Royal Statistical Society, Ser. B, 1995
Abstract

Cited by 297 (36 self)
Considerable effort has been directed recently to develop asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons: sometimes similarity to known methods, sometimes computational intractability, and sometimes lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount √(2 log n)/√n. The method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions (e.g., pointwise error, global error measured in L^p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and bounded variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
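As a concrete (and deliberately simplified) illustration of the shrinkage rule in the abstract above, the following sketch applies soft thresholding at the universal level √(2 log n). The normalization by a known noise level sigma and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(coeffs, n, sigma=1.0):
    # Illustrative sketch: shrink empirical wavelet coefficients toward the
    # origin by the universal threshold sigma * sqrt(2 * log(n)); entries
    # below the threshold are set exactly to zero (spatial adaptivity).
    t = sigma * np.sqrt(2.0 * np.log(n))
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# Small, noise-like coefficients are killed; large ones survive (shrunk).
c = np.array([0.5, -0.8, 6.0, -7.0])
print(soft_threshold(c, n=256))
```

Setting small coefficients exactly to zero is what makes the estimate spatially adaptive: smooth regions are denoised aggressively while isolated large coefficients (edges, spikes) are retained.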
Near-optimal detection of geometric objects by fast multiscale methods
IEEE Trans. Inform. Theory, 2005
Abstract

Cited by 43 (10 self)
Abstract—We construct detectors for “geometric” objects in noisy data. Examples include a detector for the presence of a line segment of unknown length, position, and orientation in two-dimensional image data with additive white Gaussian noise. We focus on the following two issues: i) the optimal detection threshold, i.e., the signal strength below which no method of detection can be successful for large dataset size; and ii) the optimal computational complexity of a near-optimal detector, i.e., the complexity required to detect signals slightly exceeding the detection threshold. We describe a general approach to such problems which covers several classes of geometrically defined signals; for example, with one-dimensional data, signals having elevated mean on an interval, and, in d-dimensional data, signals with elevated mean on a rectangle, a ball, or an ellipsoid. In all these problems, we show that a naive or straightforward approach leads to detector thresholds and algorithms which are asymptotically far away from optimal. At the same time, a multiscale geometric analysis of these classes of objects allows us to derive asymptotically optimal detection thresholds and fast algorithms for near-optimal detectors. Index Terms—Beamlets, detecting hot spots, detecting line segments, Hough transform, image processing, maxima of Gaussian processes, multiscale geometric analysis, Radon transform.
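The role of the detection threshold discussed above can be illustrated with a standard fact (not the paper's beamlet machinery): the maximum of n independent standard Gaussians concentrates near √(2 log n), so a signal must exceed roughly that level to stand out from pure noise. A minimal simulation, with illustrative sample size and seed:

```python
import numpy as np

# Compare the max of n i.i.d. N(0,1) draws to the sqrt(2 log n) benchmark.
rng = np.random.default_rng(0)
n = 1_000_000
noise_max = rng.standard_normal(n).max()
benchmark = np.sqrt(2.0 * np.log(n))
print(f"observed max = {noise_max:.2f}, sqrt(2 log n) = {benchmark:.2f}")
```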
On the Estimation of Quadratic Functionals
Abstract

Cited by 41 (10 self)
We discuss the difficulties of estimating quadratic functionals based on observations Y(t) from the white noise model Y(t) = ∫₀ᵗ f(u) du + σW(t), t ∈ [0, 1], where W(t) is a standard Wiener process on [0, 1]. The optimal rates of convergence (as σ → 0) for estimating quadratic functionals under certain geometric constraints are found. Specifically, the optimal rates of estimating ∫₀¹ [f^(k)(x)]² dx under hyperrectangular constraints Γ = {f : |x_j(f)| ≤ Cj^{-p}} and weighted l_p-body constraints Γ_p = {f : Σ_j a_j |x_j(f)|^p ≤ C} are computed explicitly, where x_j(f) is the j-th Fourier–Bessel coefficient of the unknown function f. We introduce a new method for developing lower bounds based on testing two highly composite hypercubes, and discuss its advantages. The attainable lower bounds are found by applying the hardest one-dimensional approach as well as the hypercube method. We demonstrate that for estimating regular quadratic functionals (i.e., functionals which can be estimated at rate O(σ²)), the difficulties of the estimation are captured by the hardest one-dimensional subproblems, while for estimating nonregular quadratic functionals (i.e., no O(σ²)-consistent estimator exists), the difficulties are captured by certain finite-dimensional (the dimension goes to infinity as σ → 0) hypercube subproblems.
Sharp Adaptive Estimation of Quadratic Functionals
Abstract

Cited by 10 (0 self)
Estimation of a quadratic functional of a function observed in Gaussian white noise is considered. A data-dependent method for choosing the amount of smoothing is given. It is shown that the method is asymptotically sharp adaptive simultaneously for the "regular" and "irregular" case. The method is based on applying the Lepski method to choose between certain quadratic estimators. These quadratic estimators are given by the theory of optimal recovery, and they have the same form as the estimators which are minimax optimal among quadratic estimators. AMS 1991 subject classifications. Primary 62G07; secondary 62G20.
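The Lepski-type selection step mentioned above can be sketched generically as follows. The interface (a list of candidate estimates with known standard-deviation bounds, ordered so that variance increases and bias decreases along the list) and the constant `kappa` are illustrative assumptions, not the paper's exact construction.

```python
def lepski_select(estimates, sds, kappa=2.0):
    # Generic Lepski rule: pick the smallest index j (lowest variance)
    # whose estimate is statistically compatible with every higher-variance,
    # lower-bias candidate k > j. "Compatible" means the two estimates
    # differ by at most kappa times the larger standard deviation.
    m = len(estimates)
    for j in range(m):
        if all(abs(estimates[j] - estimates[k]) <= kappa * sds[k]
               for k in range(j + 1, m)):
            return j
    return m - 1

# The heavily biased low-variance candidates (10.0, 5.0) are rejected; the
# first candidate consistent with all rougher ones is selected.
print(lepski_select([10.0, 5.0, 1.0, 1.1, 0.9], [0.1, 0.2, 0.5, 1.0, 2.0]))  # prints 2
```

The appeal of the rule is that it is fully data-dependent: it needs only the candidate estimates and their variance bounds, not the unknown smoothness class.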
On nonparametric tests of positivity/monotonicity/convexity, 2002
Abstract

Cited by 4 (0 self)
We consider the problem of estimating the distance from an unknown signal, observed in a white-noise model, to convex cones of positive/monotone/convex functions. We show that, when the unknown function belongs to a Hölder class, the risk of estimating the L_r-distance, 1 ≤ r < ∞, from the signal to a cone is essentially the same (up to a logarithmic factor) as that of estimating the signal itself. The same risk bounds hold for the tests of positivity, monotonicity, and convexity of the unknown signal. We also provide an estimate of the distance to the cone of positive functions for which the risk is smaller, by a logarithmic factor, than that of the “plug-in” estimate.
Most TI discussion papers can be downloaded at
Abstract
ABSTRACT. We propose and study a class of regression models in which the mean function is specified parametrically, as in existing regression methods, but the residual distribution is modeled nonparametrically by a kernel estimator, without imposing any assumption on its distribution. This specification differs from existing semiparametric regression models. The asymptotic properties of the likelihood and the maximum likelihood estimate (MLE) under this semiparametric model are studied. We show that under some regularity conditions, the MLE under this model is consistent (as compared to the possible pseudo-consistency of parameter estimation under existing parametric regression models), and is asymptotically normal with rate √n and efficient. The nonparametric pseudo-likelihood ratio has the Wilks property as the true likelihood ratio does. Simulated examples are presented to evaluate the accuracy of the proposed semiparametric MLE method. Key words: information bound, kernel density estimator, maximum likelihood estimate, nonlinear regression, semiparametric model, U-statistic, Wilks property. Running heading: Semiparametric regression with kernel errors.
On Testing Positivity/Monotonicity/Convexity of Nonparametric Signals, 1999
Abstract
The problem. The problem addressed in this paper is as follows: let a nonparametric signal f : [0, 1] → R be observed according to the standard “signal + white noise” model, so that the observation is a realization of the Gaussian random process X^f_n(·) on [0, 1]:
Quadratic functional estimation in inverse problems, 2009
Abstract
We consider in this paper a Gaussian sequence model of observations Y_i, i ≥ 1, having mean (or signal) θ_i and variance σ_i growing polynomially like i^γ, γ > 0. This model describes a large panel of inverse problems. We estimate the quadratic functional Σ_{i≥1} θ_i² of the unknown signal when the signal belongs to ellipsoids of both finite smoothness (polynomial weights i^α, α > 0) and infinite smoothness (exponential weights e^{βi^r}, β > 0, 0 < r ≤ 2). We propose a Pinsker-type projection estimator in each case and study its quadratic risk. When the signal is sufficiently smoother than the difficulty of the inverse problem (α > γ + 1/4, or in the case of exponential weights), we obtain the parametric rate and the efficiency constant associated to it. Moreover, we give upper bounds on the second-order term in the risk and conjecture that they are asymptotically sharp minimax. When the signal is finitely smooth with α ≤ γ + 1/4, we compute nonparametric upper bounds on the risk, and we conjecture that the constant is asymptotically sharp.
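A projection estimator of the kind described above can be sketched in a few lines; the signal, noise scaling, truncation level, and constants below are all illustrative choices rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n_coords, n_keep, gamma = 2000, 60, 0.25

# Gaussian sequence model: Y_i = theta_i + sigma_i * xi_i, with the noise
# level sigma_i growing polynomially in the index, here like i^gamma.
i = np.arange(1, n_coords + 1)
theta = i ** -1.5                  # an illustrative fast-decaying signal
sigma = 0.05 * i ** gamma
Y = theta + sigma * rng.standard_normal(n_coords)

# Projection estimator of the quadratic functional sum_i theta_i^2:
# keep the first n_keep coordinates and subtract the known variances,
# which makes Y_i^2 - sigma_i^2 an unbiased estimate of theta_i^2.
Q_hat = np.sum(Y[:n_keep] ** 2 - sigma[:n_keep] ** 2)
Q_true = np.sum(theta ** 2)
print(f"estimate = {Q_hat:.3f}, truth = {Q_true:.3f}")
```

The truncation level trades a squared-bias term (the discarded tail Σ_{i>n_keep} θ_i²) against a variance term that grows with the number of retained, increasingly noisy coordinates.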