Results 1-10 of 23
Wavelet shrinkage: Asymptopia?
Journal of the Royal Statistical Society, Ser. B, 1995
Abstract
Cited by 297 (36 self)
Considerable effort has been directed recently to develop asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons: sometimes, similarity to known methods; sometimes, computational intractability; and sometimes, lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount √(2 log n)/√n. The method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions (e.g. pointwise error, global error measured in Lp norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
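The shrinkage rule described in the abstract, translating each empirical wavelet coefficient toward the origin by √(2 log n)/√n, is soft thresholding at the universal threshold. A minimal numpy sketch (the function names and the sigma/√n noise normalization are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def soft_threshold(w, lam):
    # Move each coefficient toward the origin by lam; coefficients
    # smaller than lam in magnitude are set exactly to zero.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def universal_shrink(coeffs, n, sigma=1.0):
    # Universal threshold sqrt(2 log n) / sqrt(n), assuming each
    # empirical coefficient carries noise of level sigma / sqrt(n).
    lam = sigma * np.sqrt(2.0 * np.log(n)) / np.sqrt(n)
    return soft_threshold(coeffs, lam)
```

Coefficients whose magnitude falls below the threshold are zeroed out entirely, which is what makes the resulting curve estimate spatially adaptive.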
Limit distribution theory for maximum likelihood estimation of a log-concave density, 2008
Abstract
Cited by 24 (9 self)
We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f0 = exp ϕ0 where ϕ0 is a concave function on R. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions …
Theoretical properties of the log-concave maximum likelihood estimator of a multidimensional density
Electron. J. Statist., 2010
Asymptotic normality of the L1 error of the Grenander estimator
Ann. Statist., 2000
Abstract
Cited by 19 (7 self)
In Groeneboom (1985, 1989) a jump process was introduced that can be used (among other things) to study the asymptotic properties of the Grenander estimator of a monotone density. In this paper we derive the asymptotic normality of a suitably rescaled version of the L1 error of the Grenander estimator, using properties of this jump process.
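The Grenander estimator itself is simple to compute: it is the left derivative of the least concave majorant of the empirical distribution function. A pure-numpy sketch, assuming a decreasing density supported on [0, ∞) (the function name and return convention are illustrative, not from the paper):

```python
import numpy as np

def grenander(samples):
    # Grenander estimator of a decreasing density on [0, inf):
    # the slopes of the least concave majorant (LCM) of the
    # empirical distribution function.
    x = np.sort(np.asarray(samples, dtype=float))
    n = x.size
    px = np.concatenate(([0.0], x))   # ECDF x-coordinates, origin included
    py = np.arange(n + 1) / n         # ECDF heights 0, 1/n, ..., 1
    hull = [0]                        # indices of LCM vertices
    for i in range(1, n + 1):
        # Pop the last vertex while it lies on or below the chord
        # from the previous vertex to point i (keeps the chain concave).
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            if (py[b] - py[a]) * (px[i] - px[a]) <= (py[i] - py[a]) * (px[b] - px[a]):
                hull.pop()
            else:
                break
        hull.append(i)
    knots = px[hull]
    slopes = np.diff(py[hull]) / np.diff(knots)  # density value on each piece
    return knots, slopes   # f_hat(t) = slopes[j] on (knots[j], knots[j+1]]
```

The returned slopes are automatically nonincreasing, so the estimate is a decreasing step density with no tuning parameter.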
Estimation of a function under shape restrictions. Applications to reliability
Ann. Statist., 2005
Abstract
Cited by 17 (1 self)
This paper deals with a nonparametric shape-respecting estimation method for U-shaped or unimodal functions. A general upper bound for the non-asymptotic L1-risk of the estimator is given. The method is applied to the shape-respecting estimation of several classical functions, among them typical intensity functions encountered in the reliability field. In each case, we derive from our upper bound the spatially adaptive property of our estimator with respect to the L1 metric: it approximately behaves as the best variable bin-width histogram of the function under estimation.
Variable kernel estimates: On the impossibility of tuning the parameters
in: High-Dimensional Probability II, 2000
Abstract
Cited by 9 (1 self)
For the standard kernel density estimate, it is known that one can tune the bandwidth such that the expected L1 error is within a constant factor of the optimal L1 error (obtained when one is allowed to choose the bandwidth with knowledge of the density). In this paper, we pose the same problem for variable bandwidth kernel estimates, where the bandwidths are allowed to depend upon the location. We show in particular that for positive kernels on the real line, for any data-based bandwidth, there exists a density for which the ratio of expected L1 error over optimal L1 error tends to infinity. Thus, the problem of tuning the variable bandwidth in an optimal manner is "too hard". Moreover, from the class of counterexamples exhibited in the paper, it appears that placing conditions on the densities (monotonicity, convexity, smoothness) does not help.
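To fix ideas, a variable bandwidth kernel estimate simply lets the smoothing scale depend on the data point. A minimal Gaussian-kernel sketch in numpy (the function name and the one-bandwidth-per-observation convention are illustrative assumptions):

```python
import numpy as np

def variable_kde(data, bandwidths, grid):
    # Kernel estimate f(t) = (1/n) * sum_i K((t - X_i) / h_i) / h_i
    # with a Gaussian kernel K and one bandwidth h_i per data point.
    x = np.asarray(data, dtype=float)[:, None]
    h = np.asarray(bandwidths, dtype=float)[:, None]
    z = (np.asarray(grid, dtype=float)[None, :] - x) / h
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (k / h).mean(axis=0)
```

Each h_i rescales its own kernel, so dense regions can use small bandwidths and sparse tails large ones; the paper's negative result is that no data-based rule for choosing the h_i stays within a constant factor of the optimal L1 error uniformly over densities.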
Least squares estimators of the mode of a unimodal regression function
Abstract
Cited by 7 (0 self)
In this paper, we consider nonparametric least squares estimators of the mode of an unknown unimodal regression function. We establish almost sure convergence of these estimators with nearly optimal convergence rates, under the assumption of exponential tails for the error distributions.
On the risk of estimates for block decreasing densities
J. Mult. Anal., 2003
Abstract
Cited by 5 (0 self)
A density f = f(x_1, ..., x_d) on [0, ∞)^d is block decreasing if, for each j ∈ {1, ..., d}, it is a decreasing function of x_j when all other components are held fixed. Let us consider the class of all block decreasing densities on [0, 1]^d bounded by B. We study the minimax risk over this class using n i.i.d. observations, the loss being measured by the L1 distance between the estimate and the true density. We prove that if S = log(1 + B), lower bounds for the risk are of the form C(S^d/n)^{1/(d+2)}, where C is a function of d only. We also prove that a suitable histogram with unequal bin widths, as well as a variable kernel estimate, achieve the optimal multivariate rate. We present a procedure for choosing all parameters in the kernel estimate automatically without losing the minimax optimality, even if B and the support of f are unknown.
Stabilization of flicker-like effects in image sequences through local contrast correction
SIAM Journal on Imaging Sciences
Abstract
Cited by 4 (1 self)
In this paper, we address the problem of the restoration of image sequences which have been affected by local intensity modifications (local contrast changes). Such artifacts can be encountered in particular in biological or archive film sequences, and are usually due to inconsistent exposures or sparse time sampling. In order to reduce such local artifacts, we introduce a local stabilization operator, called LStab, which acts as a time filtering on image patches and relies on a similarity measure which is robust to contrast changes. Thereby, this operator is able to take motion into account without relying on a sophisticated motion estimation procedure. The efficiency of the stabilization is shown on various sequences. The experimental results compare favorably with state-of-the-art approaches.
Global risk bounds and adaptation in univariate convex regression, 2013
Abstract
Cited by 4 (2 self)
We consider the problem of nonparametric estimation of a convex regression function φ0. We study global risk bounds and adaptation properties of the least squares estimator (LSE) of φ0. Under the natural squared error loss, we show that the risk of the LSE is bounded from above by n^{-4/5}, up to a multiplicative factor that is logarithmic in n. When φ0 is convex and piecewise affine with k knots, we establish adaptation of the LSE by showing that its risk is bounded from above by k^{5/4}/n, up to logarithmic multiplicative factors. On the other hand, when φ0 has curvature, we show that no estimator can have risk smaller than a constant multiple of n^{-4/5}, in a very strong sense, by proving a "local" minimax lower bound. We also study the case of model misspecification, where we show that the LSE exhibits the same global behavior provided the loss is measured from the closest convex projection of the true regression function. In addition to the convex LSE, we also provide risk bounds for a natural sieved LSE. In the process of proving our results, we establish some new results on the covering numbers of classes of convex functions, which are of independent interest.