Results 1–10 of 20
Ideal spatial adaptation by wavelet shrinkage
 Biometrika
, 1994
Abstract

Cited by 1251 (5 self)
With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable-knot spline, or variable bandwidth kernel, to the unknown function. Estimation with the aid of an oracle offers dramatic advantages over traditional linear estimation by nonadaptive kernels; however, it is a priori unclear whether such performance can be obtained by a procedure relying on the data alone. We describe a new principle for spatially adaptive estimation: selective wavelet reconstruction. We show that variable-knot spline fits and piecewise-polynomial fits, when equipped with an oracle to select the knots, are not dramatically more powerful than selective wavelet reconstruction with an oracle. We develop a practical spatially adaptive method, RiskShrink, which works by shrinkage of empirical wavelet coefficients. RiskShrink mimics the performance of an oracle for selective wavelet reconstruction as well as it is possible to do so. A new inequality in multivariate normal decision theory, which we call the oracle inequality, shows that attained performance differs from ideal performance by at most a factor 2 log n, where n is the sample size. Moreover, no estimator can give a better guarantee than this. Within the class of spatially adaptive procedures, RiskShrink is essentially optimal. Relying only on the data, it comes within a factor log² n of the performance of piecewise-polynomial and variable-knot spline methods equipped with an oracle. In contrast, it is unknown how or if piecewise-polynomial methods could be made to function this well when denied access to an oracle and forced to rely on data alone.
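The wavelet-shrinkage idea described in this abstract can be sketched in a few lines. The sketch below is an illustrative toy, not the authors' RiskShrink rule: it applies the simpler universal threshold σ√(2 log n) by soft thresholding to the detail coefficients of an orthonormal Haar transform; all function names and constants are choices made for this illustration.

```python
import numpy as np

def haar_dwt(x):
    """Full orthonormal Haar transform (length must be a power of 2)."""
    coeffs = []
    approx = np.asarray(x, dtype=float)
    while len(approx) > 1:
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))  # detail coefficients
        approx = (even + odd) / np.sqrt(2)        # approximation
    coeffs.append(approx)                         # coarsest approximation
    return coeffs

def haar_idwt(coeffs):
    """Inverse of haar_dwt."""
    approx = coeffs[-1]
    for detail in reversed(coeffs[:-1]):
        even = (approx + detail) / np.sqrt(2)
        odd = (approx - detail) / np.sqrt(2)
        approx = np.empty(2 * len(detail))
        approx[0::2], approx[1::2] = even, odd
    return approx

def soft_threshold(c, lam):
    """Shrink coefficients towards the origin by lam."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def wavelet_shrink(y, sigma):
    """Denoise y by soft-thresholding Haar detail coefficients
    at the universal threshold sigma * sqrt(2 log n)."""
    n = len(y)
    lam = sigma * np.sqrt(2 * np.log(n))
    coeffs = haar_dwt(y)
    shrunk = [soft_threshold(d, lam) for d in coeffs[:-1]] + [coeffs[-1]]
    return haar_idwt(shrunk)
```

Because the Haar transform is orthonormal, i.i.d. Gaussian noise of level σ in the data stays at level σ in every coefficient, which is what makes a single threshold across scales meaningful.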
Wavelet shrinkage: asymptopia
 Journal of the Royal Statistical Society, Ser. B
, 1995
Abstract

Cited by 297 (36 self)
Considerable effort has been directed recently to developing asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons: sometimes similarity to known methods, sometimes computational intractability, and sometimes lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount √(2 log(n))/√n. The method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions (e.g. pointwise error, global error measured in Lᵖ norms, and pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
Optimal spatial adaptation for patch-based image denoising
 IEEE Trans. Image Process
, 2006
Abstract

Cited by 114 (10 self)
Abstract—A novel adaptive and patch-based approach is proposed for image denoising and representation. The method is based on a pointwise selection of small image patches of fixed size in the variable neighborhood of each pixel. Our contribution is to associate with each pixel the weighted sum of data points within an adaptive neighborhood, in a manner that balances the accuracy of approximation and the stochastic error at each spatial position. This method is general and can be applied under the assumption that there exist repetitive patterns in a local neighborhood of a point. By introducing spatial adaptivity, we extend the work earlier described by Buades et al., which can be considered as an extension of bilateral filtering to image patches. Finally, we propose a nearly parameter-free algorithm for image denoising. The method is applied to both artificially corrupted (white Gaussian noise) and real images, and the performance is very close to, and in some cases even surpasses, that of the already published denoising methods.
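The patch-weighted sum this abstract builds on can be illustrated with a minimal, non-adaptive sketch in the spirit of the Buades et al. NL-means filter it extends (not the authors' adaptive-neighborhood algorithm); the patch size, search-window size, and bandwidth `h` below are illustrative choices:

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.1):
    """Minimal NL-means-style denoiser: each pixel becomes a weighted
    average of pixels in a fixed search window, weighted by patch similarity."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    half = search // 2
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            # patch centered at (i, j) in the padded image
            p_ref = padded[i:i + patch, j:j + patch]
            num = den = 0.0
            for di in range(max(0, i - half), min(rows, i + half + 1)):
                for dj in range(max(0, j - half), min(cols, j + half + 1)):
                    q = padded[di:di + patch, dj:dj + patch]
                    # weight decays with the squared patch distance
                    w = np.exp(-np.sum((p_ref - q) ** 2) / h ** 2)
                    num += w * img[di, dj]
                    den += w
            out[i, j] = num / den
    return out
```

The paper's contribution, by contrast, is to choose the neighborhood adaptively per pixel rather than fixing `search` globally.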
Local adaptivity to variable smoothness for exemplar-based image denoising and representation
, 2005
Nonlinear estimation in anisotropic multi-index denoising
 Probab. Theory Related Fields 121
, 2001
Abstract

Cited by 34 (8 self)
In dimension one, it has long been observed that the minimax rates of convergence in the scale of Besov spaces present essentially two regimes (and a boundary): the dense and the sparse zones. In this paper, we consider the problem of denoising a function depending on a multidimensional variable (for instance an image), with anisotropic constraints of regularity (especially providing for a possible disparity of the inhomogeneous aspect in the different directions). The case of the dense zone has been investigated in the former paper [5]. Here, our aim is to investigate the case of the sparse region. This case is more delicate in some aspects. For instance, it was an open question to decide whether this sparse case, in the d-dimensional context, has to be split into different regions corresponding to different minimax rates. We will see here that the answer is negative: we still observe a sparse region, but with a unique minimax behavior, except, as usual, on the boundary. It is worthwhile to notice that our estimation procedure admits a choice of its parameters under which it is adaptive up to a logarithmic factor in the "dense case" ([5]) and minimax adaptive in the "sparse case". It is also interesting to observe that in the "sparse case", the embedding properties of the spaces are fundamental. Key words and phrases: nonparametric estimation, denoising, anisotropic smoothness, minimax rate of convergence, curse of dimensionality, anisotropic Besov spaces
Limit distribution theory for maximum likelihood estimation of a log-concave density
, 2008
Abstract

Cited by 24 (9 self)
We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f₀ = exp ϕ₀, where ϕ₀ is a concave function on R. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions ...
An alternative point of view on Lepski's method
, 1999
Abstract

Cited by 23 (2 self)
Lepski's method is a method for choosing a "best" estimator (in an appropriate sense) among a family of estimators, under suitable restrictions on this family. The subject of this paper is to give a nonasymptotic presentation of Lepski's method in the context of Gaussian regression models, for a collection of projection estimators on some nested family of finite-dimensional linear subspaces. It is also shown that a suitable tuning of the method allows one to asymptotically recover the best possible risk in the family.
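For the nested-projection setting this abstract describes, a Lepski-type selection rule can be sketched as follows. This is an illustrative toy in the Gaussian sequence model (observations y_i = θ_i + σ·z_i, projection estimators keeping the first m coordinates), not the paper's exact procedure; the constant `kappa` and the threshold kappa·σ²·m'·log n are hypothetical tuning choices:

```python
import numpy as np

def lepski_select(y, sigma, dims, kappa=2.0):
    """Lepski-type rule: pick the smallest dimension m such that the
    projection estimator of dimension m is close (within a noise-level
    threshold) to every higher-dimensional estimator in the family."""
    n = len(y)
    # nested projection estimators: keep the first m coordinates of y
    ests = {m: np.r_[y[:m], np.zeros(n - m)] for m in dims}
    for m in sorted(dims):
        ok = True
        for mp in dims:
            if mp > m:
                dist2 = np.sum((ests[m] - ests[mp]) ** 2)
                if dist2 > kappa * sigma**2 * mp * np.log(n):
                    ok = False  # m is too biased; try a larger dimension
                    break
        if ok:
            return m, ests[m]
    m = max(dims)
    return m, ests[m]
```

The logic mirrors the method's core idea: a candidate is rejected only when a richer model detects a discrepancy larger than what noise alone would explain.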
Unsupervised patch-based image regularization and representation
 In Proc. Eur. Conf. Comp. Vis. (ECCV'06)
, 2006
Abstract

Cited by 21 (2 self)
Abstract. A novel adaptive and patch-based approach is proposed for image regularization and representation. The method is unsupervised and based on a pointwise selection of small image patches of fixed size in the variable neighborhood of each pixel. The main idea is to associate with each pixel the weighted sum of data points within an adaptive neighborhood and to use image patches to take into account complex spatial interactions in images. In this paper, we consider the problem of adaptive neighborhood selection in a manner that balances the accuracy of the estimator and the stochastic error at each spatial position. Moreover, we propose a practical algorithm with no hidden parameter for image regularization that uses no library of image patches and no training algorithm. The method is applied to both artificially corrupted and real images, and the performance is very close to, and in some cases even surpasses, that of the best published denoising methods.
Exact adaptive pointwise estimation on Sobolev classes of densities
 ESAIM Probab. Statist
, 2001
Abstract

Cited by 13 (1 self)
Abstract. The subject of this paper is to estimate adaptively the common probability density of n independent, identically distributed random variables. The estimation is done at a fixed point x₀ ∈ R, over the density functions that belong to the Sobolev class W(β, L). We consider the adaptive problem setup, where the regularity parameter β is unknown and varies in a given set Bn. A sharp adaptive estimator is obtained, and the explicit asymptotic constant associated with its rate of convergence is found. Mathematics Subject Classification. 62N01, 62N02, 62G20.
Estimator selection with respect to Hellinger-type risks
, 2009
Abstract

Cited by 13 (5 self)
We observe a random measure N and aim at estimating its intensity s. This statistical framework allows us to deal simultaneously with the problems of estimating a density, the marginals of a multivariate distribution, the mean of a random vector with nonnegative components, and the intensity of a Poisson process. Our estimation strategy is based on estimator selection. Given a family of estimators of s based on the observation of N, we propose a selection rule, based on N as well, for selecting among these. Little assumption is made on the collection of estimators. The procedure offers the possibility to perform model selection and also to select among estimators associated with different model selection strategies. Besides, it provides an alternative to the T-estimators as studied recently in Birgé (2006). For illustration, we consider the problems of estimation and (complete) variable selection in various regression settings.