Results 1–10 of 24
Posterior Consistency in Nonparametric Regression Problems under Gaussian Process Priors
, 2004
Cited by 36 (2 self)
Posterior consistency can be thought of as a theoretical justification of the Bayesian method. One of the most popular approaches to nonparametric Bayesian regression is to put a nonparametric prior distribution on the unknown regression function using Gaussian processes. In this paper, we study posterior consistency in nonparametric regression problems using Gaussian process priors. We use an extension of the theorem of Schwartz (1965) for nonidentically distributed observations, verifying its conditions when using Gaussian process priors for the regression function with normal or double exponential (Laplace) error distributions. We define a metric topology on the space of regression functions and then establish almost sure consistency of the posterior distribution. Our metric topology is weaker than the popular L1 topology. With additional assumptions, we prove almost sure consistency when the space of regression functions is equipped with the L1 topology. When the covariate (predictor) is assumed to be a random variable, we prove almost sure consistency for the joint density function of the response and predictor using the Hellinger metric.
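The construction studied in this abstract, a Gaussian process prior on the regression function with normal errors, can be sketched as follows. This is a minimal illustration, not the paper's setup: the squared-exponential kernel, length scale, noise level and grid are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def se_kernel(x, y, length_scale=0.2, variance=1.0):
    """Squared-exponential covariance k(x, y) on a one-dimensional covariate."""
    d = x[:, None] - y[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

x = np.linspace(0.0, 1.0, 50)                     # covariate grid
K = se_kernel(x, x) + 1e-8 * np.eye(len(x))       # jitter for numerical stability
f = rng.multivariate_normal(np.zeros(len(x)), K)  # one prior draw of the regression function
y = f + rng.normal(scale=0.1, size=len(x))        # observations with normal errors
```

Posterior consistency then concerns whether, as more (x, y) pairs accrue, the posterior over such sample paths concentrates around the true regression function.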
Adaptive Bayesian estimation using a Gaussian random field with inverse Gamma bandwidth
 Ann. Statist.
, 2009
Cited by 32 (5 self)
We consider nonparametric Bayesian inference using a rescaled smooth Gaussian field as a prior for a multidimensional function. The rescaling is achieved using a Gamma variable and the procedure can be viewed as choosing an inverse Gamma bandwidth. The procedure is studied from a frequentist perspective in three statistical settings involving replicated observations (density estimation, regression and classification). We prove that the resulting posterior distribution shrinks to the distribution that generates the data at a speed which is minimax-optimal up to a logarithmic factor, whatever the regularity level of the data-generating distribution. Thus the hierarchical Bayesian procedure, with a fixed prior, is shown to be fully adaptive.
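A minimal sketch of the hierarchical prior this abstract describes, restricted to one dimension for brevity: a Gamma rescaling variable makes its reciprocal an inverse-Gamma bandwidth for a squared-exponential Gaussian field. The Gamma shape and scale below are illustrative, not the paper's conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

a = rng.gamma(shape=2.0, scale=1.0)        # Gamma rescaling variable
bandwidth = 1.0 / a                        # hence an inverse-Gamma bandwidth

x = np.linspace(0.0, 1.0, 40)
d = x[:, None] - x[None, :]
K = np.exp(-0.5 * (d / bandwidth) ** 2)    # covariance of the rescaled smooth field
w = rng.multivariate_normal(np.zeros(len(x)), K + 1e-8 * np.eye(len(x)))
```

Small draws of a (large bandwidth) give very smooth sample paths, large draws give wiggly ones; randomizing the bandwidth is what lets a single prior adapt across regularity levels.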
Bernstein–von Mises theorem for linear functionals of the density
, 2009
Cited by 21 (4 self)
In this paper, we study the asymptotic posterior distribution of linear functionals of the density. In particular, we give general conditions to obtain a semiparametric version of the Bernstein–von Mises theorem. We then apply this general result to nonparametric priors based on infinite-dimensional exponential families. As a byproduct, we also derive adaptive nonparametric rates of concentration of the posterior distributions under these families of priors over Sobolev and Besov classes.
Adaptive Bayesian density estimation with location-scale mixtures
 Electron. J. Statist
Cited by 20 (5 self)
We study convergence rates of Bayesian density estimators based on finite location-scale mixtures of exponential power distributions. We construct approximations of β-Hölder densities by continuous mixtures of exponential power distributions, leading to approximations of the β-Hölder densities by finite mixtures. These results are then used to derive posterior concentration rates, with priors based on these mixture models. The rates are minimax (up to a log n term) and, since the priors are independent of the smoothness, the rates are adaptive to the smoothness.
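The building block in this abstract, a finite location-scale mixture of exponential power (generalized Gaussian) densities, can be evaluated directly. The weights, locations, scales and shape β below are illustrative, not taken from the paper.

```python
import math
import numpy as np

def exp_power_pdf(x, mu, sigma, beta):
    """Exponential power density: beta / (2*sigma*Gamma(1/beta)) * exp(-(|x-mu|/sigma)^beta)."""
    c = beta / (2.0 * sigma * math.gamma(1.0 / beta))
    return c * np.exp(-(np.abs(x - mu) / sigma) ** beta)

def mixture_pdf(x, weights, mus, sigmas, beta):
    """Finite location-scale mixture with a common shape parameter beta."""
    return sum(w * exp_power_pdf(x, m, s, beta)
               for w, m, s in zip(weights, mus, sigmas))

x = np.linspace(-10.0, 10.0, 20001)
pdf = mixture_pdf(x, [0.5, 0.3, 0.2], [-2.0, 0.0, 3.0], [1.0, 0.5, 1.5], beta=1.5)
total = np.sum(pdf) * (x[1] - x[0])   # Riemann-sum check: should be close to 1
```

Setting β = 2 recovers normal mixtures and β = 1 Laplace mixtures; the shape parameter is what lets these kernels track the local smoothness of a β-Hölder target.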
Adaptive Bayesian multivariate density estimation with Dirichlet mixtures
, 2013
Cited by 18 (3 self)
We show that rate-adaptive multivariate density estimation can be performed using Bayesian methods based on Dirichlet mixtures of normal kernels with a prior distribution on the kernel’s covariance matrix parameter. We derive sufficient conditions on the prior specification that guarantee convergence to a true density at a rate that is minimax optimal for the smoothness class to which the true density belongs. No prior knowledge of smoothness is assumed. The sufficient conditions are shown to hold for the Dirichlet location mixture-of-normals prior with a Gaussian base measure and an inverse Wishart prior on the covariance matrix parameter. Locally Hölder smoothness classes and their anisotropic extensions are considered. Our study involves several technical novelties, including sharp approximation of finitely differentiable multivariate densities by normal mixtures and a new sieve on the space of such densities.
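A sketch of one draw from a prior of the kind described here, using a stick-breaking truncation of the Dirichlet process, a Gaussian base measure for the locations, and an inverse-Wishart covariance. The truncation level and all hyperparameters are illustrative, and a single shared covariance is used for simplicity.

```python
import numpy as np

rng = np.random.default_rng(2)

def inv_wishart(df, scale, rng):
    """Draw Sigma ~ IW(df, scale) via Sigma^{-1} ~ Wishart(df, scale^{-1})."""
    d = scale.shape[0]
    L = np.linalg.cholesky(np.linalg.inv(scale))
    X = L @ rng.standard_normal((d, df))          # df >= d i.i.d. N(0, V) columns
    return np.linalg.inv(X @ X.T)

T, alpha, d = 20, 1.0, 2                          # truncation level, concentration, dimension
v = rng.beta(1.0, alpha, size=T)                  # stick-breaking fractions
w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
w /= w.sum()                                      # renormalize after truncation
mus = rng.normal(0.0, 2.0, size=(T, d))           # locations from the Gaussian base measure
Sigma = inv_wishart(df=5, scale=np.eye(d), rng=rng)

k = rng.choice(T, p=w)                            # sample one observation from the mixture
obs = rng.multivariate_normal(mus[k], Sigma)
```

The random covariance plays the role of a bandwidth matrix, which is what the sufficient conditions on the prior specification control.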
On Bayesian adaptation
 Acta Appl. Math
, 2003
Cited by 17 (8 self)
We consider estimating a probability density p based on a random sample from this density by a Bayesian approach. The prior is constructed in two steps, by first constructing priors on a collection of models each expressing a qualitative prior guess on the true density, and next combining these priors in an overall prior by attaching prior weights to the models. The purpose is to show that the posterior distribution contracts to the true distribution at a rate that is (nearly) equal to the rate that would have been obtained had only the model that is most suitable for the true density been used. We study special model weights that yield this adaptation property in some generality. Examples include minimal discrete priors and finite-dimensional models, with special attention to scales of Banach spaces, such as Hölder spaces, spline models, and classes of densities that are not uniformly bounded away from zero or infinity.
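The two-step construction described above can be sketched generatively, with finite-dimensional models of increasing dimension standing in for the collection; the dimensions, weights and standard-normal within-model priors are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

dims = [1, 2, 4, 8]                         # dimension of each model in the collection
weights = np.array([0.4, 0.3, 0.2, 0.1])    # prior weights attached to the models

m = rng.choice(len(dims), p=weights)        # draw a model according to its prior weight
theta = rng.standard_normal(dims[m])        # then draw from that model's own prior
```

Adaptation means the posterior over m concentrates on the model best suited to the true density, so the overall contraction rate nearly matches the best single-model rate.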
Convergence rates for Bayesian density estimation of infinite-dimensional exponential families
 Ann. Statist
, 2006
Cited by 11 (1 self)
We study the rate of convergence of posterior distributions in density estimation problems for log-densities in periodic Sobolev classes characterized by a smoothness parameter p. The posterior expected density provides a nonparametric estimation procedure attaining the optimal minimax rate of convergence under Hellinger loss if the posterior distribution achieves the optimal rate over certain uniformity classes. A prior on the density class of interest is induced by a prior on the coefficients of the trigonometric series expansion of the log-density. We show that when p is known, the posterior distribution of a Gaussian prior achieves the optimal rate provided the prior variances die off sufficiently rapidly. For a mixture of normal distributions, the mixing weights on the dimension of the exponential family are assumed to be bounded below by an exponentially decreasing sequence. To avoid the use of infinite bases, we develop priors that cut off the series at a sample-size-dependent truncation point. When the degree of smoothness is unknown, a finite mixture of normal priors indexed by the smoothness parameter, which is also assigned a prior, produces the best rate. A rate-adaptive estimator is derived.
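One draw from a prior of this type can be sketched directly: a trigonometric expansion of the log-density on [0, 1], truncated at J terms, with independent Gaussian coefficients whose variances die off polynomially. The truncation point and decay rate below are illustrative stand-ins for the paper's conditions.

```python
import numpy as np

rng = np.random.default_rng(3)

J = 15                                         # truncation point of the series
x = np.linspace(0.0, 1.0, 2001)
# coefficient standard deviations decay like j^{-1.5}; one (cos, sin) pair per frequency j
theta = rng.normal(size=2 * J) / np.repeat(np.arange(1, J + 1), 2) ** 1.5

log_p = np.zeros_like(x)
for j in range(1, J + 1):
    log_p += theta[2 * j - 2] * np.cos(2 * np.pi * j * x)
    log_p += theta[2 * j - 1] * np.sin(2 * np.pi * j * x)

p = np.exp(log_p)
p /= np.sum(p) * (x[1] - x[0])                 # normalize numerically to a density
```

Faster variance decay produces smoother random log-densities, which is why matching the decay to p yields the optimal rate when p is known.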
Adaptive estimation of multivariate functions using conditionally Gaussian tensor-product spline priors
 Electronic Journal of Statistics
, 2012
Cited by 6 (0 self)
We investigate posterior contraction rates for priors on multivariate functions that are constructed using tensor-product B-spline expansions. We prove that using a hierarchical prior with an appropriate prior distribution on the partition size and Gaussian prior weights on the B-spline coefficients, procedures can be obtained that adapt to the degree of smoothness of the unknown function up to the order of the splines that are used. We take a unified approach including important nonparametric statistical settings like density estimation, regression, and classification.
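A draw from a simplified version of such a prior can be sketched with scipy's B-spline basis. This fixes the partition size rather than placing a prior on it, uses cubic splines and standard-normal weights, and works on [0, 1]^2; all of these are illustrative simplifications of the hierarchical prior described above.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(4)

degree, n_interior = 3, 6                      # cubic splines, fixed partition size
knots = np.concatenate((np.zeros(degree),
                        np.linspace(0.0, 1.0, n_interior + 2),
                        np.ones(degree)))      # clamped knot vector on [0, 1]
n_basis = len(knots) - degree - 1

def basis_matrix(x):
    """Evaluate every B-spline basis function at the points x."""
    eye = np.eye(n_basis)
    return np.array([BSpline(knots, eye[i], degree)(x)
                     for i in range(n_basis)]).T

gx = np.linspace(0.0, 0.99, 25)                  # grid in one coordinate
B = basis_matrix(gx)                             # (25, n_basis) design matrix
theta = rng.standard_normal((n_basis, n_basis))  # Gaussian prior weights on the coefficients
F = B @ theta @ B.T                              # tensor-product draw f(x_i, y_j) on the grid
```

In the hierarchical version, n_interior itself gets a prior, and averaging over partition sizes is what delivers adaptation to the unknown smoothness.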