Results 1–10 of 136
Posterior consistency of Dirichlet mixtures in density estimation. Ann. Statist., 1999.
Abstract

Cited by 120 (21 self)
A Dirichlet mixture of normal densities is a useful choice of prior distribution on densities in the problem of Bayesian density estimation. In recent years, efficient Markov chain Monte Carlo methods for computing the posterior distribution have been developed and applied to data arising from many fields of interest. The important issue of consistency was, however, left open. In this paper, we settle this issue in the affirmative.
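As a concrete illustration of the prior studied in this paper, one can draw a random density from a truncated stick-breaking representation of a Dirichlet mixture of normals. The kernel bandwidth, base measure and truncation level below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def dp_mixture_density(x, alpha=1.0, sigma=0.5, n_atoms=200, rng=None):
    """Draw one random density from a (truncated) Dirichlet mixture of
    normals via stick-breaking, evaluated on the grid x.
    All parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Stick-breaking weights: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k}(1 - v_j)
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    # Atom locations drawn from the base measure, here taken to be N(0, 1)
    mu = rng.normal(0.0, 1.0, size=n_atoms)
    # Mixture of normal kernels centred at the atoms
    kernels = np.exp(-0.5 * ((x[:, None] - mu[None, :]) / sigma) ** 2)
    kernels /= sigma * np.sqrt(2 * np.pi)
    return kernels @ w

x = np.linspace(-4, 4, 401)
f = dp_mixture_density(x)
print(f.sum() * (x[1] - x[0]))  # total mass ≈ 1, up to truncation and grid error
```

Each draw is an infinite (here truncated) mixture of normals whose mixing distribution is a Dirichlet process realization; MCMC methods for this model target the posterior over such mixing distributions.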
Convergence rates of posterior distributions. Ann. Statist., 2000.
Abstract

Cited by 110 (19 self)
We consider the asymptotic behavior of posterior distributions and Bayes estimators for infinite-dimensional statistical models. We give general results on the rate of convergence of the posterior measure. These are applied to several examples, including priors on finite sieves, log-spline models, Dirichlet processes and interval censoring.
Entropies and rates of convergence for maximum likelihood and Bayes estimation for mixtures of normal densities. Ann. Statist., 2001.
Abstract

Cited by 64 (12 self)
We study the rates of convergence of the maximum likelihood estimator (MLE) and the posterior distribution in density estimation problems where the densities are location or location-scale mixtures of normal distributions with the scale parameter lying between two positive numbers. The true density is also assumed to lie in this class, with the true mixing distribution either compactly supported or having sub-Gaussian tails. We obtain bounds on Hellinger bracketing entropies for this class, and from these bounds we deduce the convergence rates of (sieve) MLEs in Hellinger distance. The rate turns out to be (log n)^κ / √n, where κ ≥ 1 is a constant that depends on the type of mixture and the choice of the sieve. Next, we consider a Dirichlet mixture of normals as a prior on the unknown density. We estimate the prior probability of a certain Kullback-Leibler type neighborhood and then invoke a general theorem that computes the posterior convergence rate in terms of the growth rate of the Hellinger entropy and the concentration rate of the prior. The posterior distribution is also seen to converge at the rate (log n)^κ / √n, where κ now depends on the tail behavior of the base measure of the Dirichlet process.
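The rate (log n)^κ / √n quoted in this abstract is only a polylogarithmic factor slower than the parametric rate 1/√n; a few lines of arithmetic make this concrete (κ = 1.5 is an arbitrary illustrative value, not one derived in the paper):

```python
import math

def mixture_rate(n, kappa=1.0):
    """Near-parametric rate (log n)^kappa / sqrt(n); kappa >= 1 depends on
    the mixture type and the sieve. Illustrative arithmetic only."""
    return math.log(n) ** kappa / math.sqrt(n)

for n in (10**2, 10**4, 10**6):
    # The rate shrinks with n, lagging 1/sqrt(n) only by a polylog factor
    print(n, mixture_rate(n, kappa=1.5), 1 / math.sqrt(n))
```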
Convergence rates of posterior distributions for non-i.i.d. observations. Ann. Statist., 2007.
Abstract

Cited by 57 (6 self)
We consider the asymptotic behavior of posterior distributions and Bayes estimators based on observations that are not required to be either independent or identically distributed. We give general results on the rate of convergence of the posterior measure relative to distances derived from a testing criterion. We then specialize our results to independent, non-identically distributed observations, Markov processes, stationary Gaussian time series and the white noise model. We apply our general results to several examples of infinite-dimensional statistical models, including nonparametric regression with normal errors, binary regression, Poisson regression, an interval censoring model, Whittle estimation of the spectral density of a time series and a nonlinear autoregressive model. Let (P(n)θ : θ ∈ Θ) be a sequence of statistical experiments with observations X(n), where the parameter set Θ is arbitrary and n is an indexing parameter, usually the sample size. We put a prior distribution Πn on θ ∈ Θ and study the rate of convergence of the posterior.
The interplay of Bayesian and frequentist analysis. Statist. Sci., 2004.
Abstract

Cited by 49 (0 self)
Statistics has struggled for nearly a century over the issue of whether the Bayesian or frequentist paradigm is superior. This debate is far from over and, indeed, should continue, since there are fundamental philosophical and pedagogical issues at stake. At the methodological level, however, the fight has become considerably muted, with the recognition that each approach has a great deal to contribute to statistical practice and each is actually essential for full development of the other approach. In this article, we embark upon a rather idiosyncratic walk through some of these issues. Key words and phrases: Admissibility; Bayesian model checking; conditional frequentist; confidence intervals; consistency; coverage; design; hierarchical models; nonparametric
Posterior Consistency in Nonparametric Regression Problems under Gaussian Process Priors, 2004.
Abstract

Cited by 36 (2 self)
Posterior consistency can be thought of as a theoretical justification of the Bayesian method. One of the most popular approaches to nonparametric Bayesian regression is to put a nonparametric prior distribution on the unknown regression function using Gaussian processes. In this paper, we study posterior consistency in nonparametric regression problems using Gaussian process priors. We use an extension of the theorem of Schwartz (1965) for non-identically distributed observations, verifying its conditions when using Gaussian process priors for the regression function with normal or double exponential (Laplace) error distributions. We define a metric topology on the space of regression functions and then establish almost sure consistency of the posterior distribution. Our metric topology is weaker than the popular L1 topology. With additional assumptions, we prove almost sure consistency in the L1 topology. When the covariate (predictor) is assumed to be a random variable, we prove almost sure consistency for the joint density function of the response and predictor under the Hellinger metric.
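A minimal numerical sketch of the setting studied here — a Gaussian process prior on the regression function with normal errors — is the standard GP posterior-mean computation. The squared-exponential kernel, its hyperparameters, and the simulated data below are assumptions for illustration:

```python
import numpy as np

def sq_exp_kernel(a, b, length=0.3, amp=1.0):
    """Squared-exponential covariance; hyperparameters are illustrative."""
    return amp**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 40))
f_true = np.sin(2 * np.pi * x)                      # unknown regression function
y = f_true + rng.normal(0, 0.1, size=x.shape)       # normal errors, sd 0.1

# Posterior mean of the regression function at test points, under a
# zero-mean GP prior: m(x*) = K(x*, x) (K(x, x) + sigma^2 I)^{-1} y
xs = np.linspace(0, 1, 101)
K = sq_exp_kernel(x, x) + 0.1**2 * np.eye(len(x))
post_mean = sq_exp_kernel(xs, x) @ np.linalg.solve(K, y)
```

Consistency results of the kind proved in this paper say that, under suitable conditions, posterior mass concentrates near the true regression function as the number of observations grows.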
Modeling Regression Error with a Mixture of Polya Trees. Journal of the American Statistical Association, 2001.
Abstract

Cited by 34 (5 self)
We model the error distribution in the standard linear model as a mixture of absolutely continuous Polya trees constrained to have median zero. By considering a mixture, we smooth out the partitioning effects of a simple Polya tree, and the predictive error density has a derivative everywhere except zero. The error distribution is centered around a standard parametric family of distributions and may therefore be viewed as a generalization of standard models in which important, data-driven features, such as skewness and multimodality, are allowed. By marginalizing the Polya tree, exact inference is possible up to MCMC error.
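The partitioning effects the mixture smooths out can be seen by drawing from a single Polya tree. The sketch below generates the leaf-interval probabilities of one draw on dyadic subintervals of [0, 1], using the common Beta(c·m², c·m²) split parameters that ensure absolute continuity; this is an illustrative assumption, not the paper's centred, median-constrained construction:

```python
import numpy as np

def polya_tree_leaf_probs(depth=8, c=1.0, rng=None):
    """Draw one random probability vector over the 2**depth dyadic
    subintervals of [0, 1] from a Polya tree with Beta(c*m**2, c*m**2)
    splits at level m. Parameter values are illustrative."""
    rng = np.random.default_rng(2) if rng is None else rng
    probs = np.array([1.0])
    for m in range(1, depth + 1):
        # Random left-branch probability for every interval at this level
        left = rng.beta(c * m**2, c * m**2, size=probs.size)
        # Each interval splits its mass between its (left, right) children
        probs = np.column_stack((probs * left, probs * (1 - left))).ravel()
    return probs

p = polya_tree_leaf_probs()
print(p.size, p.sum())  # 2**8 = 256 leaf intervals carrying total mass 1
```

Plotting `p` as a histogram shows the jagged, partition-aligned density of a single tree; mixing over trees (as the paper does) averages these jumps away.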
Dirichlet Process Mixtures of Generalized Linear Models
Abstract

Cited by 31 (3 self)
We propose Dirichlet process mixtures of generalized linear models (DP-GLMs), a new method of nonparametric regression that accommodates continuous and categorical inputs and models the response variable locally by a generalized linear model. We give conditions for the existence and asymptotic unbiasedness of the DP-GLM regression mean function estimate; we then give a practical example for which those conditions hold. We evaluate the DP-GLM on several data sets, comparing it to modern methods of nonparametric regression, including regression trees and Gaussian processes.
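A simplified prior sketch of the DP-GLM idea — a stick-breaking mixture in which each atom carries its own local linear model, with covariate-dependent weights — might look like the following. The Gaussian kernel weighting and all parameter values are assumptions for illustration, not the authors' exact specification:

```python
import numpy as np

def dpglm_prior_draw(x, alpha=1.0, n_atoms=50, rng=None):
    """Prior draw of a regression mean function from a (truncated) DP
    mixture of Gaussian-response linear models: each atom has its own
    covariate centre and regression coefficients, and atoms near the
    covariate dominate the prediction. A hypothetical sketch, not the
    authors' exact model."""
    rng = np.random.default_rng(3) if rng is None else rng
    # Truncated stick-breaking weights
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    centres = rng.normal(0, 1, size=n_atoms)      # per-atom covariate locations
    coefs = rng.normal(0, 1, size=(n_atoms, 2))   # per-atom (intercept, slope)
    # Covariate-dependent weights: DP weight times a Gaussian kernel in x
    local = w * np.exp(-0.5 * (x[:, None] - centres[None, :]) ** 2)
    local /= local.sum(axis=1, keepdims=True)
    # Mean function: weighted average of the local linear predictions
    preds = coefs[:, 0][None, :] + coefs[:, 1][None, :] * x[:, None]
    return (local * preds).sum(axis=1)

x = np.linspace(-2, 2, 50)
m = dpglm_prior_draw(x)
```

Because each component is a full GLM, the same construction extends to categorical responses by swapping the linear predictor's identity link for, say, a logit link.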