Results 1–10 of 42
Bayesian Experimental Design: A Review
Statistical Science, 1995
Abstract

Cited by 310 (1 self)
This paper reviews the literature on Bayesian experimental design, both for linear and nonlinear models. A unified view of the topic is presented by putting experimental design in a decision theoretic framework. This framework justifies many optimality criteria, and opens new possibilities. Various design criteria become part of a single, coherent approach.
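The decision-theoretic framing the review describes reduces design choice to maximizing an expected utility; for a linear-Gaussian model one common instance is Bayesian D-optimality, the log-determinant of the posterior precision. A minimal sketch (illustrative only; the design matrices and prior are hypothetical, not taken from the paper):

```python
import numpy as np

def bayes_d_criterion(X, prior_prec, noise_var=1.0):
    """Bayesian D-optimality score for a linear model y = X @ theta + noise:
    the log-determinant of the posterior precision of theta under a
    Gaussian prior with precision matrix prior_prec."""
    post_prec = prior_prec + X.T @ X / noise_var
    _, logdet = np.linalg.slogdet(post_prec)
    return logdet

# Hypothetical comparison: spreading design points beats clustering them
# when estimating an intercept and a slope.
prior = np.eye(2)
spread = np.column_stack([np.ones(4), [-1.0, -0.5, 0.5, 1.0]])
clustered = np.column_stack([np.ones(4), [0.1, 0.1, 0.2, 0.2]])
```

Maximizing this criterion over candidate designs is one of the optimality criteria the decision-theoretic framework justifies.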
Implementing approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations: A manual for the inla program
, 2008
Abstract

Cited by 293 (20 self)
Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalised) linear models, (generalised) additive models, smoothing-spline models, state-space models, semiparametric regression, spatial and spatio-temporal models, log-Gaussian Cox processes, geostatistical and geoadditive models. In this paper we consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form due to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, both in terms of convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations ...
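The building block behind the nested scheme is the classical Laplace approximation: a Gaussian matched to a density at its mode, with variance from the curvature of the log density there. A toy univariate sketch (illustrative only; the Beta posterior is a hypothetical stand-in, not the paper's latent Gaussian setting):

```python
import math

# Laplace approximation: match a Gaussian to a density at its mode, with
# variance given by the curvature of the log density there. Toy example:
# a Beta(12, 8) posterior, chosen because its exact standard deviation is
# known for comparison.
a, b = 12.0, 8.0
mode = (a - 1) / (a + b - 2)                                # argmax of the density
curvature = (a - 1) / mode**2 + (b - 1) / (1 - mode) ** 2   # -(log p)''(mode)
laplace_sd = 1.0 / math.sqrt(curvature)
exact_sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
```

Even in this crude form the Gaussian's standard deviation lands close to the exact value; the paper's contribution is applying such approximations, nested over hyperparameters, to whole latent Gaussian fields.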
Prediction With Gaussian Processes: From Linear Regression To Linear Prediction And Beyond
Learning and Inference in Graphical Models, 1997
Abstract

Cited by 231 (4 self)
The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads into a more general discussion of Gaussian processes in section 4. Section 5 deals with further issues, including hierarchical modelling and the setting of the parameters that control the Gaussian process, the covariance functions for neural network models and the use of Gaussian processes in classification problems. 1 Introduction In the last decade neural networks have been used to tackle regression and classification problems, with some notable successes. It has also been widely recognized that they form a part of a wide variety of nonlinear statistical techniques that can be used for ...
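The change of viewpoint the tutorial describes ends in the standard GP predictive equations; a minimal sketch of those equations (assuming a squared-exponential covariance and 1-D inputs; not code from the paper):

```python
import numpy as np

def sq_exp_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(x_train, y_train, x_test, noise_var=1e-2):
    """Posterior mean and variance of a zero-mean GP at the test inputs."""
    K = sq_exp_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    Ks = sq_exp_kernel(x_train, x_test)
    Kss = sq_exp_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)               # stable alternative to inverting K
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha                     # E[f(x_test) | data]
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)            # Var[f(x_test) | data]
    return mean, var

x = np.linspace(0.0, 2.0 * np.pi, 8)
mu, var = gp_predict(x, np.sin(x), np.array([np.pi / 2]))
```

The prior over functions is encoded entirely in the kernel; the Cholesky-based solves are the usual way to keep the matrix computations numerically stable.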
Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification
, 1997
Abstract

Cited by 153 (1 self)
Gaussian processes are a natural way of defining prior distributions over functions of one or more input variables. In a simple nonparametric regression problem, where such a function gives the mean of a Gaussian distribution for an observed response, a Gaussian process model can easily be implemented using matrix computations that are feasible for datasets of up to about a thousand cases. Hyperparameters that define the covariance function of the Gaussian process can be sampled using Markov chain methods. Regression models where the noise has a t distribution, and logistic or probit models for classification applications, can be implemented by additionally sampling latent values underlying the observations. Software is now available that implements these methods using covariance functions with hierarchical parameterizations. Models defined in this way can discover high-level properties of the data, such as which inputs are relevant to predicting the response.
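Sampling covariance hyperparameters as the abstract describes can be sketched with a simple random-walk Metropolis update on a log length-scale, under an implicit flat prior (an illustration only; the paper's implementation uses more sophisticated Markov chain methods):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_marginal(log_ell, x, y, noise_var=0.1):
    """GP log marginal likelihood (up to a constant) for a unit-variance
    squared-exponential covariance with length-scale exp(log_ell)."""
    ell = np.exp(log_ell)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)
    K += noise_var * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum()

x = np.linspace(0.0, 3.0, 15)
y = np.sin(2.0 * x)

samples, cur = [], 0.0                              # start at log length-scale 0
cur_lp = log_marginal(cur, x, y)
for _ in range(2000):
    prop = cur + 0.3 * rng.standard_normal()        # random-walk proposal
    prop_lp = log_marginal(prop, x, y)
    if np.log(rng.random()) < prop_lp - cur_lp:     # Metropolis accept step
        cur, cur_lp = prop, prop_lp
    samples.append(cur)
```

After burn-in, the chain wanders over length-scales consistent with the data rather than fixing a single point estimate, which is the Monte Carlo treatment of hyperparameters the paper advocates.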
Gaussian processes for ordinal regression
Journal of Machine Learning Research, 2004
Abstract

Cited by 117 (4 self)
We present a probabilistic kernel approach to ordinal regression based on Gaussian processes. A threshold model that generalizes the probit function is used as the likelihood function for ordinal variables. Two inference techniques, based on the Laplace approximation and the expectation propagation algorithm respectively, are derived for hyperparameter learning and model selection. We compare these two Gaussian process approaches with a previous ordinal regression method based on support vector machines on some benchmark and real-world data sets, including applications of ordinal regression to collaborative filtering and gene expression analysis. Experimental results on these data sets verify the usefulness of our approach.
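The threshold model the abstract mentions gives each ordinal level the probability that the noisy latent function value falls between two cut-points; a minimal sketch (hypothetical cut-points, not the paper's learned values):

```python
import math

def probit_ordinal_likelihood(f, j, thresholds, sigma=1.0):
    """P(y = j | f) under a probit threshold model: the latent function
    value f plus N(0, sigma^2) noise falls between cut-points b_{j-1} and
    b_j (with implicit -inf and +inf end cut-points)."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    hi = math.inf if j == len(thresholds) else thresholds[j]
    lo = -math.inf if j == 0 else thresholds[j - 1]
    upper = 1.0 if math.isinf(hi) else Phi((hi - f) / sigma)
    lower = 0.0 if math.isinf(lo) else Phi((lo - f) / sigma)
    return upper - lower

thresholds = [-1.0, 1.0]      # hypothetical cut-points for three ordinal levels
probs = [probit_ordinal_likelihood(0.0, j, thresholds) for j in range(3)]
```

With a single threshold this reduces to the ordinary probit likelihood, which is the sense in which the model generalizes the probit function.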
Bayesian Treed Gaussian Process Models with an Application to Computer Modeling
Journal of the American Statistical Association, 2007
Abstract

Cited by 87 (19 self)
This paper explores nonparametric and semiparametric nonstationary modeling methodologies that couple stationary Gaussian processes and (limiting) linear models with treed partitioning. Partitioning is a simple but effective method for dealing with nonstationarity. Mixing between full Gaussian processes and simple linear models can yield a more parsimonious spatial model while significantly reducing computational effort. The methodological developments and statistical computing details which make this approach efficient are described in detail. Illustrations of our model are given for both synthetic and real datasets. Key words: recursive partitioning, nonstationary spatial model, nonparametric regression, Bayesian model averaging
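The treed partitioning the abstract relies on can be illustrated with a one-split toy version: choose the split that minimizes the summed squared error of independent linear fits on each side (a sketch only, with linear leaves standing in for the "limiting linear models"; the paper's model places full GPs or linear models at the leaves and averages over trees):

```python
import numpy as np

rng = np.random.default_rng(2)

def leaf_sse(x, y):
    """Sum of squared residuals of a straight-line fit within one leaf."""
    coef = np.polyfit(x, y, 1)
    r = y - np.polyval(coef, x)
    return r @ r

def best_split(x, y):
    """One-level treed partition: scan candidate splits and keep the one
    whose two linear leaves have the smallest total squared error."""
    best_sse, best_s = np.inf, None
    for s in np.quantile(x, np.linspace(0.2, 0.8, 13)):
        left, right = x <= s, x > s
        if left.sum() < 3 or right.sum() < 3:
            continue
        sse = leaf_sse(x[left], y[left]) + leaf_sse(x[right], y[right])
        if sse < best_sse:
            best_sse, best_s = sse, s
    return best_s

# Piecewise-linear data with a kink at zero (hypothetical test function).
x = np.sort(rng.uniform(-1, 1, 200))
y = np.where(x < 0, 0.0, 2 * x) + 0.05 * rng.standard_normal(200)
split = best_split(x, y)
```

The recovered split sits near the kink, which is the intuition behind partitioning as a cure for nonstationarity: each region gets its own simple, stationary model.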
Bayesian Inference for the Uncertainty Distribution
, 2000
Abstract

Cited by 57 (7 self)
In this paper we are interested in the case where the computer model is computationally expensive, to the extent that the Monte Carlo approach is not practical. This is because the sample of outputs will usually need to be large, to be certain of obtaining accurate inferences about the distribution of Y. Thus we need to find a way of learning about the uncertainty distribution without having to run the algorithm a large number of times. We consider a Bayesian approach, which uses the information from each single evaluation to learn about the algorithm as a whole, and so reduces the total number of evaluations needed. One immediate question is whether deriving the distribution of Y should be of interest when the model is unlikely to predict reality correctly. Firstly we note that even a good model can be rendered ineffective by an unknown input, if the resulting uncertainty in Y is high. In general our goal is simply to quantify the information that is lost by not knowing the exact value of an input in a model. A decision to invest more resources in learning the true value of an input could follow from an uncertainty analysis. However, we have made the simplification here that the user of the model would only evaluate the model at X given the value of X. Even if X is known, the user may choose to run the model at a range of inputs, dependent on X, ...
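The brute-force baseline the paper wants to avoid simply pushes draws of the uncertain input X through the model; a sketch with a cheap hypothetical stand-in for the expensive simulator:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(x):
    """Cheap stand-in for an expensive computer model (hypothetical)."""
    return np.exp(-x) + 0.5 * x**2

# Plain Monte Carlo uncertainty analysis: propagate input uncertainty
# X ~ N(1, 0.2^2) through the model with many runs - exactly the cost
# the paper's Bayesian approach is designed to avoid.
X = rng.normal(1.0, 0.2, size=10_000)
Y = simulator(X)
mean_Y, sd_Y = Y.mean(), Y.std()
```

When each run takes hours rather than microseconds, ten thousand evaluations are infeasible, which motivates learning about the whole function from a handful of runs.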
Parameter space exploration with Gaussian process trees
Proceedings of the International Conference on Machine Learning (pp. 353–360), Omnipress & ACM Digital Library, 2004
Abstract

Cited by 21 (4 self)
Computer experiments often require dense sweeps over input parameters to obtain a qualitative understanding of their response.
Uncertainty in prior elicitations: a nonparametric approach
, 2005
Abstract

Cited by 18 (1 self)
A key task in the elicitation of expert knowledge is to construct a specific elicited distribution from the finite, and usually small, number of statements that have been elicited from the expert. These statements typically specify some quantiles of the distribution, perhaps the mode and sometimes the mean or other moments. Such statements are not enough to identify the expert’s probability distribution uniquely, and the usual approach is to fit some member of a convenient parametric family. There are two clear deficiencies in this solution. First, the expert’s beliefs are forced to fit the parametric family. Second, no account is then taken of the many other possible distributions that might have fitted the elicited statements equally well. We present an approach which tackles both of these deficiencies. Our model is nonparametric, allowing the expert’s distribution to take any continuous form. It also quantifies the uncertainty in the resulting elicited distribution. Formally, the expert’s density function is treated as an unknown function, about which we make inference. The result is a posterior distribution for the expert’s density function. The posterior mean serves as a ‘best fit’ elicited distribution, while the variance around this fit expresses the uncertainty in the elicitation. When data become available, uncertainty about the expert’s posterior distribution induced by the uncertainty in their prior distribution can then be described. We also briefly consider the issue of the imprecision in any elicited probability judgment, and suggest a modification of our model to account for this.
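The "usual approach" the abstract critiques, forcing elicited statements into a convenient parametric family, can be sketched by fitting a normal distribution to two hypothetical elicited quartiles:

```python
from statistics import NormalDist

# Hypothetical elicited statements: the expert's 25th and 75th percentiles
# are 10 and 20. Fitting a normal distribution to them illustrates the
# parametric approach whose deficiencies the paper addresses.
q25, q75 = 10.0, 20.0
z75 = NormalDist().inv_cdf(0.75)        # upper-quartile z-score, ~0.6745
mu = (q25 + q75) / 2.0                  # symmetry pins the mean at the midpoint
sigma = (q75 - q25) / (2.0 * z75)
fitted = NormalDist(mu, sigma)
```

The fitted normal reproduces both elicited quantiles exactly, yet so would infinitely many skewed or heavy-tailed distributions; ignoring those alternatives is precisely the second deficiency the paper's nonparametric model addresses.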