CiteSeerX

Gaussian processes for machine learning (2006)

by C. E. Rasmussen and C. Williams
Results 11 - 20 of 720

Gaussian Processes for Object Categorization

by Ashish Kapoor, Kristen Grauman, Raquel Urtasun, Trevor Darrell - INT J COMPUT VIS , 2009
Cited by 60 (6 self)
Abstract not available.

Elliptical slice sampling

by Iain Murray, Ryan Prescott Adams, David J. C. MacKay - JMLR: W&CP
Cited by 60 (8 self)
Many probabilistic models introduce strong dependencies between variables using a latent multivariate Gaussian distribution or a Gaussian process. We present a new Markov chain Monte Carlo algorithm for performing inference in models with multivariate Gaussian priors. Its key properties are: 1) it has simple, generic code applicable to many models, 2) it has no free parameters, 3) it works well for a variety of Gaussian process based models. These properties make our method ideal for use while model building, removing the need to spend time deriving and tuning updates for more complex algorithms.
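The transition the abstract describes (simple, generic code, no free parameters) can be sketched in a few lines. Below is a minimal NumPy version of one elliptical slice sampling update; the function and argument names are illustrative, and the Gaussian prior is assumed zero-mean with a precomputed Cholesky factor of its covariance:

```python
import numpy as np

def ess_step(f, chol_Sigma, log_lik, rng):
    """One elliptical slice sampling update for a model with f ~ N(0, Sigma) prior."""
    n = f.shape[0]
    # Auxiliary draw from the prior defines an ellipse through the current state.
    nu = chol_Sigma @ rng.standard_normal(n)
    # Log-likelihood threshold (the slice height).
    log_y = log_lik(f) + np.log(rng.uniform())
    # Initial proposal angle and a shrinking bracket around it.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    theta_min, theta_max = theta - 2.0 * np.pi, theta
    while True:
        f_prop = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_prop) > log_y:
            return f_prop  # accept; no step-size or tuning parameters involved
        # Shrink the bracket towards theta = 0 (the current state) and retry.
        if theta < 0.0:
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)
```

Because theta = 0 recovers the current state exactly, the bracket-shrinking loop is guaranteed to terminate, which is why the method needs no tuning.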

Learning Stable Nonlinear Dynamical Systems With Gaussian Mixture Models

by S. Mohammad Khansari-zadeh, Aude Billard
Cited by 59 (17 self)
This paper presents a method to learn discrete robot motions from a set of demonstrations. We model a motion as a nonlinear autonomous (i.e., time-invariant) dynamical system (DS) and define sufficient conditions to ensure global asymptotic stability at the target. We propose a learning method, called the Stable Estimator of Dynamical Systems (SEDS), to learn the parameters of the DS so that all motions closely follow the demonstrations while ultimately reaching and stopping at the target. Time-invariance and global asymptotic stability at the target ensure that the system can respond immediately and appropriately to perturbations encountered during the motion. The method is evaluated through a set of robot experiments and on a library of human handwriting motions. Index Terms—Dynamical systems (DS), Gaussian mixture model, imitation learning, point-to-point motions, stability analysis.

Citation Context

...ms (DS) have been advocated as a powerful alternative to modeling robot motions [5], [14]. Existing approaches to the statistical estimation of f in Eq. 2 use either Gaussian Process Regression (GPR) [15], Locally Weighted Projection Regression (LWPR) [16], or Gaussian Mixture Regression (GMR) [14], where the parameters of the Gaussian mixture are optimized through Expectation Maximization (EM) [17]. G...

Slice sampling covariance hyperparameters of latent Gaussian models

by Iain Murray, Ryan Prescott Adams - IN ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 23 , 2010
Cited by 56 (10 self)
The Gaussian process (GP) is a popular way to specify dependencies between random variables in a probabilistic model. In the Bayesian framework the covariance structure can be specified using unknown hyperparameters. Integrating over these hyperparameters considers different possible explanations for the data when making predictions. This integration is often performed using Markov chain Monte Carlo (MCMC) sampling. However, with non-Gaussian observations standard hyperparameter sampling approaches require careful tuning and may converge slowly. In this paper we present a slice sampling approach that requires little tuning while mixing well in both strong- and weak-data regimes.
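The paper's specific reparameterisations are not reproduced here, but the univariate slice sampler they build on (Neal, 2003) is easy to sketch and illustrates the "little tuning" point: `w` is only an initial bracket width, not a critical step size. All names below are illustrative:

```python
import numpy as np

def slice_sample(x0, log_prob, w=1.0, rng=None):
    """One univariate slice sampling update with stepping-out (Neal, 2003)."""
    if rng is None:
        rng = np.random.default_rng()
    # Slice height: a uniform draw under the density at the current point.
    log_y = log_prob(x0) + np.log(rng.uniform())
    # Step out: randomly place an interval of width w around x0, then grow it
    # until both ends are outside the slice.
    left = x0 - rng.uniform() * w
    right = left + w
    while log_prob(left) > log_y:
        left -= w
    while log_prob(right) > log_y:
        right += w
    # Shrink: propose uniformly in the bracket until a point lies on the slice.
    while True:
        x1 = rng.uniform(left, right)
        if log_prob(x1) > log_y:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1
```

Rejected proposals only ever shrink the bracket towards the current state, so a poor choice of `w` slows the sampler down rather than breaking it.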

Beam Sampling for the Infinite Hidden Markov Model

by Jurgen Van Gael, Yunus Saatci, Yee Whye Teh, Zoubin Ghahramani
Cited by 52 (8 self)
The infinite hidden Markov model is a nonparametric extension of the widely used hidden Markov model. Our paper introduces a new inference algorithm for the infinite hidden Markov model called beam sampling. Beam sampling combines slice sampling, which limits the number of states considered at each time step to a finite number, with dynamic programming, which samples whole state trajectories efficiently. Our algorithm typically outperforms the Gibbs sampler and is more robust. We present applications of iHMM inference using the beam sampler on changepoint detection and text prediction problems.

Citation Context

... of finite models is replaced with Bayesian inference over the size of submodel used to explain data. Examples of successful applications of nonparametric Bayesian methods include Gaussian Processes (Rasmussen & Williams, 2005) for regression and classification, Dirichlet Process (DP) mixture models (Escobar & West, 1995; Rasmussen, 2000) for clustering heterogeneous data and density estimation, Indian Buffet Processes for...

Nonmyopic active learning of gaussian processes: An exploration-exploitation approach

by Andreas Krause, Carlos Guestrin - IN ICML , 2007
Cited by 51 (5 self)
When monitoring spatial phenomena, such as the ecological condition of a river, deciding where to make observations is a challenging task. In these settings, a fundamental question is when an active learning, or sequential design, strategy, where locations are selected based on previous measurements, will perform significantly better than sensing at an a priori specified set of locations. For Gaussian Processes (GPs), which often accurately model spatial phenomena, we present an analysis and efficient algorithms that address this question. Central to our analysis is a theoretical bound which quantifies the performance difference between active and a priori design strategies. We consider GPs with unknown kernel parameters and present a nonmyopic approach for trading off exploration, i.e., decreasing uncertainty about the model parameters, and exploitation, i.e., near-optimally selecting observations when the parameters are (approximately) known. We discuss several exploration strategies, and present logarithmic sample complexity bounds for the exploration phase. We then extend our algorithm to handle nonstationary GPs exploiting local structure in the model. A variational approach allows us to perform efficient inference in this class of nonstationary models. We also present extensive empirical evaluation on several real-world problems.
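The simplest point of contrast for the nonmyopic strategy analysed above is myopic uncertainty sampling: fit a GP and query wherever the posterior variance is largest. A sketch, assuming a squared-exponential kernel with known, fixed hyperparameters (unlike the paper, which treats the kernel parameters as unknown); all names are illustrative:

```python
import numpy as np

def rbf(X1, X2, ell=1.0, sf2=1.0):
    """Squared-exponential kernel between two sets of points (one per row)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell**2)

def next_query(X_obs, X_cand, noise=1e-4):
    """Index of the candidate location with the largest GP posterior variance.

    Note: a GP's predictive variance depends only on the observed locations,
    not the observed values, so no targets are needed for this selection rule.
    """
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_obs, X_cand)                     # cross-covariances
    v = np.linalg.solve(K, Ks)
    var = rbf(X_cand, X_cand).diagonal() - np.einsum('ij,ij->j', Ks, v)
    return int(np.argmax(var))
```

A priori design would instead fix every location before any measurement is taken; the paper's theoretical bound quantifies when the sequential rule is worth the extra machinery.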

Citation Context

...one needs a model of the spatial phenomenon itself. Gaussian processes (GPs) have been shown to be effective models for this purpose (Cressie, 1991; Rasmussen & Williams, 2006). Most previous work on observation selection in GPs has considered the a priori design problem, in which the locations are selected in advance prior to making observations (c.f., Guestrin et al. (20...

Relational learning with Gaussian processes

by Wei Chu, Vikas Sindhwani, Zoubin Ghahramani, S. Sathiya Keerthi - In NIPS 19 , 2007
Cited by 45 (10 self)
Correlation between instances is often modelled via a kernel function using input attributes of the instances. Relational knowledge can further reveal additional pairwise correlations between variables of interest. In this paper, we develop a class of models which incorporates both reciprocal relational information and input attributes using Gaussian process techniques. This approach provides a novel non-parametric Bayesian framework with a data-dependent covariance function for supervised learning tasks. We also apply this framework to semi-supervised learning. Experimental results on several real world data sets verify the usefulness of this algorithm.

Citation Context

...w instances in the entire input space are correlated. In this paper, we integrate relational information with input attributes in a non-parametric Bayesian framework based on Gaussian processes (GP) (Rasmussen & Williams, 2006), which leads to a data-dependent covariance/kernel function. We highlight the following aspects of our approach: 1) We propose a novel likelihood function for undirected linkages and carry out appro...

Most likely heteroscedastic Gaussian process regression

by Kristian Kersting, Christian Plagemann, Patrick Pfaff, Wolfram Burgard - In International Conference on Machine Learning (ICML , 2007
Cited by 44 (3 self)
This paper presents a novel Gaussian process (GP) approach to regression with input-dependent noise rates. We follow Goldberg et al.’s approach and model the noise variance using a second GP in addition to the GP governing the noise-free output value. In contrast to Goldberg et al., however, we do not use a Markov chain Monte Carlo method to approximate the posterior noise variance but a most likely noise approach. The resulting model is easy to implement and can directly be used in combination with various existing extensions of the standard GPs such as sparse approximations. Extensive experiments on both synthetic and real-world data, including a challenging perception problem in robotics, show the effectiveness of most likely heteroscedastic GP regression.

Citation Context

...non-stationary covariance functions, and sparse approximation can directly be adapted. In the present paper, we will exemplify this by combining our model with the projected process approximation (Rasmussen & Williams, 2006), which only represents a small subset of the data for parameter estimation and inference. As our experiments show, this can keep memory consumption low and speed up computations tremendously. Aside ...

A tutorial on Bayesian nonparametric models.

by Samuel J Gershman , David M Blei - Journal of Mathematical Psychology, , 2012
Cited by 42 (9 self)
A key problem in statistical modeling is model selection: how to choose a model at an appropriate level of complexity. This problem appears in many settings, most prominently in choosing the number of clusters in mixture models or the number of factors in factor analysis. In this tutorial we describe Bayesian nonparametric methods, a class of methods that side-steps this issue by allowing the data to determine the complexity of the model. This tutorial is a high-level introduction to Bayesian nonparametric methods and contains several examples of their application.
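The "let the data determine the complexity" idea is easy to see in the Chinese restaurant process, the partition distribution underlying the Dirichlet process mixture models such tutorials cover. A small illustrative sampler (the function and variable names are mine, not the tutorial's):

```python
import numpy as np

def crp_partition(n, alpha, rng):
    """Sample a partition of n items from the Chinese restaurant process."""
    counts = []                       # customers seated at each existing table
    assignments = []                  # table index chosen by each customer
    for i in range(n):
        # Customer i starts a new table with probability alpha / (i + alpha),
        # otherwise joins a table in proportion to its current size
        # (the "rich get richer" dynamic).
        probs = np.array(counts + [alpha], dtype=float) / (i + alpha)
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)          # open a new table
        else:
            counts[table] += 1
        assignments.append(table)
    return assignments, counts
```

The expected number of occupied tables grows only logarithmically in n, so the effective number of clusters adapts to the amount of data rather than being fixed in advance.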

Gaussian processes and reinforcement learning for identification and control of an autonomous blimp

by Jonathan Ko, Daniel J. Klein - in IEEE Intl. Conf. on Robotics and Automation (ICRA , 2007
Cited by 40 (8 self)
Blimps are a promising platform for aerial robotics and have been studied extensively for this purpose. Unlike other aerial vehicles, blimps are relatively safe and also possess the ability to loiter for long periods. These advantages, however, have been difficult to exploit because blimp dynamics are complex and inherently non-linear. The classical approach to system modeling represents the system as an ordinary differential equation (ODE) based on Newtonian principles. A more recent modeling approach is based on representing state transitions as a Gaussian process (GP). In this paper, we present a general technique for system identification that combines these two modeling approaches into a single formulation. This is done by training a Gaussian process on the residual between the non-linear model and ground truth training data. The result is a GP-enhanced model that provides an estimate of uncertainty in addition to giving better state predictions than either ODE or GP alone. We show how the GP-enhanced model can be used in conjunction with reinforcement learning to generate a blimp controller that is superior to those learned with ODE or GP models alone.
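The residual-training step described in the abstract can be sketched with plain GP regression: learn the discrepancy between the physics model and the data, then add the learned correction back at prediction time. Everything below (the `ode_predict` callable, the RBF kernel, the fixed hyperparameters, 1-D inputs) is illustrative rather than the paper's actual model:

```python
import numpy as np

def fit_residual_gp(X, y, ode_predict, ell=1.0, sf2=1.0, noise=1e-2):
    """Train a GP on the residual y - ode_predict(X); return a predictor."""
    r = y - ode_predict(X)            # the model error the GP must learn
    K = sf2 * np.exp(-0.5 * ((X[:, None] - X[None, :]) / ell) ** 2)
    alpha = np.linalg.solve(K + noise * np.eye(len(X)), r)

    def predict(x_star):
        k_star = sf2 * np.exp(-0.5 * ((x_star[:, None] - X[None, :]) / ell) ** 2)
        # Enhanced model = physics-based prediction + learned GP correction.
        return ode_predict(x_star) + k_star @ alpha
    return predict
```

In the paper the GP correction also supplies an uncertainty estimate; extending this sketch to return the posterior variance alongside the mean is straightforward.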

Citation Context

...high dimensional spaces. Key advantages of GPs are their ability to provide uncertainty estimates and to learn the noise and smoothness parameters from training data. More information can be found in [12]. A GP can be thought of as a “Gaussian over functions”. More precisely, a GP describes a stochastic process in which the random variables, in this case the outputs of the modeled function, are jointl...
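The "jointly Gaussian outputs" view in this excerpt can be made concrete: evaluating a GP at any finite set of inputs yields one multivariate Gaussian draw per function. A sketch with a squared-exponential covariance (the hyperparameter values are arbitrary):

```python
import numpy as np

def sample_gp_prior(x, n_draws, ell=1.0, sf2=1.0, rng=None):
    """Draw n_draws functions, evaluated at inputs x, from a zero-mean GP prior."""
    if rng is None:
        rng = np.random.default_rng()
    # The outputs at the points in x are jointly Gaussian with covariance K.
    K = sf2 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / ell) ** 2)
    L = np.linalg.cholesky(K + 1e-6 * np.eye(len(x)))  # jitter for stability
    return L @ rng.standard_normal((len(x), n_draws))  # shape (len(x), n_draws)
```

Shorter length-scales `ell` give wigglier draws; this is exactly the sense in which a GP is a distribution over functions whose smoothness is controlled by the covariance.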
