Results 1–10 of 44
GP-BayesFilters: Bayesian Filtering Using Gaussian Process Prediction and Observation Models
in Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2008
Cited by 66 (5 self)
Bayesian filtering is a general framework for recursively estimating the state of a dynamical system. The most common instantiations of Bayes filters are Kalman filters (extended and unscented) and particle filters. Key components of each Bayes filter are probabilistic prediction and observation models. Recently, Gaussian processes have been introduced as a nonparametric technique for learning such models from training data. In the context of unscented Kalman filters, these models have been shown to provide estimates that can be superior to those achieved with standard, parametric models. In this paper we show how Gaussian process models can be integrated into other Bayes filters, namely particle filters and extended Kalman filters. We provide a complexity analysis of these filters and evaluate the alternative techniques using data collected with an autonomous micro-blimp.
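The prediction step the abstract describes can be sketched in a few lines: train a GP on observed state transitions, then propagate particles by sampling from the GP predictive distribution. This is a minimal illustrative sketch, not the paper's implementation; the 1-D toy dynamics, kernel choice, and helper names (`rbf`, `gp_predict`) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ell=0.5, sf=1.0):
    # Squared-exponential covariance between 1-D input vectors a and b.
    return sf**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_predict(xtr, ytr, xte, noise=0.05):
    # Standard GP regression posterior mean and variance at test inputs.
    K = rbf(xtr, xtr) + noise**2 * np.eye(len(xtr))
    Ks = rbf(xte, xtr)
    mean = Ks @ np.linalg.solve(K, ytr)
    var = np.diag(rbf(xte, xte)) - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.maximum(var, 1e-9)

# Learn a transition model x_{t+1} = f(x_t) from observed transitions.
def f(x):
    return 0.9 * x + 0.2 * np.sin(3 * x)  # "true" dynamics, unknown to the filter

xs = np.linspace(-2.0, 2.0, 30)
ys = f(xs) + 0.05 * rng.standard_normal(xs.shape)

# GP prediction step of a particle filter: propagate each particle by
# sampling from the GP predictive distribution over the next state.
particles = rng.uniform(-1.0, 1.0, size=200)
mu, var = gp_predict(xs, ys, particles)
particles = mu + np.sqrt(var) * rng.standard_normal(mu.shape)
```

Sampling from the predictive (rather than using only the mean) is what lets the filter's particles reflect model uncertainty in regions with little training data.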
Nonstationary Gaussian Process Regression using Point Estimates of Local Smoothness
Cited by 13 (3 self)
Gaussian processes using nonstationary covariance functions are a powerful tool for Bayesian regression with input-dependent smoothness. A common approach is to model the local smoothness by a latent process that is integrated over using Markov chain Monte Carlo approaches. In this paper, we demonstrate that an approximation that uses the estimated mean of the local smoothness yields good results and allows one to employ efficient gradient-based optimization techniques for jointly learning the parameters of the latent and the observed processes. Extensive experiments on both synthetic and real-world data, including challenging problems in robotics, show the relevance and feasibility of our approach.
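A concrete way to see what "input-dependent smoothness" means is a nonstationary squared-exponential (Gibbs) covariance, in which the lengthscale is itself a function of the input. The sketch below plugs in a point estimate of the local lengthscale, in the spirit of the approach above; the specific functional form of `ell` is assumed purely for illustration.

```python
import numpy as np

def gibbs_kernel(a, b, ell):
    # Nonstationary squared-exponential ("Gibbs") covariance in 1-D with
    # input-dependent lengthscale ell(x); it reduces to the ordinary RBF
    # kernel when ell is constant.
    la, lb = ell(a)[:, None], ell(b)[None, :]
    pre = np.sqrt(2.0 * la * lb / (la**2 + lb**2))
    return pre * np.exp(-((a[:, None] - b[None, :]) ** 2) / (la**2 + lb**2))

# Point-estimate idea, in spirit: plug a deterministic estimate of the
# local lengthscale into the kernel instead of integrating over a latent
# lengthscale process with MCMC.
def ell(x):
    return 0.1 + 0.5 * np.abs(x)  # assumed: smoother far from the origin

x = np.linspace(-1.0, 1.0, 5)
K = gibbs_kernel(x, x, ell)
```

Because `ell` enters the kernel deterministically, its parameters can be optimized jointly with the other hyperparameters by gradient methods, which is the computational point the abstract makes.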
Gaussian process regression networks
in Proceedings of the 29th International Conference on Machine Learning (ICML), 2012
Cited by 10 (4 self)
We introduce a new regression framework, Gaussian process regression networks (GPRN), which combines the structural properties of Bayesian neural networks with the nonparametric flexibility of Gaussian processes. GPRN accommodates input (predictor) dependent signal and noise correlations between multiple output (response) variables, input-dependent length-scales and amplitudes, and heavy-tailed predictive distributions. We derive both elliptical slice sampling and variational Bayes inference procedures for GPRN. We apply GPRN as a multiple output regression and multivariate volatility model, demonstrating substantially improved performance over eight popular multiple output (multi-task) Gaussian process models and three multivariate volatility models on real datasets, including a 1000 dimensional gene expression dataset.
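The structural idea can be shown directly: outputs are latent GP functions mixed by weights that are themselves functions of the input, y(x) = W(x) f(x) + noise, which is what yields input-dependent output correlations. In this sketch the smoothed random curves are a cheap stand-in for actual GP samples, used only to illustrate the construction; all dimensions are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.linspace(0.0, 1.0, 100)

def smooth_curves(n):
    # Cheap stand-in for GP samples: Gaussian-smoothed white noise.
    z = rng.standard_normal((n, x.size))
    k = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)
    k /= k.sum()
    return np.array([np.convolve(zi, k, mode="same") for zi in z])

q, p = 2, 3                                     # latent functions, outputs
f = smooth_curves(q)                            # q latent "GP" functions
W = smooth_curves(p * q).reshape(p, q, x.size)  # input-dependent mixing weights
# GPRN structure: y(x) = W(x) @ f(x), so correlations between the p
# outputs change with the input as the weights change.
y = np.einsum("pqt,qt->pt", W, f)
```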
Variational Bayesian optimization for runtime risk-sensitive control
in Robotics: Science and Systems VIII (RSS), 2012
Cited by 5 (3 self)
We present a new Bayesian policy search algorithm suitable for problems with policy-dependent cost variance, a property present in many robot control tasks. We extend recent work on variational heteroscedastic Gaussian processes to the optimization case to achieve efficient minimization of very noisy cost signals. In contrast to most policy search algorithms, our method explicitly models the cost variance in regions of low expected cost and permits runtime adjustment of risk sensitivity without relearning. Our experiments with artificial systems and a real mobile manipulator demonstrate that flexible risk-sensitive policies can be learned in very few trials.
Gaussian process methods for estimating cortical maps
in NeuroImage, 2011
Cited by 5 (0 self)
A striking feature of cortical organization is that the encoding of many stimulus features, for example orientation or direction selectivity, is arranged into topographic maps. Functional imaging methods such as optical imaging of intrinsic signals, voltage-sensitive dye imaging, or functional magnetic resonance imaging are important tools for studying the structure of cortical maps. As functional imaging measurements are usually noisy, statistical processing of the data is necessary to extract maps from the imaging data. Here we present a probabilistic model of functional imaging data based on Gaussian processes. In comparison to conventional approaches, our model yields superior estimates of cortical maps from smaller amounts of data. In addition, we obtain quantitative uncertainty estimates, i.e. error bars on properties of the estimated map. We use our probabilistic model to study the coding properties of the map and the role of noise correlations by decoding the stimulus from single trials of an imaging experiment.
Variable risk dynamic mobile manipulation
in RSS 2012 Workshop on Mobile Manipulation, 2012
Cited by 4 (4 self)
The ability to operate effectively in a variety of contexts will be a critical attribute of deployed mobile manipulators. In general, a variety of properties, such as battery charge, workspace constraints, and the presence of dangerous obstacles, will determine the suitability of particular control policies. Some context changes will cause shifts in risk sensitivity, i.e. the tendency to seek or avoid policies with high performance variation. We describe a policy search algorithm designed to address the problem of variable risk control. We generalize the simple stochastic gradient descent update to the risk-sensitive case, and show that, under certain conditions, it leads to an unbiased estimate of the gradient of the risk-sensitive objective. We show that the local critic structure used in the update can be exploited to interweave offline and online search to select locally greedy policies or quickly change risk sensitivity. We evaluate the algorithm in experiments with a dynamically stable mobile manipulator lifting a heavy liquid-filled bottle while balancing.
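The notion of a risk-sensitive objective that several of these abstracts share can be made concrete as J(θ) = E[c(θ)] + κ·Std[c(θ)], where κ > 0 avoids high cost variance, κ < 0 seeks it, and κ can be changed at runtime. The stochastic hill-climbing loop below is an illustrative stand-in for the risk-sensitive gradient update described above, and the toy cost function is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost_samples(theta, n=400):
    # Toy task (assumed): mean cost (theta - 1)^2, with cost noise
    # that grows with |theta|, so variance is policy-dependent.
    return (theta - 1.0) ** 2 + np.abs(theta) * rng.standard_normal(n)

def J(theta, kappa):
    # Risk-sensitive objective: expected cost plus kappa times its std.
    c = cost_samples(theta)
    return c.mean() + kappa * c.std()

theta, kappa = 3.0, 0.5  # kappa > 0: risk-averse
for _ in range(200):
    cand = theta + 0.1 * rng.standard_normal()
    if J(cand, kappa) < J(theta, kappa):  # keep risk-sensitive improvements
        theta = cand
```

With κ > 0 the optimum is pulled below the minimum of the mean cost alone, because larger |θ| buys extra cost variance; setting κ = 0 recovers ordinary expected-cost minimization.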
Actively Learning Level-Sets of Composite Functions
Cited by 3 (0 self)
Scientists frequently have multiple types of experiments and data sets on which they can test the validity of their parameterized models and locate plausible regions for the model parameters. By examining multiple data sets, scientists can obtain inferences which are typically much more informative than the deductions derived from each of the data sources independently. Several standard data combination techniques result in target functions which are a weighted sum of the observed data sources. Thus, computing constraints on the plausible regions of the model parameter space can be formulated as finding a level set of a target function which is the sum of observable functions. We propose an active learning algorithm for this problem which selects both a sample (from the parameter space) and an observable function upon which to compute the next sample. Empirical tests on synthetic functions and on real data for an eight-parameter cosmological model show that our algorithm significantly reduces the number of samples required to identify the desired level set.
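A minimal sketch of GP-based level-set search, using the "straddle"-style acquisition from the level-set estimation literature: sample where the GP is both uncertain and predicted to be near the target level. This simplified version queries the full sum directly, whereas the algorithm above also chooses which observable function to evaluate; the specific functions, level, and grid are assumptions.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp(xtr, ytr, xte, noise=1e-3):
    # GP posterior mean and variance (unit prior variance).
    K = rbf(xtr, xtr) + noise * np.eye(len(xtr))
    Ks = rbf(xte, xtr)
    mu = Ks @ np.linalg.solve(K, ytr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.maximum(var, 1e-12)

# Target is a sum of observable functions; we want where it crosses level t.
f1 = lambda x: np.sin(3.0 * x)
f2 = lambda x: 0.5 * x
f = lambda x: f1(x) + f2(x)
t = 0.5
grid = np.linspace(-2.0, 2.0, 200)

X = list(np.linspace(-2.0, 2.0, 4))  # initial design
Y = [f(v) for v in X]
for _ in range(15):
    mu, var = gp(np.array(X), np.array(Y), grid)
    # "Straddle" acquisition: high where the GP is uncertain AND
    # predicted to lie close to the level t.
    acq = 1.96 * np.sqrt(var) - np.abs(mu - t)
    xn = grid[int(np.argmax(acq))]
    X.append(xn)
    Y.append(f(xn))
```

After a few iterations the queried points concentrate around the crossings of f(x) = t, rather than covering the whole parameter range uniformly.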
GPS-ABC: Gaussian Process Surrogate Approximate Bayesian Computation
Cited by 3 (2 self)
Scientists often express their understanding of the world through a computationally demanding simulation program. Analyzing the posterior distribution of the parameters given observations (the inverse problem) can be extremely challenging. The Approximate Bayesian Computation (ABC) framework is the standard statistical tool for handling these likelihood-free problems, but it requires a very large number of simulations. In this work we develop two new ABC sampling algorithms that significantly reduce the number of simulations necessary for posterior inference. Both algorithms use confidence estimates for the accept probability in the Metropolis-Hastings step to adaptively choose the number of necessary simulations. Our GPS-ABC algorithm stores the information obtained from every simulation in a Gaussian process, which acts as a surrogate function for the simulated statistics. Experiments on a challenging realistic biological problem illustrate the potential of these algorithms.
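The adaptive accept step can be sketched as follows: since the ABC accept probability is only estimated from simulations, keep simulating until the estimate is confidently above or below the uniform draw u, stopping early once the decision is clear. The toy simulator, kernel, and crude standard-error formula below are illustrative assumptions, not the paper's procedure; GPS-ABC additionally replaces fresh simulations with a GP surrogate.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(theta, n):
    return theta + rng.standard_normal(n)  # assumed toy simulator

def abc_kernel(x, y_obs, eps=0.5):
    # Soft ABC kernel: weight simulations by closeness to the observed data.
    return np.exp(-0.5 * ((x - y_obs) / eps) ** 2)

def adaptive_accept(theta, theta_new, y_obs, u, batch=10, max_sims=200):
    num, den = [], []
    while len(num) < max_sims:
        num.extend(abc_kernel(simulate(theta_new, batch), y_obs))
        den.extend(abc_kernel(simulate(theta, batch), y_obs))
        alpha = (np.mean(num) + 1e-12) / (np.mean(den) + 1e-12)
        # Crude standard error of the ratio via summed relative errors.
        rel = (np.std(num) / (np.mean(num) + 1e-12)
               + np.std(den) / (np.mean(den) + 1e-12)) / np.sqrt(len(num))
        if abs(alpha - u) > 2.0 * alpha * rel:  # decision is confident: stop
            break
    return u < alpha, len(num)

accepted, n_sims = adaptive_accept(5.0, 1.0, y_obs=1.0, u=0.5)
```

The point of the early stop is that easy decisions (accept probability far from u) cost only a few simulations, while hard ones near the threshold get more.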
Learning nonstationary system dynamics online using Gaussian processes
in Pattern Recognition, 2010
Cited by 2 (0 self)
Gaussian processes are a powerful nonparametric framework for solving various regression problems. In this paper, we address the task of learning a Gaussian process model of nonstationary system dynamics in an online fashion. We propose an extension to previous models that can appropriately handle outdated training samples by decreasing their influence on the predictive distribution. The resulting model estimates an individual noise level for each sample of the training set and thereby produces a mean shift towards more reliable observations. As a result, our model improves the prediction accuracy in the context of nonstationary function approximation and can furthermore detect outliers based on the resulting noise levels. Our approach is easy to implement and is based upon standard Gaussian process techniques. We demonstrate in a real-world application, in which our algorithm learns the system dynamics of a miniature blimp, that it benefits from individual noise levels and outperforms standard methods.
Variable Risk Control via Stochastic Optimization
Cited by 2 (0 self)
We present new global and local policy search algorithms suitable for problems with policy-dependent cost variance (or risk), a property present in many robot control tasks. These algorithms exploit new techniques in nonparametric heteroscedastic regression to directly model the policy-dependent distribution of cost. For local search, the learned cost model can be used as a critic for performing risk-sensitive gradient descent. Alternatively, decision-theoretic criteria can be applied to globally select policies to balance exploration and exploitation in a principled way, or to perform greedy minimization with respect to various risk-sensitive criteria. This separation of learning and policy selection permits variable risk control, where risk sensitivity can be flexibly adjusted and appropriate policies can be selected at runtime without relearning. We describe experiments in dynamic stabilization and manipulation with a mobile manipulator that demonstrate learning of flexible, risk-sensitive policies in very few trials.