Results 1–10 of 14
Bayesian semiparametric Wiener system identification, 2013
Abstract

Cited by 10 (7 self)
We present a novel method for Wiener system identification. The method relies on a semiparametric, i.e. a mixed parametric/nonparametric, model of a Wiener system. We use a state-space model for the linear dynamical system and a nonparametric Gaussian process model for the static nonlinearity. We avoid making strong assumptions, such as monotonicity, on the nonlinear mapping. Stochastic disturbances, entering both as measurement noise and as process noise, are handled in a systematic manner. The nonparametric nature of the Gaussian process allows us to handle a wide range of nonlinearities without making problem-specific parameterizations. We also consider sparsity-promoting priors, based on generalized hyperbolic distributions, to automatically infer the order of the underlying dynamical system. We derive an inference algorithm based on an efficient particle Markov chain Monte Carlo method, referred to as particle Gibbs with ancestor sampling. The method is profiled on two challenging identification problems with good results. Blind Wiener system identification is handled as a special case.
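The Wiener structure described in this abstract, linear state-space dynamics followed by a static nonlinearity with both process and measurement noise, can be sketched as follows. All concrete values below (A, B, C, the tanh nonlinearity, the noise variances) are illustrative assumptions, not values from the paper, where the nonlinearity instead receives a Gaussian process prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Wiener system: linear state-space dynamics followed by a static
# nonlinearity g. All parameter values are assumptions for this sketch.
A, B, C = 0.8, 1.0, 1.0
g = np.tanh  # stands in for the unknown nonlinearity (GP-modeled in the paper)

def simulate_wiener(u, q=0.05, r=0.05):
    """Simulate y_t = g(C x_t) + e_t with x_t = A x_{t-1} + B u_t + v_t."""
    x, y = 0.0, []
    for ut in u:
        x = A * x + B * ut + np.sqrt(q) * rng.normal()   # process noise v_t
        y.append(g(C * x) + np.sqrt(r) * rng.normal())   # measurement noise e_t
    return np.array(y)

u = rng.normal(size=100)   # excitation signal
y = simulate_wiener(u)
```

Identification then amounts to recovering both the linear parameters and g from (u, y); the paper does this jointly with particle Gibbs, without assuming g is monotone.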
Particle Gibbs with Ancestor Sampling
Abstract

Cited by 8 (6 self)
Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). We present a new PMCMC algorithm that we refer to as particle Gibbs with ancestor sampling (PGAS). PGAS provides the data analyst with an off-the-shelf class of Markov kernels that can be used to simulate, for instance, the typically high-dimensional and highly autocorrelated state trajectory in a state-space model. The ancestor sampling procedure enables fast mixing of the PGAS kernel even when using seemingly few particles in the underlying SMC sampler. This is important as it can significantly reduce the computational burden that is typically associated with using SMC. PGAS is conceptually similar to the existing PG with backward simulation (PGBS) procedure. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. This makes PGAS well suited for addressing inference problems not only in state-space models, but also in models with more complex dependencies, such as non-Markovian, Bayesian nonparametric, and general probabilistic graphical models.
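As a concrete illustration of the PGAS kernel, here is a minimal conditional SMC sweep with ancestor sampling for a toy linear-Gaussian state-space model. The model, its parameters, and the particle count are assumptions for the sketch, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed): x_t = 0.7 x_{t-1} + v_t, v_t ~ N(0,1);
#                      y_t = x_t + e_t,       e_t ~ N(0,0.5)
A, Q, R = 0.7, 1.0, 0.5

def log_norm(x, mean, var):
    return -0.5 * ((x - mean) ** 2 / var + np.log(2 * np.pi * var))

def pgas_sweep(y, x_ref, N=20):
    """One sweep of the PGAS kernel: conditional SMC in which particle
    N-1 is pinned to the reference trajectory, with ancestor sampling."""
    T = len(y)
    x = np.zeros((T, N))
    anc = np.zeros((T, N), dtype=int)
    x[0] = rng.normal(0.0, np.sqrt(Q), N)
    x[0, -1] = x_ref[0]                        # condition on the reference
    logw = log_norm(y[0], x[0], R)
    for t in range(1, T):
        w = np.exp(logw - logw.max()); w /= w.sum()
        anc[t] = rng.choice(N, size=N, p=w)
        # Ancestor sampling: draw a new ancestor for the reference particle
        # with weights proportional to w_i * p(x_ref[t] | x[t-1, i]).
        logv = logw + log_norm(x_ref[t], A * x[t - 1], Q)
        v = np.exp(logv - logv.max()); v /= v.sum()
        anc[t, -1] = rng.choice(N, p=v)
        x[t] = A * x[t - 1, anc[t]] + rng.normal(0.0, np.sqrt(Q), N)
        x[t, -1] = x_ref[t]                    # keep the reference path fixed
        logw = log_norm(y[t], x[t], R)
    # Trace back one trajectory drawn from the final particle weights.
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(N, p=w)
    traj = np.zeros(T)
    for t in reversed(range(T)):
        traj[t] = x[t, k]
        k = anc[t, k] if t > 0 else k
    return traj

y_obs = rng.normal(size=30)   # dummy observations for the sketch
x_ref = np.zeros(30)          # arbitrary initial reference trajectory
for _ in range(5):            # a few iterations of the PGAS kernel
    x_ref = pgas_sweep(y_obs, x_ref)
```

Each call is one Markov kernel update; the ancestor-sampling step (the `logv` draw) is what replaces the separate backward sweep of PGBS within a single forward pass.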
Bayesian inference and learning in Gaussian process state-space models with particle MCMC
In Advances in Neural Information Processing Systems (NIPS), 2013
Abstract

Cited by 7 (3 self)
State-space models are successfully used in many areas of science, engineering and economics to model time series and dynamical systems. We present a fully Bayesian approach to inference and learning (i.e. state estimation and system identification) in nonlinear nonparametric state-space models. We place a Gaussian process prior over the state transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena. To enable efficient inference, we marginalize over the transition dynamics function and, instead, infer directly the joint smoothing distribution using specially tailored Particle Markov Chain Monte Carlo samplers. Once a sample from the smoothing distribution is computed, the state transition predictive distribution can be formulated analytically. Our approach preserves the full nonparametric expressivity of the model and can make use of sparse Gaussian processes to greatly reduce computational complexity.
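The analytic predictive step mentioned at the end of this abstract can be illustrated with standard GP regression on consecutive-state pairs from one smoothed trajectory. The kernel, hyperparameters, and stand-in trajectory below are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=50))   # stand-in for one sampled state trajectory

def kern(a, b, ell=1.0, sf2=1.0):
    """Squared-exponential covariance (hyperparameters assumed fixed)."""
    return sf2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# GP regression of x_{t+1} on x_t: the transition-function posterior given
# the sampled trajectory, obtained in the usual closed form.
X, Y = x[:-1], x[1:]
sn2 = 0.1                            # process-noise variance (assumed)
K = kern(X, X) + sn2 * np.eye(len(X))
alpha = np.linalg.solve(K, Y)

def predict(xs):
    """Predictive mean and variance of the state transition at inputs xs."""
    Ks = kern(xs, X)
    mean = Ks @ alpha
    var = kern(xs, xs).diagonal() - np.einsum(
        'ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return mean, var

mean, var = predict(np.linspace(x.min(), x.max(), 5))
```

This is why no parametric form for the dynamics is ever needed: given a smoothing sample, the transition predictive follows from GP conditioning alone.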
PARTICLE METROPOLIS-HASTINGS USING LANGEVIN DYNAMICS, 2013
Abstract

Cited by 3 (2 self)
Particle Markov Chain Monte Carlo (PMCMC) samplers allow for routine inference of parameters and states in challenging nonlinear problems. A common choice for the parameter proposal is a simple random walk sampler, which can scale poorly with the number of parameters. In this paper, we propose to use log-likelihood gradients, i.e. the score, in the construction of the proposal, akin to the Langevin Monte Carlo method, but adapted to the PMCMC framework. This can be thought of as a way to guide a random walk proposal by using drift terms that are proportional to the score function. The method is successfully applied to a stochastic volatility model and the drift term exhibits intuitive behaviour.
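The score-guided proposal amounts to a Metropolis-adjusted Langevin step inside the MH accept/reject. In the paper the log-likelihood and score come from a particle filter; the quadratic stand-in below is an assumption so that the sketch is self-contained and runnable:

```python
import numpy as np

rng = np.random.default_rng(2)

def loglik_and_score(theta):
    """Stand-in for the particle-filter log-likelihood estimate and its
    score (assumed: target log-density -theta^2/2, score -theta)."""
    return -0.5 * theta ** 2, -theta

def pmh_langevin(theta0, n_iter=200, step=0.5):
    theta = theta0
    ll, score = loglik_and_score(theta)
    chain = []
    for _ in range(n_iter):
        # Langevin proposal: random walk with a drift proportional to the score.
        mu = theta + 0.5 * step ** 2 * score
        prop = mu + step * rng.normal()
        llp, scorep = loglik_and_score(prop)
        mup = prop + 0.5 * step ** 2 * scorep
        # MH correction accounts for the asymmetric (drifted) proposal.
        logq_fwd = -0.5 * (prop - mu) ** 2 / step ** 2
        logq_rev = -0.5 * (theta - mup) ** 2 / step ** 2
        if np.log(rng.uniform()) < llp - ll + logq_rev - logq_fwd:
            theta, ll, score = prop, llp, scorep
        chain.append(theta)
    return np.array(chain)

chain = pmh_langevin(3.0)
```

Setting the drift to zero recovers the plain random-walk proposal the abstract contrasts against; the drift pushes proposals toward higher-likelihood regions, which is what improves scaling with dimension.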
ADAPTIVE STOPPING FOR FAST PARTICLE SMOOTHING
Abstract

Cited by 3 (2 self)
Particle smoothing is useful for offline state inference and parameter learning in nonlinear/non-Gaussian state-space models. However, many particle smoothers, such as the popular forward filter/backward simulator (FFBS), are plagued by a quadratic computational complexity in the number of particles. One approach to tackle this issue is to use rejection-sampling-based FFBS (RS-FFBS), which asymptotically reaches linear complexity. In practice, however, the constants can be quite large and the actual gain in computational time limited. In this contribution, we develop a hybrid method, governed by an adaptive stopping rule, in order to exploit the benefits, but avoid the drawbacks, of RS-FFBS. The resulting particle smoother is shown in a simulation study to be considerably more computationally efficient than both FFBS and RS-FFBS. Index Terms — Sequential Monte Carlo, particle smoothing, backward simulation.
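The hybrid idea, try a cheap rejection-sampling draw a bounded number of times and then fall back to the exact O(N) categorical draw, can be sketched for a toy Gaussian transition density. The particle values, uniform weights, and fixed `max_tries` budget below are assumptions; the paper's stopping rule is adaptive rather than a hard-coded cap:

```python
import numpy as np

rng = np.random.default_rng(3)
A, Q = 0.7, 1.0   # toy Gaussian transition x_t ~ N(A x_{t-1}, Q) (assumed)

def log_f(xt, xprev):
    """Unnormalized log transition density; its maximum over xprev is 0,
    so exp(log_f) is a valid rejection-sampling acceptance probability."""
    return -0.5 * (xt - A * xprev) ** 2 / Q

def hybrid_backward_sim(x, logw_T, max_tries=5):
    """Backward simulation with a stopping rule: rejection sampling
    (O(1) per trial) up to max_tries, then an exact O(N) fallback."""
    T, N = x.shape
    w = np.exp(logw_T - logw_T.max()); w /= w.sum()
    k = rng.choice(N, p=w)
    traj = [x[-1, k]]
    for t in range(T - 2, -1, -1):
        accepted = False
        for _ in range(max_tries):        # cheap rejection phase
            i = rng.choice(N)             # filter weights taken uniform here
            if np.log(rng.uniform()) < log_f(traj[-1], x[t, i]):
                k, accepted = i, True
                break
        if not accepted:                  # exact categorical fallback, O(N)
            logv = log_f(traj[-1], x[t])
            v = np.exp(logv - logv.max()); v /= v.sum()
            k = rng.choice(N, p=v)
        traj.append(x[t, k])
    return np.array(traj[::-1])

x_particles = rng.normal(size=(20, 50))   # dummy forward-filter particles
traj = hybrid_backward_sim(x_particles, np.zeros(50))
```

Capping the rejection phase bounds the worst case, while the fallback guarantees an exact draw from the backward kernel either way; this is the benefit/drawback trade-off between FFBS and RS-FFBS the abstract describes.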
http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva93461