Results 1–10 of 14
Backward simulation methods for Monte Carlo statistical inference. Foundations and Trends in Econometrics, 2013
Ancestor Sampling for Particle Gibbs
Abstract

Cited by 15 (8 self)
We present a novel method in the family of particle MCMC methods that we refer to as particle Gibbs with ancestor sampling (PGAS). Similarly to the existing PG with backward simulation (PGBS) procedure, we use backward sampling to (considerably) improve the mixing of the PG kernel. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. We apply the PGAS framework to the challenging class of non-Markovian state-space models. We develop a truncation strategy of these models that is applicable in principle to any backward-simulation-based method, but which is particularly well suited to the PGAS framework. In particular, as we show in a simulation study, PGAS can yield an order-of-magnitude improved accuracy relative to PGBS due to its robustness to the truncation error. Several application examples are discussed, including Rao-Blackwellized particle smoothing and inference in degenerate state-space models.
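The single-forward-sweep idea in the abstract above can be illustrated with a minimal sketch. The key step is that, at each time point, the reference trajectory's ancestor is not kept fixed (as in plain particle Gibbs) but resampled, with each particle weighted by how well it explains the next reference state. The sketch below assumes a simple linear-Gaussian model, x_t = 0.9 x_{t-1} + v_t, y_t = x_t + e_t with standard normal noise; the function name and structure are illustrative, not taken from the paper.

```python
import numpy as np

def pgas_sweep(y, x_ref, N, rng):
    """One PGAS sweep for an assumed linear-Gaussian model (a sketch).

    y     : observations y_0 .. y_{T-1}
    x_ref : reference trajectory from the previous MCMC iteration
    N     : number of particles; slot N-1 carries the reference
    """
    T = len(y)
    X = np.empty((T, N))
    A = np.zeros((T, N), dtype=int)              # ancestor indices
    X[0, :-1] = rng.normal(0.0, 1.0, N - 1)      # prior draw at t = 0
    X[0, -1] = x_ref[0]                          # deterministic reference state
    logw = -0.5 * (y[0] - X[0]) ** 2             # log g(y_0 | x_0), up to a constant
    for t in range(1, T):
        w = np.exp(logw - logw.max()); w /= w.sum()
        A[t, :-1] = rng.choice(N, size=N - 1, p=w)               # resampling
        X[t, :-1] = 0.9 * X[t - 1, A[t, :-1]] + rng.normal(size=N - 1)
        X[t, -1] = x_ref[t]
        # Ancestor sampling: draw a new ancestor for the reference particle,
        # weighting each time-(t-1) particle by f(x_ref[t] | x_{t-1}).
        logw_as = logw - 0.5 * (x_ref[t] - 0.9 * X[t - 1]) ** 2
        w_as = np.exp(logw_as - logw_as.max()); w_as /= w_as.sum()
        A[t, -1] = rng.choice(N, p=w_as)
        logw = -0.5 * (y[t] - X[t]) ** 2
    # Sample one particle at time T-1 and trace its ancestral lineage back.
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(N, p=w)
    traj = np.empty(T)
    for t in range(T - 1, -1, -1):
        traj[t] = X[t, k]
        k = A[t, k]
    return traj
```

The returned trajectory is the new reference for the next MCMC iteration; only the forward loop is needed, with no separate backward sweep.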
On the Use of Backward Simulation in the Particle Gibbs Sampler
Abstract

Cited by 11 (7 self)
The particle Gibbs (PG) sampler was introduced in [1] as a way to incorporate a particle filter (PF) in a Markov chain Monte Carlo (MCMC) sampler. The resulting method was shown to be an efficient tool for joint Bayesian parameter and state inference in nonlinear, non-Gaussian state-space models. However, the mixing of the PG kernel can be very poor when there is severe degeneracy in the PF. Hence, the success of the PG sampler relies heavily on the, often unrealistic, assumption that we can implement a PF without suffering from any considerable degeneracy. However, as pointed out by Whiteley [2] in the discussion following [1], the mixing can be improved by adding a backward simulation step to the PG sampler. Here, we investigate this further, derive an explicit PG sampler with backward simulation (denoted PGBSi) and show that this indeed is a valid MCMC method. Furthermore, we show in a numerical example that backward simulation can lead to a considerable increase in performance over the standard PG sampler. Index Terms: Particle Markov chain Monte Carlo, particle filter, particle Gibbs, backward simulation, Gibbs sampling.
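The backward simulation step described above can be sketched in a few lines: after a forward particle filter pass, one state is drawn at the final time, and the sampler then walks backward, reweighting each earlier particle by the transition density into the state already chosen. The sketch assumes an AR(1) transition x_{t+1} = phi * x_t + v_t with standard normal noise; the function name is illustrative, not from the paper.

```python
import numpy as np

def backward_simulate(X, logW, phi, rng):
    """Backward-simulation pass (a sketch under an assumed AR(1) transition).

    X    : (T, N) particle positions from a forward particle filter
    logW : (T, N) unnormalised log filter weights
    phi  : AR(1) coefficient of the assumed transition model
    Returns one trajectory drawn from the approximate smoothing distribution.
    """
    T, N = X.shape
    traj = np.empty(T)
    w = np.exp(logW[-1] - logW[-1].max()); w /= w.sum()
    j = rng.choice(N, p=w)
    traj[-1] = X[-1, j]
    for t in range(T - 2, -1, -1):
        # Reweight time-t particles by how likely each one is to transition
        # into the state already chosen at time t + 1.
        logw_bs = logW[t] - 0.5 * (traj[t + 1] - phi * X[t]) ** 2
        w_bs = np.exp(logw_bs - logw_bs.max()); w_bs /= w_bs.sum()
        j = rng.choice(N, p=w_bs)
        traj[t] = X[t, j]
    return traj
```

In a PGBSi-style sampler, the trajectory returned here would replace the reference trajectory before the next particle filter sweep, which is what breaks the path degeneracy of plain PG.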
Bayesian semiparametric Wiener system identification
, 2013
Abstract

Cited by 10 (7 self)
We present a novel method for Wiener system identification. The method relies on a semiparametric, i.e. a mixed parametric/nonparametric, model of a Wiener system. We use a state-space model for the linear dynamical system and a nonparametric Gaussian process model for the static nonlinearity. We avoid making strong assumptions, such as monotonicity, on the nonlinear mapping. Stochastic disturbances, entering both as measurement noise and as process noise, are handled in a systematic manner. The nonparametric nature of the Gaussian process allows us to handle a wide range of nonlinearities without making problem-specific parameterizations. We also consider sparsity-promoting priors, based on generalized hyperbolic distributions, to automatically infer the order of the underlying dynamical system. We derive an inference algorithm based on an efficient particle Markov chain Monte Carlo method, referred to as particle Gibbs with ancestor sampling. The method is profiled on two challenging identification problems with good results. Blind Wiener system identification is handled as a special case.
Particle Gibbs with Ancestor Sampling
Abstract

Cited by 7 (6 self)
Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). We present a new PMCMC algorithm that we refer to as particle Gibbs with ancestor sampling (PGAS). PGAS provides the data analyst with an off-the-shelf class of Markov kernels that can be used to simulate, for instance, the typically high-dimensional and highly autocorrelated state trajectory in a state-space model. The ancestor sampling procedure enables fast mixing of the PGAS kernel even when using seemingly few particles in the underlying SMC sampler. This is important as it can significantly reduce the computational burden that is typically associated with using SMC. PGAS is conceptually similar to the existing PG with backward simulation (PGBS) procedure. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. This makes PGAS well suited for addressing inference problems not only in state-space models, but also in models with more complex dependencies, such as non-Markovian, Bayesian nonparametric, and general probabilistic graphical models.
On Particle Methods for Parameter Estimation in State-Space Models, arXiv Working Paper, 2014
Abstract

Cited by 6 (1 self)
Abstract. Nonlinear non-Gaussian state-space models are ubiquitous in statistics, econometrics, information engineering and signal processing. Particle methods, also known as Sequential Monte Carlo (SMC) methods, provide reliable numerical approximations to the associated state inference problems. However, in most applications, the state-space model of interest also depends on unknown static parameters that need to be estimated from the data. In this context, standard particle methods fail and it is necessary to rely on more sophisticated algorithms. The aim of this paper is to present a comprehensive review of particle methods that have been proposed to perform static parameter estimation in state-space models. We discuss the advantages and limitations of these methods and illustrate their performance on simple models. Key words and phrases: Bayesian inference, maximum likelihood inference, particle filtering, Sequential Monte Carlo, state-space models.
Rao-Blackwellized particle smoothers for mixed linear/nonlinear state-space models
, 2011
Particle Metropolis-Hastings Using Langevin Dynamics
Abstract

Cited by 3 (2 self)
Particle Markov Chain Monte Carlo (PMCMC) samplers allow for routine inference of parameters and states in challenging nonlinear problems. A common choice for the parameter proposal is a simple random walk sampler, which can scale poorly with the number of parameters. In this paper, we propose to use log-likelihood gradients, i.e. the score, in the construction of the proposal, akin to the Langevin Monte Carlo method, but adapted to the PMCMC framework. This can be thought of as a way to guide a random walk proposal by using drift terms that are proportional to the score function. The method is successfully applied to a stochastic volatility model and the drift term exhibits intuitive behaviour.
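The score-guided proposal described above has the generic Langevin (MALA-style) form: the new parameter is the current one plus a drift proportional to the estimated score, plus Gaussian noise. A minimal sketch follows; the score estimate would, in this setting, come from the particle filter, and the function names here are illustrative, not from the paper.

```python
import numpy as np

def langevin_proposal(theta, score_hat, eps, rng):
    """Score-guided random-walk proposal (a sketch).

    theta     : current parameter vector
    score_hat : estimate of the gradient of the log target at theta
                (in PMCMC, a particle estimate of the score)
    eps       : step size
    """
    drift = 0.5 * eps ** 2 * score_hat
    return theta + drift + eps * rng.normal(size=theta.shape)

def log_q(theta_to, theta_from, score_from, eps):
    """Log proposal density q(theta_to | theta_from), up to a constant.

    Needed in the Metropolis-Hastings acceptance ratio because the drift
    makes the proposal asymmetric, unlike a plain random walk.
    """
    mean = theta_from + 0.5 * eps ** 2 * score_from
    return -0.5 * np.sum((theta_to - mean) ** 2) / eps ** 2
```

With score_hat set to zero this reduces to the standard random walk proposal, which makes the comparison in the abstract concrete: the drift is the only difference.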
On the particle Gibbs sampler
, 2013
Abstract

Cited by 3 (0 self)
Abstract. The particle Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm which operates on the extended space of the auxiliary variables generated by an interacting particle system. In particular, it samples the discrete variables that determine the particle genealogy. We propose a coupling construction between two particle Gibbs updates from different starting points, which is such that the coupling probability may be made arbitrarily large by taking the particle system large enough. A direct consequence of this result is the uniform ergodicity of the particle Gibbs Markov kernel. We discuss several algorithmic variations of particle Gibbs, either proposed in the literature or original. For some of these variants we are able to prove that they dominate the original algorithm in asymptotic efficiency as measured by the variance of the central limit theorem's limiting distribution. A detailed numerical study is provided to demonstrate the efficacy of particle Gibbs and the proposed variants.