Results 1–10 of 174
On Sequential Monte Carlo Sampling Methods for Bayesian Filtering
 STATISTICS AND COMPUTING
, 2000
"... In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete time dynamic models that are typically nonlinear and nonGaussian. A general importance sampling framework is develop ..."
Abstract

Cited by 1051 (76 self)
 Add to MetaCart
(Show Context)
In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete-time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed that unifies many of the methods which have been proposed over the last few decades in several different scientific disciplines. Novel extensions to the existing methods are also proposed. We show in particular how to incorporate local linearisation methods similar to those which have previously been employed in the deterministic filtering literature; these lead to very effective importance distributions. Furthermore, we describe a method which uses Rao-Blackwellisation in order to take advantage of the analytic structure present in some important classes of state-space models. In a final section we develop algorithms for prediction, smoothing and evaluation of the likelihood in dynamic models.
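The sequential importance sampling idea surveyed here can be sketched in a few lines. The following toy filter uses the prior transition as the importance distribution (the "bootstrap" choice) with multinomial resampling; the nonlinear model, noise scales, and all numerical values are our own illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D nonlinear model (our choice, for illustration only):
#   x_t = 0.5 x_{t-1} + 25 x_{t-1} / (1 + x_{t-1}^2) + N(0, 10)
#   y_t = x_t^2 / 20 + N(0, 1)
def transition(x):
    return 0.5 * x + 25 * x / (1 + x**2)

def simulate(T=50):
    x = np.zeros(T); y = np.zeros(T)
    x[0] = rng.normal()
    y[0] = x[0]**2 / 20 + rng.normal()
    for t in range(1, T):
        x[t] = transition(x[t-1]) + rng.normal(scale=np.sqrt(10.0))
        y[t] = x[t]**2 / 20 + rng.normal()
    return x, y

def bootstrap_filter(y, n_particles=500):
    """Sequential importance sampling with multinomial resampling."""
    T = len(y)
    particles = rng.normal(size=n_particles)   # draw from the prior on x_0
    means = np.zeros(T)
    for t in range(T):
        if t > 0:  # propagate particles through the state dynamics
            particles = transition(particles) + rng.normal(
                scale=np.sqrt(10.0), size=n_particles)
        # importance weight = observation likelihood (unit-variance Gaussian)
        logw = -0.5 * (y[t] - particles**2 / 20) ** 2
        w = np.exp(logw - logw.max()); w /= w.sum()
        means[t] = np.sum(w * particles)
        # multinomial resampling to combat weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return means

x, y = simulate()
est = bootstrap_filter(y)
```

More effective importance distributions, such as the local-linearisation proposals the paper develops, would replace the prior-transition proposal above.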
Implementing approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations: A manual for the inla program
, 2008
"... Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalised) linear models, (generalised) additive models, smoothingspline models, statespace models, semiparametric regression, spatial and spatiotemp ..."
Abstract

Cited by 294 (20 self)
 Add to MetaCart
Structured additive regression models are perhaps the most commonly used class of models in statistical applications. This class includes, among others, (generalised) linear models, (generalised) additive models, smoothing-spline models, state-space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes, and geostatistical and geoadditive models. In this paper we consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters, and with non-Gaussian response variables. The posterior marginals are not available in closed form due to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, both in terms of convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations
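The core building block of the approach described above is the Laplace approximation: replace an intractable posterior density with a Gaussian centred at its mode, with variance taken from the curvature there. A minimal one-parameter illustration (a Poisson log-rate with a flat prior, far simpler than a latent Gaussian model, and entirely our own toy example):

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(5)
y = rng.poisson(5.0, size=50)  # simulated counts

def neg_log_post(theta):
    # negative log-posterior of theta = log(rate), flat prior
    lam = np.exp(theta)
    return -(np.sum(y) * theta - len(y) * lam)

# mode of the posterior (convex objective, so this is the global optimum)
mode = optimize.minimize_scalar(neg_log_post).x
# curvature at the mode: d^2/dtheta^2 neg_log_post = n * exp(theta)
var = 1.0 / (len(y) * np.exp(mode))
# the Laplace approximation to the posterior of theta is N(mode, var)
```

INLA nests such approximations over the hyperparameters and latent field rather than applying one globally, but the mode-plus-curvature step is the same.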
A Survey of Convergence Results on Particle Filtering Methods for Practitioners
, 2002
"... Optimal filtering problems are ubiquitous in signal processing and related fields. Except for a restricted class of models, the optimal filter does not admit a closedform expression. Particle filtering methods are a set of flexible and powerful sequential Monte Carlo methods designed to solve the o ..."
Abstract

Cited by 247 (8 self)
 Add to MetaCart
Optimal filtering problems are ubiquitous in signal processing and related fields. Except for a restricted class of models, the optimal filter does not admit a closed-form expression. Particle filtering methods are a set of flexible and powerful sequential Monte Carlo methods designed to solve the optimal filtering problem numerically. The posterior distribution of the state is approximated by a large set of Dirac-delta masses (samples/particles) that evolve randomly in time according to the dynamics of the model and the observations. The particles are interacting; thus, classical limit theorems relying on statistically independent samples do not apply. In this paper, our aim is to present a survey of recent convergence results on this class of methods to make them accessible to practitioners.
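The kind of convergence at stake can be illustrated in the simplest IID setting: the empirical (Dirac-mass) approximation E[f(X)] ≈ (1/N) Σ f(X_i) has error shrinking roughly like 1/√N. This sketch is illustrative only; the paper's results cover the harder interacting-particle case, where such classical IID arguments do not directly apply:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_error(f, sampler, n, reps=200):
    """Mean absolute error of the N-sample empirical approximation
    to E[f(X)] = 1, averaged over independent repetitions."""
    errs = [abs(np.mean(f(sampler(n))) - 1.0) for _ in range(reps)]
    return np.mean(errs)

f = lambda x: x**2                     # E[X^2] = 1 for X ~ N(0, 1)
sampler = lambda n: rng.normal(size=n)

e_small = mc_error(f, sampler, 100)
e_large = mc_error(f, sampler, 10_000)
# e_large should be roughly 10x smaller than e_small (1/sqrt(N) rate)
```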
Particle Filtering for Partially Observed Gaussian State Space Models
 J. R. Statist. Soc. B
, 2002
"... this paper, we shall concentrate on the following class of state space models.Let t 1; 2;... denote discrete time: then t x t1 B t v t F t u t ;x 0 .x 0 ;P 0 /; .1/ C t x t t " t t u t ;.2/ y t /;. 3/ where u t n u is an exogenous process and x t n x and y t n y ..."
Abstract

Cited by 72 (7 self)
 Add to MetaCart
this paper, we shall concentrate on the following class of state space models. Let t = 1, 2, ... denote discrete time: then

x_t = A_t x_{t-1} + B_t v_t + F_t u_t,   x_0 ~ N(x̄_0, P_0),   (1)
y_t = C_t x_t + D_t ε_t + G_t u_t,   (2)
z_t ~ p(z_t | y_t),   (3)

where u_t ∈ R^{n_u} is an exogenous process and x_t ∈ R^{n_x} and y_t ∈ R^{n_y} are unobserved processes. The sequences v_t ~ N(0, I_{n_v}) and ε_t ~ N(0, I_{n_ε}) are independent identically distributed (IID) Gaussian. We assume that P_0 > 0, that x_0, v_t and ε_t are mutually independent for all t, and that the model parameters θ = (x̄_0, P_0, A_t, B_t, C_t, D_t, F_t, G_t; t = 1, 2, ...) are known. The processes (x_t) and (y_t) define a standard linear Gaussian state space model. We do not observe (y_t) in our case, but (z_t). The observations (z_t) are conditionally independent given the processes (x_t) and (y_t) and marginally distributed according to p(z_t | y_t); it is assumed that p(z_t | y_t) can be evaluated pointwise up to a normalizing constant. Typically p(z_t | y_t) belongs to the exponential family. Alternatively, z_t may be a censored or quantized version of y_t. This class of partially observed Gaussian state space models has numerous applications; many examples are discussed for instance in de Jong (1997), Manrique and Shephard (1998) and West and Harrison (1997). We want to estimate sequentially in time some characteristics of the posterior distribution p(x_t | z_{1:t}). Typically, we are interested in computing E(x_t | z_{1:t}) (filtering), E(x_{t+L} | z_{1:t}) (prediction) and E(x_{t-L} | z_{1:t}) (fixed-lag smoothing), where L is a positive integer. These estimates do not in general admit analytical expressions and we must resort to numerical methods.
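The three-layer structure just described (a latent linear Gaussian state x_t, a latent Gaussian process y_t that is never observed, and an actual observation z_t drawn from p(z_t | y_t)) can be simulated directly. A scalar sketch with all system matrices set to 1, no exogenous input, and a Poisson observation layer; these simplifications and the 0.1 link scale are our own, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)

T = 100
x = np.cumsum(rng.normal(size=T))   # random-walk latent state x_t
y = x + rng.normal(size=T)          # latent Gaussian "observation" y_t, never seen
z = rng.poisson(np.exp(0.1 * y))    # observed counts z_t ~ p(z_t | y_t),
                                    # exponential link keeps the rate positive
```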
A Generative Model for Music Transcription
, 2005
"... In this paper we present a graphical model for polyphonic music transcription. Our model, formulated as a Dynamical Bayesian Network, embodies a transparent and computationally tractable approach to this acoustic analysis problem. An advantage of our approach is that it places emphasis on explicitl ..."
Abstract

Cited by 68 (18 self)
 Add to MetaCart
(Show Context)
In this paper we present a graphical model for polyphonic music transcription. Our model, formulated as a dynamic Bayesian network, embodies a transparent and computationally tractable approach to this acoustic analysis problem. An advantage of our approach is that it places emphasis on explicitly modelling the sound generation procedure. It provides a clear framework in which high-level (cognitive) prior information on music structure can be coupled with low-level (acoustic, physical) information in a principled manner to perform the analysis. The model is a special case of the generally intractable switching Kalman filter model. Where possible, we derive exact polynomial-time inference procedures, and otherwise efficient approximations. We argue that our generative-model-based approach is computationally feasible for many music applications and is readily extensible to more general auditory scene analysis scenarios.
Particle Filters for State Space Models With the Presence of Static Parameters
, 2002
"... In this paper particle filters for dynamic state space models handling unknown static parameters are discussed. The approach is based on marginalizing the static parameters out of the posterior distribution such that only the state vector needs to be considered. Such a marginalization can always be ..."
Abstract

Cited by 63 (0 self)
 Add to MetaCart
In this paper, particle filters for dynamic state space models with unknown static parameters are discussed. The approach is based on marginalizing the static parameters out of the posterior distribution so that only the state vector needs to be considered. Such a marginalization can always be applied. However, real-time applications are only possible when the distribution of the unknown parameters, given both the observations and the hidden state vector, depends on some low-dimensional sufficient statistics. Such sufficient statistics are present in many of the commonly used state space models. Marginalizing the static parameters avoids the impoverishment problem which typically occurs when static parameters are included as part of the state vector. The filters are tested on several different models, with promising results.
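The sufficient-statistics idea can be sketched on a toy model of our own choosing (not one of the paper's examples): a random-walk state x_t = x_{t-1} + v_t, v_t ~ N(0, 1), observed as y_t = x_t + e_t with e_t ~ N(0, σ²), where σ² is the unknown static parameter. With an inverse-gamma prior, σ² given (x_{1:t}, y_{1:t}) depends only on the low-dimensional statistics (a, b), so each particle carries its own (a, b) and σ² is integrated out via a Student-t predictive:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def marginalized_pf(y, n=300, a0=2.0, b0=2.0):
    T = len(y)
    x = rng.normal(size=n)
    a = np.full(n, a0); b = np.full(n, b0)  # per-particle sufficient statistics
    means = np.zeros(T)
    for t in range(T):
        if t > 0:
            x = x + rng.normal(size=n)      # random-walk state transition
        # predictive of y_t with sigma^2 integrated out: Student-t
        logw = stats.t.logpdf(y[t], df=2 * a, loc=x, scale=np.sqrt(b / a))
        w = np.exp(logw - logw.max()); w /= w.sum()
        means[t] = np.sum(w * x)
        # update sufficient statistics with the new residual
        a = a + 0.5
        b = b + 0.5 * (y[t] - x) ** 2
        idx = rng.choice(n, size=n, p=w)    # resample particles AND their stats
        x, a, b = x[idx], a[idx], b[idx]
    return means

x_true = np.cumsum(rng.normal(size=60))
y = x_true + np.sqrt(2.0) * rng.normal(size=60)
means = marginalized_pf(y)
```

Note that only (a, b) are propagated, never samples of σ² itself, which is what avoids the impoverishment described above.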
Bayesian Modelling of Inseparable Space-Time Variation in Disease Risk
, 1998
"... This paper proposes a unified framework for the analysis of incidence or mortality data in space and time. The problem with such analysis is that the number of cases and the corresponding population at risk in any single unit of space \Theta time are too small to produce a reliable estimate of the u ..."
Abstract

Cited by 53 (2 self)
 Add to MetaCart
This paper proposes a unified framework for the analysis of incidence or mortality data in space and time. The problem with such analysis is that the number of cases and the corresponding population at risk in any single unit of space × time are too small to produce a reliable estimate of the underlying disease risk without "borrowing strength" from neighbouring cells. The goal here could be described as one of smoothing, in which both spatial and non-spatial considerations may arise, and spatiotemporal interactions may become an important feature. Based on an extended version of the main effects model proposed in Knorr-Held and Besag (1998), four generic types of space × time interactions are introduced. Each type implies a certain degree of prior (in)dependence for interaction parameters, and corresponds to the product of one of the two spatial main effects with one of the two temporal main effects. Data analysis is implemented via Markov chain Monte Carlo methods. The methodology is illustrated by an analysis of Ohio lung cancer data 1968–88. We compare the fit and the complexity of each model by the DIC criterion, recently proposed in Spiegelhalter et al. (1998).
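The "product of main effects" construction above corresponds to taking Kronecker products of spatial and temporal structure matrices (identity for an unstructured effect, a neighbourhood/precision matrix for a structured one). A sketch of one such interaction type, pairing a first-order random-walk temporal structure with an unstructured (identity) spatial component; the dimensions are arbitrary toy values:

```python
import numpy as np

def rw1_structure(n):
    """Structure (precision) matrix of a first-order random walk."""
    R = 2 * np.eye(n)
    R[0, 0] = R[-1, -1] = 1
    R -= np.eye(n, k=1) + np.eye(n, k=-1)
    return R

n_space, n_time = 5, 4
R_time = rw1_structure(n_time)   # structured temporal main effect
I_space = np.eye(n_space)        # unstructured spatial main effect

# Structure matrix for the space x time interaction parameters:
K_interaction = np.kron(I_space, R_time)
```

Swapping in a spatial neighbourhood matrix for `I_space`, or an identity for `R_time`, yields the other interaction types, each implying a different prior (in)dependence pattern.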
Markov chain Monte Carlo for dynamic generalised linear models
, 1998
"... This paper presents a new methodological approach for carrying out Bayesian inference about dynamic models for exponential family observations. The approach is simulationbased and involves the use of Markov chain Monte Carlo techniques. A MetropolisHastings algorithm is combined with the Gibbs samp ..."
Abstract

Cited by 44 (2 self)
 Add to MetaCart
This paper presents a new methodological approach for carrying out Bayesian inference about dynamic models for exponential family observations. The approach is simulation-based and involves the use of Markov chain Monte Carlo techniques. A Metropolis-Hastings algorithm is combined with the Gibbs sampler in repeated use of an adjusted version of normal dynamic linear models. Different alternative schemes based on sampling from the system disturbances and state parameters separately and in a block are derived and compared. The approach is fully Bayesian in obtaining posterior samples with state parameters and unknown hyperparameters. Illustrations with real datasets with sparse counts and missing values are presented. Extensions to accommodate more general evolution forms and distributions for observations and disturbances are outlined.
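The Metropolis-Hastings ingredient of such a scheme reduces, for a single parameter, to a random-walk accept/reject step like the one below. The target (the posterior of a Poisson log-rate) and all tuning values are our own toy choices, standing in for the paper's block updates of states within a Gibbs sweep:

```python
import numpy as np

rng = np.random.default_rng(6)
y = rng.poisson(3.0, size=40)  # simulated exponential-family (Poisson) counts

def log_post(theta):
    # log-posterior of theta = log(rate), flat prior
    return np.sum(y) * theta - len(y) * np.exp(theta)

theta = 0.0
draws = []
for _ in range(5000):
    prop = theta + 0.1 * rng.normal()  # random-walk proposal
    # accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    draws.append(theta)

post_mean = np.mean(draws[1000:])      # discard burn-in
```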
Improvement Strategies for Monte Carlo Particle Filters
 SEQUENTIAL MONTE CARLO METHODS IN PRACTICE
, 2000
"... ..."
Iterative Algorithms for State Estimation of Jump Markov Linear Systems
, 1999
"... Jump Markov linear systems (JMLSs) are linear systems whose parameters evolve with time according to a finite state Markov chain. Given a set of observations, our aim is to estimate the states of the finite state Markov chain and the continuous (in space) states of the linear system. ..."
Abstract

Cited by 37 (6 self)
 Add to MetaCart
Jump Markov linear systems (JMLSs) are linear systems whose parameters evolve with time according to a finite-state Markov chain. Given a set of observations, our aim is to estimate the states of the finite-state Markov chain and the continuous (in space) states of the linear system.
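A minimal JMLS can be written down directly: a scalar AR(1) whose coefficient switches with a two-state Markov chain. The regime transition matrix, AR coefficients, and noise scales below are illustrative values of our own, not from the paper; both r_{1:T} and x_{1:T} would be the targets of the estimation algorithms described above:

```python
import numpy as np

rng = np.random.default_rng(4)

P = np.array([[0.95, 0.05],    # regime transition matrix of the Markov chain
              [0.10, 0.90]])
a = np.array([0.5, -0.8])      # per-regime AR coefficients (the "jumping" parameter)

T = 200
r = np.zeros(T, dtype=int)     # discrete regime sequence r_t
x = np.zeros(T)                # continuous state x_t
for t in range(1, T):
    r[t] = rng.choice(2, p=P[r[t - 1]])        # finite-state Markov chain
    x[t] = a[r[t]] * x[t - 1] + rng.normal()   # regime-dependent linear dynamics
ys = x + 0.5 * rng.normal(size=T)              # noisy observations
```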