On Sequential Monte Carlo Sampling Methods for Bayesian Filtering
Statistics and Computing, 2000
Cited by 1051 (76 self)
In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed that unifies many of the methods which have been proposed over the last few decades in several different scientific disciplines. Novel extensions to the existing methods are also proposed. We show in particular how to incorporate local linearisation methods similar to those which have previously been employed in the deterministic filtering literature; these lead to very effective importance distributions. Furthermore we describe a method which uses Rao-Blackwellisation in order to take advantage of the analytic structure present in some important classes of state-space models. In a final section we develop algorithms for prediction, smoothing and evaluation of the likelihood in dynamic models.
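The sequential importance sampling framework surveyed above can be illustrated with a minimal bootstrap particle filter sketch. The scalar model below (AR(1) state, Gaussian observation noise) and all function names are illustrative choices, not taken from the paper; this is a sketch of the general idea, not the authors' algorithm.

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles=500, seed=0):
    """Bootstrap particle filter sketch for the illustrative scalar model
    x_t = 0.5 * x_{t-1} + v_t,  y_t = x_t + w_t,  v_t, w_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)  # particles drawn from the prior
    means = []
    for y in ys:
        # Propagate particles through the transition prior (the "bootstrap" proposal).
        x = 0.5 * x + rng.normal(0.0, 1.0, n_particles)
        # Importance weights proportional to the Gaussian likelihood p(y | x).
        logw = -0.5 * (y - x) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.dot(w, x)))  # filtered posterior-mean estimate
        # Multinomial resampling to combat weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x = x[idx]
    return means
```

The locally linearised proposals and Rao-Blackwellisation discussed in the abstract refine exactly the proposal and weighting steps shown here.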
On sequential simulation-based methods for Bayesian filtering
1998
Cited by 251 (12 self)
In this report, we present an overview of sequential simulation-based methods for Bayesian filtering of nonlinear and non-Gaussian dynamic models. It brings together, within a general framework, numerous methods proposed independently in various areas of science, and proposes some original developments.
Convergence of Sequential Monte Carlo Methods
Sequential Monte Carlo Methods in Practice, 2000
Cited by 243 (13 self)
Bayesian estimation problems where the posterior distribution evolves over time through the accumulation of data arise in many applications in statistics and related fields. Recently, a large number of algorithms and applications based on sequential Monte Carlo methods (also known as particle filtering methods) have appeared in the literature to solve this class of problems; see (Doucet, de Freitas & Gordon, 2001) for a survey. However, few of these methods have been proved to converge rigorously. The purpose of this paper is to address this issue. We present a general sequential Monte Carlo (SMC) method which includes most of the important features present in current SMC methods. This method generalizes and encompasses many recent algorithms. Under mild regularity conditions, we obtain rigorous convergence results for this general SMC method and therefore give theoretical backing for the validity of all the algorithms that can be obtained as particular cases of it.
Particle Filters for State Estimation of Jump Markov Linear Systems
2001
Cited by 177 (15 self)
Jump Markov linear systems (JMLS) are linear systems whose parameters evolve with time according to a finite state Markov chain. In this paper, our aim is to recursively compute optimal state estimates for this class of systems. We present efficient simulation-based algorithms called particle filters to solve the optimal filtering problem as well as the optimal fixed-lag smoothing problem. Our algorithms combine sequential importance sampling, a selection scheme, and Markov chain Monte Carlo methods. They use several variance reduction methods to make the most of the statistical structure of JMLS. Computer ...
Parameter Estimation in General State-Space Models using Particle Methods
Annals of the Institute of Statistical Mathematics, 2003
Cited by 62 (10 self)
Particle filtering techniques are a set of powerful and versatile simulation-based methods to perform optimal state estimation in nonlinear non-Gaussian state-space models. If the model includes fixed parameters, a standard technique to perform parameter estimation consists of extending the state with the parameter to transform the problem into an optimal filtering problem. However, this approach requires the use of special particle filtering techniques which suffer from several drawbacks. We consider here an alternative approach combining particle filtering and gradient algorithms to perform batch and recursive maximum likelihood parameter estimation. An original particle method is presented to implement these approaches and their ...
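The state-augmentation technique that the abstract describes (and whose drawbacks motivate the paper) can be sketched as follows: append the unknown parameter to the state vector and filter the pair jointly, adding a small artificial jitter to the parameter particles to delay degeneracy. The toy model (AR coefficient theta as the unknown parameter), the jitter size, and all names are illustrative assumptions, not the authors' method.

```python
import numpy as np

def augmented_particle_filter(ys, n_particles=1000, jitter=0.01, seed=0):
    """State-augmentation sketch for the illustrative model
    x_t = theta * x_{t-1} + v_t,  y_t = x_t + w_t,  v_t, w_t ~ N(0, 1),
    where the fixed parameter theta is filtered alongside the state."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)
    theta = rng.uniform(-1.0, 1.0, n_particles)  # parameter particles (prior)
    for y in ys:
        # Artificial dynamics on theta: without this jitter the parameter
        # particles collapse to a few values after repeated resampling.
        theta = theta + rng.normal(0.0, jitter, n_particles)
        x = theta * x + rng.normal(0.0, 1.0, n_particles)
        logw = -0.5 * (y - x) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x, theta = x[idx], theta[idx]
    return float(np.mean(theta))  # crude posterior-mean estimate of theta
```

The jitter biases the parameter posterior, which is one of the drawbacks the abstract alludes to; the paper's gradient-based maximum likelihood approach avoids augmenting the state at all.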
Maximum A Posteriori Sequence Estimation Using Monte Carlo Particle Filters
2001
Cited by 49 (4 self)
We develop methods for performing maximum a posteriori (MAP) sequence estimation in nonlinear non-Gaussian dynamic models. The methods rely on a particle cloud representation of the filtering distribution which evolves through time using importance sampling and resampling ideas. MAP sequence estimation is then performed using a classical dynamic programming technique applied to the discretised version of the state space. In contrast with standard approaches to the problem, which essentially compare only the trajectories generated directly during the filtering stage, our method efficiently computes the optimal trajectory over all combinations of the filtered states. A particular strength of the method is that MAP sequence estimation is performed sequentially in a single forward pass through the data, without the requirement of an additional backward sweep. An application to estimation of a nonlinear time series model and to spectral estimation for time-varying autoregressions is described.
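The dynamic-programming step described above is Viterbi-style maximisation over the grid formed by the particle locations at each time step. A minimal sketch, assuming the particle clouds come from a prior filtering pass and that `log_lik` and `log_trans` are user-supplied log-densities (all names illustrative, not the paper's notation):

```python
import numpy as np

def particle_map_sequence(particles, log_lik, log_trans):
    """Viterbi-style dynamic programming over a particle grid.

    particles: list of T arrays of particle locations (one array per time step);
    log_lik(t, x): log p(y_t | x_t = x) (prior on x_0 folded in at t = 0);
    log_trans(xp, x): log p(x_t = x | x_{t-1} = xp).
    Returns the highest-scoring path over all combinations of particle locations.
    """
    T = len(particles)
    delta = np.array([log_lik(0, x) for x in particles[0]])  # best score ending at each particle
    back = []  # backpointers, one array per step t >= 1
    for t in range(1, T):
        new_delta = np.empty(len(particles[t]))
        bp = np.empty(len(particles[t]), dtype=int)
        for j, x in enumerate(particles[t]):
            scores = delta + np.array([log_trans(xp, x) for xp in particles[t - 1]])
            bp[j] = int(np.argmax(scores))           # best predecessor for particle j
            new_delta[j] = scores[bp[j]] + log_lik(t, x)
        delta = new_delta
        back.append(bp)
    # Backtrack from the best final particle to recover the MAP path.
    idx = [int(np.argmax(delta))]
    for bp in reversed(back):
        idx.append(int(bp[idx[-1]]))
    idx.reverse()
    return [particles[t][idx[t]] for t in range(T)]
```

With N particles per step this inner loop costs O(N^2) per time step, which is the price of comparing all combinations of filtered states rather than only the trajectories generated during filtering.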
Uncertainty assessment of hydrologic model states and parameters: Sequential data assimilation using the particle filter
Water Resour. Res., 2005
Cited by 47 (5 self)
Sequential Monte Carlo Inference of Internal Delays in Nonstationary Communication Networks
2001
Cited by 32 (6 self)
Online, spatially localized information about internal network performance can greatly assist dynamic routing algorithms and traffic transmission protocols. However, it is impractical to measure network traffic at all points in the network. A promising alternative is to measure only at the edge of the network and infer internal behavior from these measurements. In this paper we concentrate on the estimation and localization of internal delays based on end-to-end delay measurements from a source to receivers. We propose a sequential Monte Carlo (SMC) procedure capable of tracking nonstationary network behavior and estimating time-varying, internal delay characteristics. Simulation experiments demonstrate the performance of the SMC approach.

1 Introduction

In large-scale networks, end-systems cannot rely on the network itself to cooperate in characterizing its own behavior. This has prompted several groups to investigate methods for inferring internal network behavior based on ...
Convolution particle filters for parameter estimation in general state-space models
2009
Cited by 22 (0 self)
Research report (rapport de recherche), ISSN 0249-6399.
Online Expectation-Maximization Type Algorithms For Parameter Estimation In General State Space Models
Proc. IEEE ICASSP, 2003
Cited by 21 (2 self)
In this paper we present new online algorithms to estimate static parameters in nonlinear non-Gaussian state space models. These algorithms rely on online Expectation-Maximization (EM) type algorithms. Contrary to standard Sequential Monte Carlo (SMC) methods recently proposed in the literature, these algorithms do not degenerate over time.