Results 1–10 of 306
Central limit theorem for sequential Monte Carlo methods and its application to Bayesian inference
 Ann. Statist.
Abstract

Cited by 146 (4 self)
“Particle filters” refers to a general class of iterative algorithms that perform Monte Carlo approximations of a given sequence of distributions of interest (πt). We establish in this paper a central limit theorem for the Monte Carlo estimates produced by these computational methods. This result holds under minimal assumptions on the distributions πt, and applies in a general framework which encompasses most of the sequential Monte Carlo methods that have been considered in the literature, including the resample-move algorithm of Gilks and Berzuini [J. R. Stat. Soc. Ser. B Stat. Methodol. 63 (2001) 127–146] and the residual resampling scheme. The corresponding asymptotic variances provide a convenient measurement of the precision of a given particle filter. We study, in particular, in some typical examples of Bayesian applications, whether and at which rate these asymptotic variances diverge in time, in order to assess the long-term reliability of the considered algorithm.
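The Monte Carlo estimates whose asymptotic variance the theorem concerns can be illustrated with a minimal bootstrap particle filter. The model below (an AR(1) state observed in Gaussian noise), the parameter values, and the function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles=2000, phi=0.9,
                              sigma_x=1.0, sigma_y=1.0, seed=0):
    """Minimal bootstrap particle filter for the toy model
    x_t = phi * x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2).
    Returns the sequence of weighted-mean estimates of E[x_t | y_{1:t}]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x, size=n_particles)  # initial particle cloud
    estimates = []
    for y in ys:
        # Propagate particles through the state dynamics (mutation step).
        x = phi * x + rng.normal(0.0, sigma_x, size=n_particles)
        # Importance weights from the Gaussian observation likelihood.
        logw = -0.5 * ((y - x) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Weighted Monte Carlo estimate of the filtering mean.
        estimates.append(float(np.sum(w * x)))
        # Multinomial resampling to counter weight degeneracy.
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return estimates
```

The per-step estimates in `estimates` are exactly the kind of Monte Carlo output whose asymptotic variance (and its possible growth in time) the paper analyzes.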
Evaluation methods for topic models
 In ICML
, 2009
Abstract

Cited by 101 (10 self)
A natural evaluation metric for statistical topic models is the probability of held-out documents given a trained model. While exact computation of this probability is intractable, several estimators for this probability have been used in the topic modeling literature, including the harmonic mean method and the empirical likelihood method. In this paper, we demonstrate experimentally that commonly-used methods are unlikely to accurately estimate the probability of held-out documents, and propose two alternative methods that are both accurate and efficient.
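The harmonic mean method that the abstract identifies as unreliable can be sketched generically: given per-sample log-likelihoods of posterior draws, it estimates the log marginal likelihood as minus the log of the mean inverse likelihood. The function name and interface are hypothetical, and the estimator's instability in practice is exactly what the paper documents:

```python
import numpy as np

def harmonic_mean_log_evidence(log_liks):
    """Harmonic-mean estimator of log p(y) from log p(y | theta_s)
    for posterior draws theta_s:
        log p(y) ~= -log mean_s [ 1 / p(y | theta_s) ],
    computed stably in log space by subtracting the max."""
    log_liks = np.asarray(log_liks, dtype=float)
    n = log_liks.size
    m = (-log_liks).max()
    # log of the mean of exp(-log_liks), shifted by m for stability
    log_mean_inv = m + np.log(np.exp(-log_liks - m).sum()) - np.log(n)
    return -log_mean_inv
```

Because the mean of inverse likelihoods is dominated by the rare low-likelihood draws, the estimator can have infinite variance, which is one motivation for the alternative estimators the paper proposes.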
Fast automatic heart chamber segmentation from 3D CT data using marginal space learning and steerable features
 In Proc. ICCV
Abstract

Cited by 56 (23 self)
Multi-chamber heart segmentation is a prerequisite for global quantification of the cardiac function. The complexity of cardiac anatomy, poor contrast, noise, and motion artifacts make this segmentation problem a challenging task. In this paper, we present an efficient, robust, and fully automatic segmentation method for 3D cardiac computed tomography (CT) volumes. Our approach is based on recent advances in learning discriminative object models and exploits a large database of annotated CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. A novel algorithm, Marginal Space Learning (MSL), is introduced to solve the 9-dimensional similarity search problem for localizing the heart chambers. MSL reduces the number of testing hypotheses by about six orders of magnitude. We also propose to use steerable image features, which incorporate the orientation and scale information into the distribution of sampling points, thus avoiding time-consuming volume data rotation operations. After determining the similarity transformation of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. Extensive experiments on multi-chamber heart segmentation demonstrate the efficiency and robustness of the proposed approach, which compares favorably to the state-of-the-art. This is the first study reporting stable results on a large cardiac CT dataset with 323 volumes. In addition, we achieve a speed of less than eight seconds for automatic segmentation of all four chambers.
Practical Filtering with Sequential Parameter Learning
, 2003
Abstract

Cited by 39 (8 self)
In this paper we develop a general simulation-based approach to filtering and sequential parameter learning. We begin with an algorithm for filtering in a general dynamic state space model and then extend this to incorporate sequential parameter learning. The key idea is to express the filtering distribution as a mixture of lag-smoothing distributions and to implement this sequentially. Our approach has a number of advantages over current methodologies. First, it allows for sequential parameter learning where sequential importance sampling approaches have difficulties.
Reinforcement learning with limited reinforcement: Using Bayes risk for active learning in POMDPs
 In ISAIM (online proceedings)
, 2008
Abstract

Cited by 37 (8 self)
Partially Observable Markov Decision Processes (POMDPs) have succeeded in planning domains that require balancing actions that increase an agent’s knowledge and actions that increase an agent’s reward. Unfortunately, most POMDPs are defined with a large number of parameters which are difficult to specify from domain knowledge alone. In this paper, we present an approximation approach that allows us to treat the POMDP model parameters as additional hidden state in a “model-uncertainty” POMDP. Coupled with model-directed queries, our planner actively learns good policies. We demonstrate our approach on several POMDP problems.
Efficient block sampling strategies for sequential Monte Carlo
 Journal of Computational and Graphical Statistics
, 2006
Abstract

Cited by 37 (7 self)
Sequential Monte Carlo (SMC) methods are a powerful set of simulation-based techniques for sampling sequentially from a sequence of complex probability distributions. These methods rely on a combination of importance sampling and resampling techniques. In a Markov chain Monte Carlo (MCMC) framework, block sampling strategies often perform much better than algorithms based on one-at-a-time sampling strategies if “good” proposal distributions to update blocks of variables can be designed. In an SMC framework, standard algorithms sequentially sample the variables one at a time whereas, as in MCMC, the efficiency of algorithms could be improved significantly by using block sampling strategies. Unfortunately, a direct implementation of such strategies is impossible as it requires the knowledge of integrals which do not admit closed-form expressions. This article introduces a new methodology which bypasses this problem and is a natural extension of standard SMC methods. Applications to several sequential Bayesian inference problems demonstrate these methods.
Time series analysis via mechanistic models
 In review; pre-published at arxiv.org/abs/0802.0021
, 2008
Abstract

Cited by 36 (10 self)
The purpose of time series analysis via mechanistic models is to reconcile the known or hypothesized structure of a dynamical system with observations collected over time. We develop a framework for constructing nonlinear mechanistic models and carrying out inference. Our framework permits the consideration of implicit dynamic models, meaning statistical models for stochastic dynamical systems which are specified by a simulation algorithm to generate sample paths. Inference procedures that operate on implicit models are said to have the plug-and-play property. Our work builds on recently developed plug-and-play inference methodology for partially observed Markov models. We introduce a class of implicitly specified Markov chains with stochastic transition rates, and we demonstrate its applicability to open problems in statistical inference for biological systems. As one example, these models are shown to give a fresh perspective on measles transmission dynamics. As a second example, we present a mechanistic analysis of cholera incidence data, involving interaction between two competing strains of the pathogen Vibrio cholerae.
A survey of sequential Monte Carlo methods for economics and finance
, 2009
Abstract

Cited by 34 (7 self)
This paper serves as an introduction and survey for economists to the field of sequential Monte Carlo methods, which are also known as particle filters. Sequential Monte Carlo methods are simulation-based algorithms used to compute the high-dimensional and/or complex integrals that arise regularly in applied work. These methods are becoming increasingly popular in economics and finance, from dynamic stochastic general equilibrium models in macroeconomics to option pricing. The objective of this paper is to explain the basics of the methodology, provide references to the literature, and cover some of the theoretical results that justify the methods in practice.
Computational Methods for Complex Stochastic Systems: A Review of Some Alternatives to MCMC
Abstract

Cited by 34 (5 self)
We consider analysis of complex stochastic models based upon partial information. MCMC and reversible jump MCMC are often the methods of choice for such problems, but in some situations they can be difficult to implement, and suffer from problems such as poor mixing and the difficulty of diagnosing convergence. Here we review three alternatives to MCMC methods: importance sampling, the forward-backward algorithm, and sequential Monte Carlo (SMC). We discuss how to design good proposal densities for importance sampling, show some of the range of models for which the forward-backward algorithm can be applied, and show how resampling ideas from SMC can be used to improve the efficiency of the other two methods. We demonstrate these methods on a range of examples, including estimating the transition density of a diffusion and of a discrete-state continuous-time Markov chain; inferring structure in population genetics; and segmenting genetic divergence data.
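The forward-backward algorithm reviewed here can be sketched for the simplest case, a discrete hidden Markov model with finitely many states and observation symbols (the review itself covers a broader range of models); the matrices and function name below are illustrative assumptions:

```python
import numpy as np

def forward_backward(pi0, A, B, obs):
    """Forward-backward smoothing for a discrete HMM.
    pi0: initial state distribution, shape (K,)
    A:   transition matrix, A[i, j] = P(z_t = j | z_{t-1} = i)
    B:   emission matrix,   B[k, m] = P(y_t = m | z_t = k)
    obs: list of observed symbol indices of length T
    Returns smoothed marginals P(z_t | y_{1:T}), shape (T, K)."""
    T, K = len(obs), len(pi0)
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    # Forward pass, normalized at each step for numerical stability.
    alpha[0] = pi0 * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    # Backward pass.
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    # Combine and renormalize to get the smoothed marginals.
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```

The normalized forward pass alone gives the filtering distributions; combining it with the backward pass yields the smoothed marginals that exact inference provides where it is tractable.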