Results 1–10 of 303
Convergence of Sequential Monte Carlo Methods
Sequential Monte Carlo Methods in Practice
, 2000
Abstract

Cited by 243 (13 self)
Bayesian estimation problems where the posterior distribution evolves over time through the accumulation of data arise in many applications in statistics and related fields. Recently, a large number of algorithms and applications based on sequential Monte Carlo methods (also known as particle filtering methods) have appeared in the literature to solve this class of problems; see (Doucet, de Freitas & Gordon, 2001) for a survey. However, few of these methods have been proved to converge rigorously. The purpose of this paper is to address this issue. We present a general sequential Monte Carlo (SMC) method which includes most of the important features present in current SMC methods. This method generalizes and encompasses many recent algorithms. Under mild regularity conditions, we obtain rigorous convergence results for this general SMC method and therefore give theoretical backing for the validity of all the algorithms that can be obtained as particular cases of it.
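The basic algorithm this convergence analysis covers can be sketched as follows — a minimal bootstrap particle filter for a toy linear-Gaussian random-walk model (the model, parameters, and function name are illustrative assumptions, not taken from the paper):

```python
import math
import random

def bootstrap_particle_filter(observations, n_particles=500,
                              process_sd=1.0, obs_sd=1.0, seed=0):
    """Minimal bootstrap particle filter for the toy model
    x_t = x_{t-1} + N(0, process_sd^2),  y_t = x_t + N(0, obs_sd^2).
    Returns the sequence of filtering-mean estimates."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # Propagate each particle through the state transition.
        particles = [x + rng.gauss(0.0, process_sd) for x in particles]
        # Weight by the Gaussian observation likelihood (up to a constant).
        weights = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Filtering mean estimate under the weighted particle approximation.
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means
```

The convergence results in the paper concern exactly this kind of estimate: as `n_particles` grows, the weighted averages converge to the true filtering expectations under mild regularity conditions.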
Central limit theorem for sequential Monte Carlo methods and its application to Bayesian inference
Ann. Statist.
Abstract

Cited by 142 (4 self)
“Particle filters” refers to a general class of iterative algorithms that perform Monte Carlo approximations of a given sequence of distributions of interest (πt). We establish in this paper a central limit theorem for the Monte Carlo estimates produced by these computational methods. This result holds under minimal assumptions on the distributions πt, and applies in a general framework which encompasses most of the sequential Monte Carlo methods that have been considered in the literature, including the resample-move algorithm of Gilks and Berzuini [J. R. Stat. Soc. Ser. B Stat. Methodol. 63 (2001) 127–146] and the residual resampling scheme. The corresponding asymptotic variances provide a convenient measure of the precision of a given particle filter. We study, in particular, in some typical examples of Bayesian applications, whether and at which rate these asymptotic variances diverge in time, in order to assess the long-term reliability of the algorithm considered.
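The √n scaling that such a central limit theorem delivers can be checked empirically on a much simpler Monte Carlo estimator. The sketch below (my own toy illustration, not the paper's setting) uses self-normalized importance sampling for E[X] under an N(0, 1) target with an N(0, 2) proposal, and estimates the variance of the estimator at two sample sizes:

```python
import math
import random

def snis_mean(n, rng):
    """Self-normalized importance sampling estimate of E[X] for X ~ N(0, 1),
    using an N(0, 2) proposal. The CLT gives sqrt(n)-rate convergence."""
    xs = [rng.gauss(0.0, math.sqrt(2.0)) for _ in range(n)]
    # Importance weight = target density / proposal density, up to a constant:
    # exp(-x^2/2) / exp(-x^2/4) = exp(-x^2/4).
    ws = [math.exp(-0.25 * x * x) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

def empirical_variance(n, reps=500, seed=0):
    """Variance of the SNIS estimator over independent replications."""
    rng = random.Random(seed)
    ests = [snis_mean(n, rng) for _ in range(reps)]
    mean = sum(ests) / reps
    return sum((e - mean) ** 2 for e in ests) / reps
```

Under the CLT the variance should shrink roughly like 1/n, so quadrupling n should cut the empirical variance by about a factor of four. The paper establishes the analogous (much harder) result for the sequentially reweighted and resampled estimates of a particle filter.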
Evaluation methods for topic models
 In ICML
, 2009
Abstract

Cited by 111 (10 self)
A natural evaluation metric for statistical topic models is the probability of held-out documents given a trained model. While exact computation of this probability is intractable, several estimators for this probability have been used in the topic modeling literature, including the harmonic mean method and empirical likelihood method. In this paper, we demonstrate experimentally that commonly-used methods are unlikely to accurately estimate the probability of held-out documents, and propose two alternative methods that are both accurate and efficient.
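The unreliability of the harmonic mean estimator is easy to see even outside topic models. The sketch below (my own toy construction, not one of the paper's experiments) compares a simple prior Monte Carlo estimator against the harmonic mean estimator on a conjugate normal-normal model where the exact marginal likelihood is known in closed form:

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def estimators(y, n=50_000, seed=0):
    """Estimate the marginal likelihood p(y) for the conjugate model
    theta ~ N(0, 1), y | theta ~ N(theta, 1). Exact: y ~ N(0, 2)."""
    rng = random.Random(seed)
    # Prior Monte Carlo estimator: average p(y | theta) over prior draws.
    mc = sum(normal_pdf(y, rng.gauss(0.0, 1.0), 1.0) for _ in range(n)) / n
    # Harmonic mean estimator, using draws from the exact posterior N(y/2, 1/2).
    # Its summand 1/p(y|theta) has infinite variance here, so it is unreliable.
    post = [normal_pdf(y, rng.gauss(y / 2, math.sqrt(0.5)), 1.0)
            for _ in range(n)]
    hm = n / sum(1.0 / p for p in post)
    exact = normal_pdf(y, 0.0, math.sqrt(2.0))
    return mc, hm, exact
```

In this model the prior estimator lands close to the exact value, whereas the harmonic mean estimator's infinite variance makes its error erratic across seeds, which is essentially the pathology the paper documents for held-out document probabilities.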
Fast automatic heart chamber segmentation from 3D CT data using marginal space learning and steerable features
 In Proc. ICCV
Abstract

Cited by 56 (23 self)
Multi-chamber heart segmentation is a prerequisite for global quantification of the cardiac function. The complexity of cardiac anatomy, poor contrast, noise, and motion artifacts make this segmentation problem a challenging task. In this paper, we present an efficient, robust, and fully automatic segmentation method for 3D cardiac computed tomography (CT) volumes. Our approach is based on recent advances in learning discriminative object models, and we exploit a large database of annotated CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. A novel algorithm, Marginal Space Learning (MSL), is introduced to solve the 9-dimensional similarity search problem for localizing the heart chambers. MSL reduces the number of testing hypotheses by about six orders of magnitude. We also propose to use steerable image features, which incorporate the orientation and scale information into the distribution of sampling points, thus avoiding the time-consuming volume data rotation operations. After determining the similarity transformation of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. Extensive experiments on multi-chamber heart segmentation demonstrate the efficiency and robustness of the proposed approach, comparing favorably to the state-of-the-art. This is the first study reporting stable results on a large cardiac CT dataset with 323 volumes. In addition, we achieve a speed of less than eight seconds for automatic segmentation of all four chambers.
Practical Filtering with Sequential Parameter Learning
, 2003
Abstract

Cited by 40 (8 self)
In this paper we develop a general simulation-based approach to filtering and sequential parameter learning. We begin with an algorithm for filtering in a general dynamic state space model and then extend this to incorporate sequential parameter learning. The key idea is to express the filtering distribution as a mixture of lag-smoothing distributions and to implement this sequentially. Our approach has a number of advantages over current methodologies. First, it allows for sequential parameter learning where sequential importance sampling approaches have difficulties.
Efficient block sampling strategies for sequential Monte Carlo
 Journal of Computational and Graphical Statistics
, 2006
Abstract

Cited by 39 (7 self)
Sequential Monte Carlo (SMC) methods are a powerful set of simulation-based techniques for sampling sequentially from a sequence of complex probability distributions. These methods rely on a combination of importance sampling and resampling techniques. In a Markov chain Monte Carlo (MCMC) framework, block sampling strategies often perform much better than algorithms based on one-at-a-time sampling strategies if “good” proposal distributions to update blocks of variables can be designed. In an SMC framework, standard algorithms sequentially sample the variables one at a time whereas, as in MCMC, the efficiency of algorithms could be improved significantly by using block sampling strategies. Unfortunately, a direct implementation of such strategies is impossible as it requires the knowledge of integrals which do not admit closed-form expressions. This article introduces a new methodology which bypasses this problem and is a natural extension of standard SMC methods. Applications to several sequential Bayesian inference problems demonstrate these methods.
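The resampling step that standard SMC methods (and the block-sampling extensions above) rely on is usually triggered adaptively. A minimal sketch of the common effective-sample-size criterion (function names and the 0.5 threshold are illustrative assumptions, not from this paper):

```python
import random

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized weights; ranges from 1
    (fully degenerate) to N (uniform weights)."""
    return 1.0 / sum(w * w for w in weights)

def maybe_resample(particles, weights, rng, threshold=0.5):
    """Multinomial resampling, triggered only when the ESS drops below
    threshold * N -- a common criterion in SMC practice."""
    n = len(particles)
    if effective_sample_size(weights) < threshold * n:
        particles = rng.choices(particles, weights=weights, k=n)
        weights = [1.0 / n] * n
    return particles, weights
```

With uniform weights the ESS equals N and no resampling occurs; when a few particles carry most of the weight the ESS collapses toward 1 and resampling resets the weights to uniform.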
Reinforcement learning with limited reinforcement: Using Bayes risk for active learning in POMDPs
, 2008
Abstract

Cited by 39 (8 self)
Partially Observable Markov Decision Processes (POMDPs) have succeeded in planning domains that require balancing actions that increase an agent’s knowledge and actions that increase an agent’s reward. Unfortunately, most POMDPs are defined with a large number of parameters which are difficult to specify from domain knowledge alone. In this paper, we present an approximation approach that allows us to treat the POMDP model parameters as additional hidden state in a “model-uncertainty” POMDP. Coupled with model-directed queries, our planner actively learns good policies. We demonstrate our approach on several POMDP problems.
Supplement to “Time series analysis via mechanistic models”.
 Ann. Appl. Statist., Supporting
, 2008
Abstract

Cited by 36 (10 self)
The purpose of time series analysis via mechanistic models is to reconcile the known or hypothesized structure of a dynamical system with observations collected over time. We develop a framework for constructing nonlinear mechanistic models and carrying out inference. Our framework permits the consideration of implicit dynamic models, meaning statistical models for stochastic dynamical systems which are specified by a simulation algorithm to generate sample paths. Inference procedures that operate on implicit models are said to have the plug-and-play property. Our work builds on recently developed plug-and-play inference methodology for partially observed Markov models. We introduce a class of implicitly specified Markov chains with stochastic transition rates, and we demonstrate its applicability to open problems in statistical inference for biological systems. As one example, these models are shown to give a fresh perspective on measles transmission dynamics. As a second example, we present a mechanistic analysis of cholera incidence data, involving interaction between two competing strains of the pathogen Vibrio cholerae.
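An implicit model in the plug-and-play sense is one defined only through a simulator of its sample paths. A minimal sketch of such a simulator — a chain-binomial SIR epidemic model, chosen here as a generic illustration rather than the specific models used in the paper:

```python
import math
import random

def simulate_sir(beta=0.5, gamma=0.2, s0=990, i0=10, steps=100, seed=0):
    """Chain-binomial SIR epidemic model specified only through this
    simulator -- an 'implicit' model in the plug-and-play sense.
    Returns the path of (susceptible, infective, recovered) counts."""
    rng = random.Random(seed)
    s, i, r = s0, i0, 0
    n = s0 + i0
    path = [(s, i, r)]
    for _ in range(steps):
        # Each susceptible escapes infection with probability exp(-beta*i/n).
        p_inf = 1.0 - math.exp(-beta * i / n)
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        # Each infective recovers with probability 1 - exp(-gamma).
        p_rec = 1.0 - math.exp(-gamma)
        new_rec = sum(rng.random() < p_rec for _ in range(i))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        path.append((s, i, r))
    return path
```

No transition density is ever written down; plug-and-play inference methods (such as particle filters) only require the ability to draw sample paths from a simulator like this one.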