Results 1–5 of 5
Dynamic trees for learning and design
2009
Cited by 17 (8 self)

Abstract
Dynamic regression trees are an attractive option for automatic regression and classification with complicated response surfaces in online application settings. We create a sequential tree model whose state changes in time with the accumulation of new data, and provide particle learning algorithms that allow for efficient online posterior filtering of tree states. A major advantage of tree regression is that it allows for the use of very simple models within each partition. We consider both constant and linear mean functions at the tree leaves, along with multinomial leaves for classification problems, and propose default prior specifications that allow prediction to be integrated over all model parameters conditional on a given tree. Inference is illustrated in some standard nonparametric regression examples, as well as in the setting of sequential experiment design, including both active learning and optimization applications, and in online classification. We detail implementation guidelines and problem-specific methodology for each of these motivating applications. Throughout, it is demonstrated that our practical approach provides better results than commonly used methods at a fraction of the cost.
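The resample-propagate pattern behind particle learning can be illustrated with a deliberately simplified sketch: a scalar random-walk state stands in for the paper's tree states, and every name and constant below is illustrative rather than taken from the paper.

```python
import math
import random

def particle_learning_step(particles, y, obs_sd=1.0, state_sd=0.1):
    """One resample-propagate update: weight each particle by the likelihood
    of the new observation y, resample, then propagate the states forward."""
    weights = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2) for x in particles]
    resampled = random.choices(particles, weights=weights, k=len(particles))
    # Propagate: draw the next state for each surviving particle.
    return [x + random.gauss(0.0, state_sd) for x in resampled]

random.seed(0)
particles = [random.gauss(0.0, 1.0) for _ in range(500)]
for y in [0.2, 0.4, 0.5, 0.7]:  # simulated observation stream
    particles = particle_learning_step(particles, y)
posterior_mean = sum(particles) / len(particles)
```

In the paper the propagated object is an entire tree state rather than a scalar, but the filtering loop has the same shape: weight by the predictive likelihood of the new point, resample, then grow or prune.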
Autoregressive mixture models for dynamic spatial Poisson processes: Application to tracking intensity of violent crime
Journal of the American Statistical Association, 2010
Cited by 9 (2 self)

Abstract
This article develops a set of tools for smoothing and prediction with dependent point event patterns. The methodology is motivated by the problem of tracking weekly maps of violent crime events, but is designed to be straightforward to adapt to a wide variety of alternative settings. In particular, a Bayesian semiparametric framework is introduced for modeling correlated time series of marked spatial Poisson processes. The likelihood is factored into two independent components: the set of total integrated intensities and a series of process densities. For the former, it is assumed that Poisson intensities are realizations from a dynamic linear model. For the latter, a novel class of dependent stick-breaking mixture models is proposed to allow nonparametric density estimates to evolve in discrete time. This simple and flexible new model for dependent random distributions is based on autoregressive time series of marginally beta random variables applied as correlated stick-breaking proportions. The approach allows for marginal Dirichlet process priors at each time point and adds only a single new correlation term to the static model specification. Sequential Monte Carlo algorithms are described for online inference with each model component, and marginal likelihood calculations form the basis for inference about the parameters governing temporal dynamics. Simulated examples are provided to illustrate the methodology, and we close with results for the motivating application of tracking violent crime in Cincinnati.
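The stick-breaking construction the abstract refers to can be sketched as follows. The `evolve_props` update below is a crude AR-style stand-in for the paper's marginally-beta autoregression (it keeps consecutive time points correlated but does not preserve the exact beta marginal); only the standard weight construction w_k = v_k * prod_{j<k}(1 - v_j) is taken as given.

```python
import random

def stick_breaking_weights(props):
    """Turn stick-breaking proportions v_1..v_K into mixture weights
    w_k = v_k * prod_{j<k} (1 - v_j), plus the leftover stick mass."""
    weights, remaining = [], 1.0
    for v in props:
        weights.append(v * remaining)
        remaining *= 1.0 - v
    weights.append(remaining)  # mass left on the final stick segment
    return weights

def evolve_props(props, rho=0.9, alpha=1.0):
    """Hypothetical AR-style update: shrink each proportion toward a fresh
    Beta(1, alpha) draw so successive time points stay correlated."""
    return [rho * v + (1 - rho) * random.betavariate(1.0, alpha) for v in props]

random.seed(1)
props = [random.betavariate(1.0, 1.0) for _ in range(10)]
w_now = stick_breaking_weights(props)
w_next = stick_breaking_weights(evolve_props(props))
```

At each time point the weights still sum to one, so each slice defines a valid (here truncated) mixture distribution; correlation in the proportions is what lets the density estimate evolve smoothly.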
Dynamic clustering via asymptotics of the dependent Dirichlet process
In Advances in Neural Information Processing Systems (NIPS), 2013
Cited by 4 (1 self)

Abstract
This paper presents a novel algorithm, based upon the dependent Dirichlet process mixture model (DDPMM), for clustering batch-sequential data containing an unknown number of evolving clusters. The algorithm is derived via a low-variance asymptotic analysis of the Gibbs sampling algorithm for the DDPMM, and provides a hard clustering with convergence guarantees similar to those of the k-means algorithm. Empirical results from a synthetic test with moving Gaussian clusters and a test with real ADS-B aircraft trajectory data demonstrate that the algorithm requires orders of magnitude less computational time than contemporary probabilistic and hard clustering algorithms, while providing higher accuracy on the examined datasets.
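In the static case, this kind of low-variance asymptotics reduces the DP mixture to DP-means: k-means with a penalty lam for opening a new cluster in place of the Dirichlet process concentration. A minimal one-dimensional sketch of that static version (not the paper's dynamic algorithm, which also handles cluster birth, death, and motion across batches):

```python
def dp_means(points, lam, n_iter=10):
    """One-dimensional DP-means: assign each point to the nearest centre,
    opening a new cluster whenever the nearest centre is farther than lam."""
    centres = [points[0]]
    assignments = []
    for _ in range(n_iter):  # alternate assignment and centre updates
        assignments = []
        for p in points:
            d, k = min((abs(p - c), k) for k, c in enumerate(centres))
            if d > lam:              # too far from every centre: new cluster
                centres.append(p)
                k = len(centres) - 1
            assignments.append(k)
        for k in range(len(centres)):
            members = [p for p, a in zip(points, assignments) if a == k]
            if members:
                centres[k] = sum(members) / len(members)
    return centres, assignments

points = [0.0, 0.1, 0.2, 5.0, 5.1, 4.9]
centres, assignments = dp_means(points, lam=1.0)
```

On this toy input the penalty lam=1.0 yields exactly two clusters, one per group of points; the number of clusters is discovered rather than fixed in advance.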
On Particle Learning
2010
Cited by 1 (1 self)

Abstract
This document is the aggregation of several discussions of Lopes et al. (2010) that we submitted to the proceedings of the Ninth Valencia Meeting, held in Benidorm, Spain, on June 3–8, 2010, in conjunction with Hedibert Lopes' talk at that meeting. The main point in those discussions is the potential for degeneracy in the particle learning methodology, related to the exponential forgetting of past simulations. We illustrate the resulting difficulties in the case of mixtures.
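The degeneracy mechanism can be seen in a toy experiment: reweighting and resampling a set of fixed parameter values again and again, with no rejuvenation step, steadily destroys particle diversity. The model and constants below are illustrative only, not the mixture setting of the discussion.

```python
import math
import random

def repeated_resampling(particles, ys, obs_sd=1.0):
    """Reweight and resample fixed parameter particles with no rejuvenation:
    the regime in which the particle set collapses onto a few survivors."""
    for y in ys:
        weights = [math.exp(-0.5 * ((y - th) / obs_sd) ** 2) for th in particles]
        particles = random.choices(particles, weights=weights, k=len(particles))
    return particles

random.seed(2)
particles = [random.gauss(0.0, 1.0) for _ in range(200)]
unique_before = len(set(particles))
particles = repeated_resampling(particles, ys=[0.0] * 50)
unique_after = len(set(particles))
```

Because parameter values are never refreshed, every resampling round can only delete values; the count of distinct particles shrinks monotonically, which is the "exponential forgetting of past simulations" referred to above.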
Augmentation Schemes for Particle MCMC
Abstract
Particle MCMC involves using a particle filter within an MCMC algorithm. For inference in a model that involves an unobserved stochastic process, the standard implementation uses the particle filter to propose new values for the stochastic process, and MCMC moves to propose new values for the parameters. We show how particle MCMC can be generalised beyond this. Our key idea is to introduce new latent variables. We then use the MCMC moves to update the latent variables, and the particle filter to propose new values for the parameters and stochastic process given the latent variables. A generic way of defining these latent variables is to model them as pseudo-observations of the parameters or of the stochastic process. By choosing the amount of information these latent variables carry about the parameters and the stochastic process, we can often improve the mixing of the particle MCMC algorithm by trading off the Monte Carlo error of the particle filter against the mixing of the MCMC moves. We show that using pseudo-observations within particle MCMC can improve its efficiency in certain scenarios: dealing with initialisation problems of the particle filter; speeding up the mixing of particle Gibbs when there is strong dependence between the parameters and the stochastic process; and enabling further MCMC steps to be used within the particle filter.
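The pseudo-observation idea rests on a simple identity: augmenting a target p(theta) with u | theta ~ N(theta, s^2) and then discarding u leaves the theta-marginal unchanged. A minimal Gibbs sketch of that augmentation (a toy Gaussian stand-in, not the paper's particle MCMC scheme):

```python
import math
import random

def gibbs_pseudo_obs(n_iter=20000, s=0.5, seed=3):
    """Gibbs sampler on the augmented pair (theta, u), where theta has a
    N(0, 1) prior and u is a pseudo-observation u | theta ~ N(theta, s^2).
    Marginalising out u must recover the original N(0, 1) target."""
    rng = random.Random(seed)
    theta, draws = 0.0, []
    for _ in range(n_iter):
        u = rng.gauss(theta, s)              # refresh the pseudo-observation
        post_var = s * s / (1.0 + s * s)     # theta | u is Gaussian (conjugacy)
        theta = rng.gauss(u / (1.0 + s * s), math.sqrt(post_var))
        draws.append(theta)
    return draws

draws = gibbs_pseudo_obs()
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

The variance s^2 plays the role described above: small s makes the conditional theta | u tight (cheap, well-behaved updates, but slow mixing), while large s carries little information and leaves the original target almost untouched.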