Results 1–10 of 37
A tutorial on Bayesian nonparametric models.
Journal of Mathematical Psychology, 2012
Cited by 42 (9 self)
Abstract A key problem in statistical modeling is model selection: how to choose a model at an appropriate level of complexity. This problem appears in many settings, most prominently in choosing the number of clusters in mixture models or the number of factors in factor analysis. In this tutorial we describe Bayesian nonparametric methods, a class of methods that sidesteps this issue by allowing the data to determine the complexity of the model. This tutorial is a high-level introduction to Bayesian nonparametric methods and contains several examples of their application.
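The "let the data determine the complexity" idea can be illustrated with the Chinese restaurant process, the partition distribution underlying Dirichlet process mixture models. Below is a minimal sketch, not code from the paper; the function name and parameter values are illustrative:

```python
import numpy as np

def crp_sample(n, alpha, rng):
    """Sample cluster assignments for n points from a Chinese restaurant
    process with concentration alpha. The number of clusters is not fixed
    in advance; it grows with the data (roughly as alpha * log n)."""
    assignments = [0]                      # first point opens cluster 0
    counts = [1]                           # current size of each cluster
    for i in range(1, n):
        # join existing cluster k with probability counts[k] / (i + alpha),
        # open a new cluster with probability alpha / (i + alpha)
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)               # a new cluster is created
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments, counts

rng = np.random.default_rng(0)
z, counts = crp_sample(200, alpha=2.0, rng=rng)
print(len(counts))  # number of clusters chosen by the process, not the modeler
```

Note how no "number of clusters" argument appears anywhere: the prior itself allocates new clusters as more data arrive, which is the model-selection sidestep the abstract describes.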
The IBP Compound Dirichlet Process and its Application to Focused Topic Modeling
Cited by 38 (2 self)
The hierarchical Dirichlet process (HDP) is a Bayesian nonparametric mixed membership model—each data point is modeled with a collection of components of different proportions. Though powerful, the HDP makes an assumption that the probability of a component being exhibited by a data point is positively correlated with its proportion within that data point. This might be an undesirable assumption. For example, in topic modeling, a topic (component) might be rare throughout the corpus but dominant within those documents (data points) where it occurs. We develop the IBP compound Dirichlet process (ICD), a Bayesian nonparametric prior that decouples across-data prevalence and within-data proportion in a mixed membership model. The ICD combines properties from the HDP and the Indian buffet process (IBP), a Bayesian nonparametric prior on binary matrices. The ICD assigns a subset of the shared mixture components to each data point. This subset, the data point’s “focus”, is determined independently from the amount that each of its components contribute. We develop an ICD mixture model for text, the focused topic model (FTM), and show superior performance over the HDP-based topic model.
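The Indian buffet process that the ICD builds on is itself easy to sample from. The sketch below is illustrative only (names and hyperparameters are our own, not from the paper) and shows the key property the ICD exploits: an IBP draw is a binary matrix selecting a sparse, per-data-point subset of an unbounded set of features:

```python
import numpy as np

def ibp_sample(n, alpha, rng):
    """Draw a binary matrix Z (n customers x K dishes) from the Indian
    buffet process with concentration alpha. K is random: each customer
    takes existing dishes in proportion to their popularity, then tries
    Poisson(alpha / i) brand-new dishes."""
    dish_counts = []                       # how many customers took each dish
    rows = []
    for i in range(1, n + 1):
        row = [1 if rng.random() < m / i else 0 for m in dish_counts]
        for k, taken in enumerate(row):
            dish_counts[k] += taken
        new = rng.poisson(alpha / i)       # brand-new dishes for customer i
        row.extend([1] * new)
        dish_counts.extend([1] * new)
        rows.append(row)
    K = len(dish_counts)
    Z = np.zeros((n, K), dtype=int)        # pad early rows with zeros
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

rng = np.random.default_rng(1)
Z = ibp_sample(50, alpha=3.0, rng=rng)
print(Z.shape)  # (50, K), with K determined by the process
```

In the ICD, a matrix like `Z` picks each document's "focus" (which topics it may use at all), while a separate Dirichlet-distributed vector sets the proportions among the selected topics, which is exactly the decoupling the abstract describes.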
Combinatorial clustering and the beta negative binomial process
2015
Cited by 17 (4 self)
We develop a Bayesian nonparametric approach to a general family of latent class problems in which individuals can belong simultaneously to multiple classes and where each class can be exhibited multiple times by an individual. We introduce a combinatorial stochastic process known as the negative binomial process (NBP) as an infinite-dimensional prior appropriate for such problems. We show that the NBP is conjugate to the beta process, and we characterize the posterior distribution under the beta-negative binomial process (BNBP) and hierarchical models based on the BNBP (the HBNBP). We study the asymptotic properties of the BNBP and develop a three-parameter extension of the BNBP that exhibits power-law behavior. We derive MCMC algorithms for posterior inference under the HBNBP, and we present experiments using these algorithms in the domains of image segmentation, object recognition, and document analysis.
Bayesian Nonparametric Methods for Learning Markov Switching Processes
IEEE Signal Processing Magazine, special issue, 2010
Cited by 14 (2 self)
Markov switching processes, such as the hidden Markov model (HMM) and switching linear dynamical system (SLDS), are often used to describe rich dynamical phenomena. They describe complex behavior via repeated returns to a set of simpler models: imagine a person alternating between walking, running, and jumping behaviors, or a stock index switching between regimes of high and low volatility. Classical approaches to identification and estimation of these models assume a fixed, prespecified number of dynamical models. We instead examine Bayesian nonparametric approaches that define a prior on Markov switching processes with an unbounded number of potential model parameters (i.e., Markov modes). By leveraging stochastic processes such as the beta and Dirichlet process, these methods allow the data to drive the complexity of the learned model, while still permitting efficient inference algorithms. They also lead to generalizations which discover and model dynamical behaviors shared among multiple related time series.
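A common way to make such a prior computable is the weak-limit approximation: truncate to L modes, share a global Dirichlet-distributed mode-weight vector across all transition rows, and (in the "sticky" variant) bias self-transitions. The sketch below is an illustrative truncation of this idea; all names and hyperparameter values are ours, not from the article:

```python
import numpy as np

def weak_limit_hdp_hmm(L, gamma, alpha, kappa, T, rng):
    """Weak-limit (L-mode truncation) sketch of a sticky HDP-HMM prior.
    A global weight vector beta is shared across all transition rows, and
    kappa adds extra mass to self-transitions. As L grows, this approaches
    a prior on HMMs with an unbounded number of modes."""
    beta = rng.dirichlet(np.full(L, gamma / L))        # global mode weights
    pi = np.vstack([
        rng.dirichlet(alpha * beta + kappa * np.eye(L)[j])
        for j in range(L)
    ])                                                 # L x L transitions
    # simulate a mode sequence of length T from the prior
    z = np.empty(T, dtype=int)
    z[0] = rng.choice(L, p=beta)
    for t in range(1, T):
        z[t] = rng.choice(L, p=pi[z[t - 1]])
    return pi, z

rng = np.random.default_rng(3)
pi, z = weak_limit_hdp_hmm(L=20, gamma=3.0, alpha=5.0, kappa=10.0, T=500, rng=rng)
print(len(np.unique(z)))  # modes actually visited: usually far fewer than L
```

Even with L = 20 candidate modes available, a draw from this prior typically concentrates on a handful of them, which is how the data, rather than a prespecified mode count, end up driving model complexity.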
Learning and Generalization of Complex Tasks from Unstructured Demonstrations
Cited by 14 (4 self)
Abstract — We present a novel method for segmenting demonstrations, recognizing repeated skills, and generalizing complex tasks from unstructured demonstrations. This method combines many of the advantages of recent automatic segmentation methods for learning from demonstration into a single principled, integrated framework. Specifically, we use the Beta Process Autoregressive Hidden Markov Model and Dynamic Movement Primitives to learn and generalize a multi-step task on the PR2 mobile manipulator and to demonstrate the potential of our framework to learn a large library of skills over time.
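The Dynamic Movement Primitives mentioned here are, at their core, a critically damped spring pulling the state toward a goal, optionally shaped by a learned forcing term. A minimal one-dimensional sketch (our own gains and step sizes, not the paper's implementation):

```python
import numpy as np

def dmp_rollout(x0, g, tau=1.0, alpha=25.0, beta=25.0 / 4,
                dt=0.001, steps=2000, forcing=None):
    """Minimal discrete DMP: tau*v' = alpha*(beta*(g - x) - v) + f,
    tau*x' = v. With forcing=None the trajectory simply converges to the
    goal g, which is the stability guarantee DMPs are built around; a
    learned forcing term f(s) shapes the path without breaking it."""
    x, v = x0, 0.0
    s = 1.0                                   # canonical phase, decays to 0
    traj = [x]
    for _ in range(steps):
        f = forcing(s) if forcing else 0.0
        v += (alpha * (beta * (g - x) - v) + f) / tau * dt
        x += v / tau * dt
        s += (-alpha / 3.0) * s / tau * dt    # canonical system dynamics
        traj.append(x)
    return np.array(traj)

traj = dmp_rollout(x0=0.0, g=1.0)
print(round(float(traj[-1]), 3))  # end of trajectory, near the goal
```

Because the goal `g` is an explicit parameter, a skill segmented out of one demonstration can be replayed toward a new goal, which is what makes DMPs a convenient primitive representation for the segments the BP-AR-HMM discovers.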
Incremental Semantically Grounded Learning from Demonstration
Cited by 13 (2 self)
Abstract—Much recent work in robot learning from demonstration has focused on automatically segmenting continuous task demonstrations into simpler, reusable primitives. However, strong assumptions are often made about how these primitives can be sequenced, limiting the potential for data reuse. We introduce a novel method for discovering semantically grounded primitives and incrementally building and improving a finite-state representation of a task in which various contingencies can arise. Specifically, a Beta Process Autoregressive Hidden Markov Model is used to automatically segment demonstrations into motion categories, which are then further subdivided into semantically grounded states in a finite-state automaton. During replay of the task, a data-driven approach is used to collect additional data where they are most needed through interactive corrections, which are then used to improve the finite-state automaton. Together, this allows for intelligent sequencing of primitives to create novel, adaptive behavior that can be incrementally improved as needed. We demonstrate the utility of this technique on a furniture assembly task using the PR2 mobile manipulator.
Stick-Breaking Beta Processes and the Poisson Process
Cited by 8 (5 self)
We show that the stick-breaking construction of the beta process due to Paisley et al. (2010) can be obtained from the characterization of the beta process as a Poisson process. Specifically, we show that the mean measure of the underlying Poisson process is equal to that of the beta process. We use this underlying representation to derive error bounds on truncated beta processes that are tighter than those in the literature. We also develop a new MCMC inference algorithm for beta processes, based in part on our new Poisson process construction.
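The stick-breaking construction being analyzed here can be sampled directly: in round i, a Poisson number of new atoms appear, each with weight built from i independent Beta(1, alpha) sticks. The sketch below is a truncated, illustrative rendering of that construction (variable names and the truncation level are ours):

```python
import numpy as np

def bp_stick_breaking(gamma, alpha, rounds, rng):
    """Truncated stick-breaking construction of the beta process, after
    Paisley et al. (2010): in round i, Poisson(gamma) new atoms appear,
    each with weight V_i * prod_{l<i} (1 - V_l) for independent
    V_l ~ Beta(1, alpha). Later rounds contribute geometrically smaller
    weights, so a finite number of rounds approximates the full process."""
    weights = []
    for i in range(1, rounds + 1):
        c = rng.poisson(gamma)                # atoms born in round i
        for _ in range(c):
            v = rng.beta(1.0, alpha, size=i)  # one fresh stick per level
            weights.append(v[-1] * np.prod(1.0 - v[:-1]))
    return np.array(weights)

rng = np.random.default_rng(4)
w = bp_stick_breaking(gamma=4.0, alpha=2.0, rounds=30, rng=rng)
print(len(w))  # total atoms across all rounds; weights all lie in (0, 1)
```

The error bounds the abstract mentions quantify exactly how much mass such a finite-round truncation leaves out, which is what makes truncated samplers like this one usable for inference.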