Results 1 – 6 of 6
Bayesian inference with posterior regularization and applications to infinite latent SVMs
In arXiv:1210.1766v2, 2013
Cited by 14 (9 self)
Existing Bayesian models, especially nonparametric Bayesian methods, rely on specially conceived priors to incorporate domain knowledge for discovering improved latent representations. While priors affect posterior distributions through Bayes' rule, imposing posterior regularization is arguably more direct and in some cases more natural and general. In this paper, we present regularized Bayesian inference (RegBayes), a novel computational framework that performs posterior inference with a regularization term on the desired post-data posterior distribution under an information-theoretical formulation. RegBayes is more flexible than the procedure that elicits expert knowledge via priors, and it covers both directed Bayesian networks and undirected Markov networks. When the regularization is induced from a linear operator on the posterior distributions, such as the expectation operator, we present a general convex-analysis theorem to characterize the solution of RegBayes. Furthermore, we present two concrete examples of RegBayes, infinite latent support vector machines (iLSVM) and multi-task infinite latent support vector machines (MTiLSVM), which explore the large-margin idea in combination with a nonparametric Bayesian model for dis…
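The variational formulation the abstract describes can be sketched roughly as follows (a reading of the abstract, not the paper's exact notation; the symbols q, π, U, and ξ are assumptions):

```latex
% Posterior inference as regularized optimization over distributions q(M):
% KL fit to the prior, expected log-likelihood of data D, and a convex
% penalty U on slack variables xi that soften the constraint set.
\min_{q(M),\,\xi}\;
  \mathrm{KL}\big(q(M)\,\|\,\pi(M)\big)
  - \mathbb{E}_{q(M)}\big[\log p(\mathcal{D}\mid M)\big]
  + U(\xi)
\quad \text{s.t.}\quad q(M) \in \mathcal{P}_{\mathrm{post}}(\xi)
```

When the constraint set is induced by a linear operator on q (e.g. an expectation constraint) and U is convex, the whole problem is convex in q, which is the setting the abstract's convex-analysis theorem addresses.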
Online Bayesian passive-aggressive learning.
In International Conference on Machine Learning (ICML), 2014
Cited by 3 (2 self)
Abstract We present online Bayesian PassiveAggressive (BayesPA) learning, a generic online learning framework for hierarchical Bayesian models with maxmargin posterior regularization. We show that BayesPA subsumes the standard online PassiveAggressive (PA) learning and extends naturally to incorporate latent variables for both parametric and nonparametric Bayesian inference, therefore providing great flexibility for explorative analysis. As an important example, we apply BayesPA to topic modeling and derive efficient online learning algorithms for maxmargin topic models. We further develop nonparametric BayesPA topic models to infer the unknown number of topics in an online manner. Experimental results on 20newsgroups and a large Wikipedia multilabel dataset (with 1.1 millions of training documents and 0.9 million of unique terms in the vocabulary) show that our approaches significantly improve time efficiency while achieving comparable accuracy with the corresponding batch algorithms.
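For context, the standard online Passive-Aggressive update that the abstract says BayesPA subsumes can be sketched as below. This is the classic PA rule for binary classification (Crammer et al.), not the Bayesian extension; the function name and variable choices are illustrative.

```python
import numpy as np

def pa_update(w, x, y):
    """One Passive-Aggressive step for binary classification, y in {-1, +1}.

    Passive: if the example already has margin >= 1, keep w unchanged.
    Aggressive: otherwise take the smallest-norm step that zeroes the hinge loss.
    """
    loss = max(0.0, 1.0 - y * np.dot(w, x))  # hinge loss on the new example
    if loss == 0.0:
        return w                              # passive branch
    tau = loss / np.dot(x, x)                 # closed-form step size
    return w + tau * y * x
```

BayesPA replaces the point estimate `w` with a full posterior distribution and the margin constraint with max-margin posterior regularization, so updates operate on distributions rather than on a single weight vector.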
Latent topic networks: A versatile probabilistic programming framework for topic models.
In International Conference on Machine Learning, 2015
Cited by 2 (1 self)
Topic models have become increasingly prominent text-analytic machine learning tools for research in the social sciences and the humanities. In particular, custom topic models can be developed to answer specific research questions. The design of these models requires a nontrivial amount of effort and expertise, motivating general-purpose topic modeling frameworks. In this paper we introduce latent topic networks, a flexible class of richly structured topic models designed to facilitate applied research. Custom models can straightforwardly be developed in our framework with an intuitive first-order logical probabilistic programming language. Latent topic networks admit scalable training via a parallelizable EM algorithm which leverages ADMM in the M-step. We demonstrate the broad applicability of the models with case studies on modeling influence in citation networks, and U.S. Presidential State of the Union addresses.
Some Submodular Data-Poisoning Attacks on Machine Learners
2015
We study data-poisoning attacks using a machine teaching framework. We pose a family of NP-hard attack problems as submodular function maximization, thereby inheriting efficient greedy algorithms with theoretical guarantees. We demonstrate some attacks with experiments.
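The greedy algorithm with guarantees that the abstract refers to is, presumably, the classic greedy rule for monotone submodular maximization under a cardinality budget, which achieves a (1 − 1/e) approximation (Nemhauser et al.). A minimal sketch follows, illustrated on a set-cover objective rather than the paper's attack objectives; all names here are illustrative.

```python
def greedy_max(items, f, k):
    """Greedy maximization of a monotone submodular set function f
    under the cardinality constraint |S| <= k.

    At each step, add the item with the largest marginal gain.
    For monotone submodular f this guarantees f(S) >= (1 - 1/e) * OPT.
    """
    S = []
    for _ in range(k):
        candidates = [e for e in items if e not in S]
        if not candidates:
            break
        # marginal gain of adding e to the current solution S
        best = max(candidates, key=lambda e: f(S + [e]) - f(S))
        S.append(best)
    return S

# Illustrative submodular objective: coverage of a small universe.
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}

def covered(S):
    return len(set().union(*(coverage[e] for e in S))) if S else 0
```

In the attack setting, `items` would be candidate poison points and `f` the (submodular) attack objective; the guarantee carries over whenever that objective is monotone submodular.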
Big Learning with Bayesian Methods
2007
Explosive growth in data and the availability of cheap computing resources have sparked increasing interest in Big learning, an emerging subfield that studies scalable machine learning algorithms, systems, and applications with Big Data. Bayesian methods represent one important class of statistical methods for machine learning, with substantial recent developments on adaptive, flexible, and scalable Bayesian learning. This article provides a survey of the recent advances in Big learning with Bayesian methods, termed Big Bayesian Learning, including nonparametric Bayesian methods for adaptively inferring model complexity, regularized Bayesian inference for improving the flexibility via posterior regularization, and scalable algorithms and systems based on stochastic subsampling and distributed computing for dealing with large-scale applications.
Incorporating Linguistic Knowledge for Learning Distributed Word Representations
Combined with neural language models, distributed word representations achieve significant advantages in computational linguistics and text mining. Most existing models estimate distributed word vectors from large-scale data in an unsupervised fashion, which, however, does not take rich linguistic knowledge into consideration. Linguistic knowledge can be represented as either link-based knowledge or preference-based knowledge, and we propose knowledge-regularized word representation models (KRWR) to incorporate this prior knowledge for learning distributed word representations. Experimental results demonstrate that our estimated word representations achieve better performance in the task of semantic relatedness ranking. This indicates that our methods can efficiently encode both prior knowledge from knowledge bases and statistical knowledge from large-scale text corpora into a unified word representation model, which will benefit many tasks in text mining.