Bayesian Learning via Stochastic Gradient Langevin Dynamics

Download Links

  • [www.cs.berkeley.edu]
  • [www.gatsby.ucl.ac.uk]
  • [www.eecs.berkeley.edu]
  • [www.icml-2011.org]
  • [people.ee.duke.edu]
  • [www.columbia.edu]
  • [www.ics.uci.edu]

by Max Welling, Yee Whye Teh
Citations: 50 (7 self)

BibTeX

@MISC{Welling_bayesianlearning,
    author = {Max Welling and Yee Whye Teh},
    title = {Bayesian Learning via Stochastic Gradient Langevin Dynamics},
    year = {2011}
}

Abstract

In this paper we propose a new framework for learning from large-scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm we show that the iterates will converge to samples from the true posterior distribution as we anneal the stepsize. This seamless transition between optimization and Bayesian posterior sampling provides an inbuilt protection against overfitting. We also propose a practical method for Monte Carlo estimates of posterior statistics which monitors a "sampling threshold" and collects samples after it has been surpassed. We apply the method to three models: a mixture of Gaussians, logistic regression and ICA with natural gradients.
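
The update at the heart of the method takes a mini-batch gradient ascent step on the log posterior and adds Gaussian noise whose variance equals the step size: with a mini-batch of n out of N points,

    theta_{t+1} = theta_t + (eps_t / 2) * ( grad log p(theta_t) + (N/n) * sum_{i=1..n} grad log p(x_i | theta_t) ) + eta_t,   eta_t ~ N(0, eps_t),

with step sizes eps_t decaying polynomially. Below is a minimal NumPy sketch of this update on a toy model (a Gaussian mean with a Gaussian prior); the model, hyperparameters, and the simple burn-in rule are illustrative assumptions, not values from the paper, which instead collects samples once its sampling threshold is passed.

    # Minimal SGLD sketch on a toy model: data x_i ~ N(theta, 1) with a
    # N(0, sigma0^2) prior on theta. All names and hyperparameters here are
    # illustrative choices, not values from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: N observations with true mean 2.0.
    N, n_batch = 10_000, 100
    x = rng.normal(2.0, 1.0, size=N)
    sigma0 = 10.0  # prior standard deviation

    def grad_log_prior(theta):
        return -theta / sigma0**2

    def grad_log_lik(theta, batch):
        # Gradient of sum_i log N(x_i | theta, 1) over the mini-batch.
        return np.sum(batch - theta)

    # Polynomially decaying step sizes eps_t = a * (b + t)^(-gamma), with
    # gamma in (0.5, 1] so that sum eps_t = inf while sum eps_t^2 < inf.
    a, b, gamma = 1e-5, 1.0, 0.55
    n_steps = 20_000

    theta, samples = 0.0, []
    for t in range(n_steps):
        eps = a * (b + t) ** -gamma
        batch = x[rng.integers(0, N, size=n_batch)]
        # Mini-batch gradient of the log posterior, rescaled by N/n so it
        # is an unbiased estimate of the full-data gradient.
        grad = grad_log_prior(theta) + (N / n_batch) * grad_log_lik(theta, batch)
        # Langevin step: half-step-size gradient move plus N(0, eps_t) noise.
        theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
        if t >= n_steps // 2:      # crude burn-in; the paper instead monitors
            samples.append(theta)  # its "sampling threshold" statistic

    print("posterior mean estimate:", np.mean(samples))

A conjugate Gaussian model is used here because its exact posterior mean is available in closed form, which makes it easy to check that the chain is sampling correctly rather than merely optimizing.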

Keyphrases

stochastic gradient Langevin dynamics, Bayesian learning, right amount, Monte Carlo estimate, large-scale datasets, sampling threshold, small mini-batches, practical method, seamless transition, collect sample, inbuilt protection, iterative learning, logistic regression, true posterior distribution, posterior statistic, new framework, standard stochastic gradient optimization, natural gradient
