Learning Bayesian network structure from massive datasets: the “sparse candidate” algorithm (1999)

by Nir Friedman, Iftach Nachman
Venue: Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence (UAI)
Citations: 246 (7 self)

BibTeX

@INPROCEEDINGS{Friedman99learningbayesian,
    author = {Nir Friedman and Iftach Nachman},
    title = {Learning Bayesian network structure from massive datasets: the “sparse candidate” algorithm},
    booktitle = {Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence (UAI)},
    year = {1999},
    pages = {206--215}
}


Abstract

Learning Bayesian networks is often cast as an optimization problem, where the computational task is to find a structure that maximizes a statistically motivated score. By and large, existing learning tools address this optimization problem using standard heuristic search techniques. Since the search space is extremely large, such search procedures can spend most of the time examining candidates that are extremely unreasonable. This problem becomes critical when we deal with data sets that are large either in the number of instances or the number of attributes. In this paper, we introduce an algorithm that achieves faster learning by restricting the search space. This iterative algorithm restricts the parents of each variable to belong to a small subset of candidates. We then search for a network that satisfies these constraints. The learned network is then used for selecting better candidates for the next iteration. We evaluate this algorithm on both synthetic and real-life data. Our results show that it is significantly faster than alternative search procedures without loss of quality in the learned structures.
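The abstract describes an iterative two-step scheme: a "restrict" step that limits each variable's candidate parents to a small set, and a "maximize" step that searches for a good network within those constraints. The sketch below illustrates that loop under heavy simplifying assumptions that are mine, not the paper's: candidates are chosen by pairwise mutual information only (the paper also uses measures that discount candidates already explained by the current network), and the maximize step searches per variable under a fixed ordering to guarantee acyclicity (the paper performs a full structure search). All function names are illustrative.

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c * n) / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def restrict(data, k):
    """Restrict step: for each variable, keep the k most informative other
    variables as candidate parents. (Simplified: plain pairwise MI; the
    paper's measures also account for the current network.)"""
    nvars = len(data)
    cands = {}
    for i in range(nvars):
        ranked = sorted((j for j in range(nvars) if j != i),
                        key=lambda j: mutual_information(data[i], data[j]),
                        reverse=True)
        cands[i] = ranked[:k]
    return cands

def log_likelihood(child, parents, data):
    """Log-likelihood of the child's values given its parents' configurations."""
    n = len(data[child])
    joint = Counter((tuple(data[p][t] for p in parents), data[child][t])
                    for t in range(n))
    marg = Counter(tuple(data[p][t] for p in parents) for t in range(n))
    return sum(c * math.log(c / marg[cfg]) for (cfg, _), c in joint.items())

def maximize(data, cands, order):
    """Maximize step (simplified): under a fixed variable ordering, which
    guarantees acyclicity, pick each variable's best candidate-parent subset
    by a BIC-style penalized log-likelihood."""
    n = len(data[0])
    parents = {}
    for i in order:
        allowed = [j for j in cands[i] if order.index(j) < order.index(i)]
        best, best_score = set(), log_likelihood(i, (), data)
        for r in range(1, len(allowed) + 1):
            for subset in combinations(allowed, r):
                score = (log_likelihood(i, subset, data)
                         - 0.5 * math.log(n) * len(subset))
                if score > best_score:
                    best, best_score = set(subset), score
        parents[i] = best
    return parents
```

In the full algorithm, the two steps alternate: the network returned by the maximize step informs the next restrict step (e.g., by re-scoring candidates relative to the current parents), and iteration continues until the candidate sets stabilize or the score stops improving.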
