Anytime learning of decision trees

by Saher Esmeir, Shaul Markovitch
Venue: Journal of Machine Learning Research
Citations: 14 (3 self)

BibTeX

@ARTICLE{Esmeir_anytimelearning,
    author = {Saher Esmeir and Shaul Markovitch},
    title = {Anytime learning of decision trees},
    journal = {Journal of Machine Learning Research},
    year = {2007},
    volume = {8},
    pages = {891--933}
}


Abstract

The majority of existing algorithms for learning decision trees are greedy: a tree is induced top-down, making locally optimal decisions at each node. In most cases, however, the constructed tree is not globally optimal. Even the few non-greedy learners cannot learn good trees when the concept is difficult. Furthermore, they require a fixed amount of time and are not able to generate a better tree if additional time is available. We introduce a framework for anytime induction of decision trees that overcomes these problems by trading computation speed for better tree quality. Our proposed family of algorithms employs a novel strategy for evaluating candidate splits. A biased sampling of the space of consistent trees rooted at an attribute is used to estimate the size of the minimal tree under that attribute, and the attribute with the smallest expected tree is selected. We present two types of anytime induction algorithms: a contract algorithm that determines the sample size on the basis of a pre-given allocation of time, and an interruptible algorithm that starts with a greedy tree and continuously improves subtrees by additional sampling. Experimental results indicate that, for several hard concepts, our proposed approach exhibits good anytime behavior and yields significantly better decision trees when more time is available.
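
As a rough illustration of the split-evaluation strategy described above, the Python sketch below estimates, for each candidate attribute, the size of the smallest consistent tree rooted at it by sampling a few random trees, and selects the attribute with the smallest estimate. This is a minimal reconstruction from the abstract alone, not the authors' implementation: the helper names (sampled_tree_size, choose_split), the gain-proportional sampling bias, and the leaf-count size measure are all assumptions made for the example.

    import math
    import random
    from collections import Counter

    def entropy(labels):
        # Shannon entropy of a multiset of class labels.
        n = len(labels)
        return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

    def split(rows, labels, attr):
        # Partition the examples by their value of attribute `attr`.
        parts = {}
        for row, y in zip(rows, labels):
            rs, ys = parts.setdefault(row[attr], ([], []))
            rs.append(row)
            ys.append(y)
        return parts

    def info_gain(rows, labels, attr):
        parts = split(rows, labels, attr)
        rem = sum(len(ys) / len(labels) * entropy(ys) for _, ys in parts.values())
        return entropy(labels) - rem

    def sampled_tree_size(rows, labels, attrs, rng):
        # Leaf count of one random tree consistent with the data; splits
        # are drawn with probability biased toward high information gain
        # (an assumed, illustrative form of the paper's biased sampling).
        if len(set(labels)) <= 1 or not attrs:
            return 1
        weights = [info_gain(rows, labels, a) + 1e-9 for a in attrs]
        attr = rng.choices(attrs, weights=weights, k=1)[0]
        rest = [a for a in attrs if a != attr]
        return sum(sampled_tree_size(rs, ys, rest, rng)
                   for rs, ys in split(rows, labels, attr).values())

    def choose_split(rows, labels, attrs, sample_size, rng):
        # Estimate the minimal tree under each attribute by the smallest
        # of `sample_size` sampled trees; return the attribute with the
        # smallest estimate.
        def estimate(a):
            rest = [b for b in attrs if b != a]
            return sum(min(sampled_tree_size(rs, ys, rest, rng)
                           for _ in range(sample_size))
                       for rs, ys in split(rows, labels, a).values())
        return min(attrs, key=estimate)

    # Toy usage: 3-bit parity, a concept on which greedy gain is blind.
    rng = random.Random(0)
    rows = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    labels = [a ^ b ^ c for (a, b, c) in rows]
    print(choose_split(rows, labels, attrs=[0, 1, 2], sample_size=3, rng=rng))

In this sketch, sample_size is the natural anytime knob: a contract variant would fix it from the pre-given time allocation, while an interruptible variant would start from a greedy tree and keep resampling subtrees until stopped.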

Keyphrases

decision tree, anytime learning, computation speed, sample size, non-greedy learner, consistent tree, novel strategy, additional time, pre-given allocation, minimal tree, anytime induction algorithm, hard concepts, candidate split, optimal decision, fixed amount, constructed tree, anytime behavior, expected tree, good tree, interruptible algorithm, tree quality, anytime induction, subtree improvement, greedy tree, biased sampling, additional sampling, experimental results
