Experiments with a New Boosting Algorithm (1996)


Download Links

  • [axon.cs.byu.edu]
  • [synapse.cs.byu.edu]
  • [www2.boosting.org]
  • [www.cs.ucsd.edu]
  • [cseweb.ucsd.edu]
  • [www.cis.upenn.edu]
  • [astro.temple.edu]
  • [ftp.ai.mit.edu]
  • [webcourse.cs.technion.ac.il]
  • [people.cs.pitt.edu]

  • Other Repositories/Bibliography

  • DBLP
by Yoav Freund and Robert E. Schapire
Citations: 2208 (20 self)

BibTeX

@MISC{Freund96experimentswith,
    author = {Yoav Freund and Robert E. Schapire},
    title = {Experiments with a New Boosting Algorithm},
    year = {1996}
}


Abstract

In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a “pseudo-loss,” a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman’s “bagging” method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
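For readers skimming the abstract, the following is a minimal sketch of the AdaBoost weight-update loop for binary labels in {-1, +1} (the paper's AdaBoost.M1 handles the multiclass case via an argmax vote, and the pseudo-loss variant, AdaBoost.M2, is not shown). The weak_learner callable and all names here are illustrative assumptions, not code from the paper.

import numpy as np

def adaboost_m1(X, y, weak_learner, T=50):
    """Sketch of AdaBoost for labels y in {-1, +1}.

    weak_learner(X, y, w) is assumed to return a callable h, fit to the
    weighted sample, with h(X) -> predictions in {-1, +1}.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)            # initial distribution D_1 is uniform
    hypotheses, alphas = [], []
    for _ in range(T):
        h = weak_learner(X, y, w)      # fit weak hypothesis to weights w
        pred = h(X)
        eps = w[pred != y].sum()       # weighted training error epsilon_t
        if eps >= 0.5:                 # no better than random guessing: stop
            break
        beta = eps / (1.0 - eps)
        # Multiplicatively down-weight correctly classified examples by
        # beta_t, then renormalize so w remains a distribution.
        w = w * np.where(pred == y, beta, 1.0)
        w = w / w.sum()
        hypotheses.append(h)
        alphas.append(np.log(1.0 / beta))   # vote weight log(1/beta_t)

    def final(Xq):
        # Final hypothesis: weighted majority vote of the weak hypotheses.
        return np.sign(sum(a * h(Xq) for a, h in zip(alphas, hypotheses)))

    return final

Bagging, which the first set of experiments compares against, would instead fit each weak learner to a bootstrap resample drawn uniformly from the training set rather than to a reweighted distribution, and would combine the resulting hypotheses by an unweighted vote.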

Keyphrases

new boosting algorithm    learning algorithm    machine-learning benchmark    related notion    decision tree    multi-label concept    nearest-neighbor classifier    ocr problem    second set    various classifier    real learning problem    first set    single attribute-value test
