Results 1–10 of 1,928,976
The Nature of Statistical Learning Theory, 1999. Cited by 12976 (32 self).
"... Statistical learning theory was introduced in the late 1960’s. Until the 1990’s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990’s new types of learning algorithms (called support vector machines) based on the deve ..."
Experiments with a New Boosting Algorithm, 1996. Cited by 2176 (21 self).
"... In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the relate ..."
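The snippet above describes the core AdaBoost idea: repeatedly reweight the training set so that each weak learner focuses on the examples its predecessors got wrong, then combine the weak hypotheses by a weighted vote. A minimal sketch, assuming 1-D inputs and threshold "decision stumps" as the weak learner; names like `fit_stump`, `adaboost`, and `predict` are illustrative, not from the cited papers:

```python
# Minimal AdaBoost sketch: binary labels in {+1, -1}, weak learner = a
# threshold stump h(x) = polarity * sign(x - threshold) on 1-D inputs.
import math

def fit_stump(xs, ys, w):
    """Return (threshold, polarity, weighted_error) for the stump with
    the lowest weighted 0/1 error under example weights w."""
    best = None
    for thr in sorted(set(xs)):
        for pol in (+1, -1):
            preds = [pol if x > thr else -pol for x in xs]
            err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
            if best is None or err < best[2]:
                best = (thr, pol, err)
    return best

def adaboost(xs, ys, rounds=10):
    n = len(xs)
    w = [1.0 / n] * n                    # uniform initial weights
    ensemble = []                        # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        thr, pol, err = fit_stump(xs, ys, w)
        err = max(err, 1e-12)            # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, pol))
        # Reweight: misclassified examples gain weight, correct ones lose it.
        preds = [pol if x > thr else -pol for x in xs]
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        z = sum(w)
        w = [wi / z for wi in w]         # renormalize to a distribution
    return ensemble

def predict(ensemble, x):
    """Weighted-majority vote of all stumps in the ensemble."""
    score = sum(a * (p if x > t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1
```

On the XOR-like pattern `xs = [1, 2, 3, 4]`, `ys = [1, -1, -1, 1]`, no single stump classifies all four points, but the boosted ensemble fits them exactly after three rounds, illustrating the "better than random guessing suffices" claim in the snippet.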
A New Boosting Algorithm for Classification on Distributed Databases
"... In this paper, we propose a new boosting algorithm for distributed databases. The main idea of the proposed method is to utilize the parallelism of the distributed databases to build an ensemble of classifiers. At each round of the algorithm, each site processes its own data locally, and calculates ..."
The Value of Agreement: A New Boosting Algorithm. J. Comput. Syst. Sci. Cited by 18 (0 self).
"... Abstract. We present a new generalization bound where the use of unlabeled examples results in a better ratio between training-set size and the resulting classifier’s quality, and thus reduces the number of labeled examples necessary for achieving it. This is achieved by demanding from the algorithms ..."
An Efficient Boosting Algorithm for Combining Preferences, 1999. Cited by 707 (18 self).
"... The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting ..."
A New Boosting Algorithm Using Input-Dependent Regularizer, 2003
"... AdaBoost has proved to be an effective method to improve the performance of base classifiers both theoretically and empirically. However, previous studies have shown that AdaBoost might suffer from the overfitting problem, especially for noisy data. In addition, most current work on boosting assumes that the combination weights are fixed constants and therefore does not take particular input patterns into consideration. In this paper, we present a new boosting algorithm, “WeightBoost”, which tries to solve these two problems by introducing an input-dependent regularization factor to the combination weight ..."
Experiments with a New Boosting Algorithm (DRAFT — PLEASE DO NOT DISTRIBUTE), 1996
"... In an earlier paper [9], we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the r ..."
Improved Boosting Algorithms Using Confidence-rated Predictions. Machine Learning, 1999. Cited by 933 (26 self).
"... We describe several improvements to Freund and Schapire’s AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a simplified analysis of AdaBoost in this setting, and we show how this analysis can be used to find impr ..."
"... out to be identical to one proposed by Kearns and Mansour. We focus next on how to apply the new boosting algorithms to multiclass classification problems, particularly to the multi-label case in which each example may belong to more than one class. We give two boosting methods for this problem, plus ..."