Results 1–10 of 1,800,567
UPAL: Unbiased pool-based active learning
In AISTATS, 2012
Cited by 8 (1 self)
Abstract: "... In this paper we address the problem of pool-based active learning, and provide an algorithm, called UPAL, that works by minimizing the unbiased estimator of the risk of a hypothesis in a given hypothesis space. For the space of linear classifiers and the squared loss we show that UPAL is equivalent ..."

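The UPAL snippet above turns on one idea: if pool point i is queried with probability p_i, dividing its observed loss by p_i keeps the pooled risk estimate unbiased (a Horvitz-Thompson-style estimator). A minimal sketch of that estimator, with illustrative function names and toy data (not the paper's implementation):

```python
def unbiased_risk_estimate(losses, query_probs, queried):
    """Horvitz-Thompson-style unbiased risk estimate over a pool.

    losses      -- losses[i] is the loss of the hypothesis on pool point i
                   (in practice known only for the queried points)
    query_probs -- query_probs[i] is the probability point i was queried
    queried     -- indices that were actually labeled
    """
    n = len(losses)
    # Each queried loss is up-weighted by 1 / (its query probability),
    # so the expectation over the sampling equals the full-pool mean loss.
    return sum(losses[i] / query_probs[i] for i in queried) / n
```

In expectation over the query randomness this equals the full-pool average loss, which is the property UPAL minimizes over its hypothesis space.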
Pool-Based Active Learning for Text Classification
Cited by 1 (0 self)
Abstract: "... This paper shows how a text classifier’s need for labeled training documents can be reduced by employing a large pool of unlabeled documents. We modify the Query-by-Committee (QBC) method of active learning to use the unlabeled pool by explicitly estimating document density when selecting examples f ..."

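The modification described above scores each pool document by committee disagreement weighted by estimated document density. A hedged sketch, using vote entropy as one common disagreement measure (the snippet does not say which measure the paper uses; names and data are illustrative):

```python
import math

def vote_entropy(committee_labels):
    """Disagreement of a committee on one document:
    entropy of the distribution of the members' predicted labels."""
    counts = {}
    for y in committee_labels:
        counts[y] = counts.get(y, 0) + 1
    n = len(committee_labels)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def select_query(committee_votes, density):
    """Pick the pool index maximizing disagreement * estimated density,
    so the query is both informative and representative."""
    scores = [vote_entropy(v) * d for v, d in zip(committee_votes, density)]
    return max(range(len(scores)), key=lambda i: scores[i])
```

With equal densities this reduces to plain QBC; the density factor steers queries away from isolated outlier documents.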
Employing EM in Pool-Based Active Learning for Text Classification
1998
Cited by 316 (10 self)
Abstract: "... This paper shows how a text classifier's need for labeled training data can be reduced by a combination of active learning and Expectation Maximization (EM) on a pool of unlabeled data. Query-by-Committee is used to actively select documents for labeling, then EM with a naive Bayes model furthe ..."

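The EM half of the combination above alternates between soft-labeling the unlabeled pool with the current naive Bayes model (E-step) and refitting the model on labeled plus soft-labeled documents (M-step). A toy bag-of-words sketch under those assumptions (function names, Laplace smoothing, and data are illustrative, not the paper's implementation):

```python
import math

def nb_fit(docs, resp, vocab):
    """M-step: naive Bayes parameters from (possibly soft) class responsibilities."""
    n_classes = len(resp[0])
    prior = [sum(r[c] for r in resp) for c in range(n_classes)]
    total = sum(prior)
    prior = [p / total for p in prior]
    # Laplace-smoothed per-class word probabilities.
    word = [{w: 1.0 for w in vocab} for _ in range(n_classes)]
    for doc, r in zip(docs, resp):
        for tok in doc:
            for c in range(n_classes):
                word[c][tok] += r[c]
    for c in range(n_classes):
        z = sum(word[c].values())
        for w in vocab:
            word[c][w] /= z
    return prior, word

def nb_posterior(doc, prior, word):
    """E-step for one document: class responsibilities under the current model."""
    logp = [math.log(prior[c]) + sum(math.log(word[c][t]) for t in doc)
            for c in range(len(prior))]
    m = max(logp)
    p = [math.exp(v - m) for v in logp]
    z = sum(p)
    return [pi / z for pi in p]

def em_naive_bayes(labeled, unlabeled, vocab, n_classes=2, iters=5):
    """Semi-supervised naive Bayes: hard labels stay fixed, EM over the pool."""
    l_docs = [d for d, _ in labeled]
    l_resp = [[1.0 if c == y else 0.0 for c in range(n_classes)]
              for _, y in labeled]
    prior, word = nb_fit(l_docs, l_resp, vocab)
    for _ in range(iters):
        u_resp = [nb_posterior(d, prior, word) for d in unlabeled]
        prior, word = nb_fit(l_docs + unlabeled, l_resp + u_resp, vocab)
    return prior, word
```

The labeled documents keep their hard responsibilities across iterations, so the unlabeled pool refines rather than overrides the supervised signal.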
Near-optimal Adaptive Pool-based Active Learning with General Loss
Abstract: "... We consider adaptive pool-based active learning in a Bayesian setting. We first analyze two commonly used greedy active learning criteria: the maximum entropy criterion, which selects the example with the highest entropy, and the least confidence criterion, which selects the example whose most prob ..."

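The two greedy criteria named in the abstract are easy to state concretely: maximum entropy scores an example by the entropy of its predictive distribution, while least confidence scores it by how small the top predicted probability is. A minimal sketch (helper names are illustrative):

```python
import math

def max_entropy_score(probs):
    """Entropy of the predictive label distribution; higher = more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def least_confidence_score(probs):
    """One minus the probability of the most likely label; higher = less confident."""
    return 1.0 - max(probs)

def pick(pool_probs, score):
    """Greedy selection: query the pool example with the highest score."""
    return max(range(len(pool_probs)), key=lambda i: score(pool_probs[i]))
```

For binary labels the two criteria agree; with more classes they can rank examples differently, which is part of what the paper's analysis compares.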
Pool-based Active Learning in Approximate Linear Regression
Machine Learning, vol. 75, no. 3, pp. 249–274, 2009
Abstract: "... The goal of pool-based active learning is to choose the best input points to gather output values from a ‘pool’ of input samples. We develop two pool-based active learning criteria for linear regression. The first criterion allows us to obtain a closed-form solution so it is computationally very eff ..."

Supplementary Material: Near-optimal Adaptive Pool-based Active Learning with General Loss
Abstract: "... We will prove the theorem for the case when H contains probabilistic hypotheses. The proof can easily be transferred to the case where H is the labeling set by following the construction in (Cuong et al., 2013, sup.). Let H = {h1, h2, ..., hn} with n probabilistic hypotheses, and assume a uniform prio ..."
"... probabilistic hypotheses. Our aim is to trick the greedy algorithm π into selecting these k instances. Since the hypotheses are identical on these instances, the greedy algorithm learns nothing when receiving each label. ..."

Support Vector Machine Active Learning with Applications to Text Classification
In Journal of Machine Learning Research, 2001
Cited by 729 (5 self)
Abstract: "... Support vector machines have met with significant success in numerous real-world learning tasks. However, like most machine learning algorithms, they are generally applied using a randomly selected training set classified in advance. In many settings, we also have the option of using pool-based acti ..."

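The simplest pool-based query strategy studied in this line of SVM work, often called "simple margin", queries the pool point closest to the current decision boundary, i.e. with the smallest |w·x + b|. A sketch with a fixed linear model (helper name and data are illustrative):

```python
def simple_margin_query(pool, w, b):
    """Query the pool point with the smallest unsigned margin |w.x + b|
    under the current linear SVM (w, b): the point the model is least sure about."""
    def unsigned_margin(x):
        return abs(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return min(range(len(pool)), key=lambda i: unsigned_margin(pool[i]))
```

After each query the SVM is retrained on the enlarged labeled set and the margins are recomputed, so the queried region tracks the moving boundary.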
Active Learning Models and Noise
2007
Cited by 1 (0 self)
Abstract: "... I study active learning in general pool-based active learning models as well as noisy active learning algorithms, and then compare them for the class of linear separators under the uniform distribution. ..."

A bound on the label complexity of agnostic active learning
In Proc. of the 24th International Conference on Machine Learning, 2007
Cited by 97 (11 self)
Abstract: "... We study the label complexity of pool-based active learning in the agnostic PAC model. Specifically, we derive general bounds on the number of label requests made by the A² algorithm proposed by Balcan, Beygelzimer & Langford (Balcan et al., 2006). This represents the first nontrivial general-p ..."

Improving Importance Estimation in Pool-based Batch Active Learning for Approximate Linear Regression
Neural Networks, vol. 36, pp. 73–82, 2012
Abstract: "... Pool-based batch active learning is aimed at choosing training inputs from a ‘pool’ of test inputs so that the generalization error is minimized. P-ALICE (Pool-based Active Learning using Importance-weighted least-squares learning based on Conditional Expectation of the generalization error) is a st ..."

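The "importance-weighted least-squares" ingredient named in P-ALICE can be illustrated in one dimension: each squared residual is scaled by an importance weight (e.g. a ratio of test density to sampling density), which corrects for training inputs being drawn differently from the test pool. A hedged sketch of only that ingredient, not of P-ALICE itself (names and data are illustrative):

```python
def iw_least_squares_1d(xs, ys, weights):
    """Importance-weighted least squares for y ~ a*x + b.

    Minimizes sum_i weights[i] * (ys[i] - a*xs[i] - b)**2 in closed form
    via weighted means, covariance, and variance.
    """
    sw = sum(weights)
    mx = sum(w * x for w, x in zip(weights, xs)) / sw
    my = sum(w * y for w, y in zip(weights, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(weights, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(weights, xs))
    a = cov / var
    b = my - a * mx
    return a, b
```

With all weights equal this reduces to ordinary least squares; non-uniform weights shift the fit toward the regions the test pool emphasizes.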