Results 1–10 of 141,265
Understanding Probabilistic Classifiers, 2001. Cited by 16 (1 self).
"... Probabilistic classifiers are developed by assuming generative models which are product distributions over the original attribute space (as in naive Bayes) or more involved spaces (as in general Bayesian networks). While this paradigm has been shown experimentally successful on real-world appli ..."
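The generative view this abstract describes, a per-class product distribution over attributes, is the naive Bayes model. A minimal sketch over binary attributes (the toy data and Laplace smoothing choice here are illustrative assumptions, not taken from the paper):

```python
import math
from collections import defaultdict

# Naive Bayes as a product distribution: the generative model assumes
# P(x | c) = prod_i P(x_i | c), so the joint factorizes per attribute.

def train(examples):
    """examples: list of (feature_tuple, label). Returns (priors, cond)."""
    class_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))  # (class, i) -> value -> count
    for x, c in examples:
        class_counts[c] += 1
        for i, v in enumerate(x):
            feat_counts[(c, i)][v] += 1
    n = len(examples)
    priors = {c: k / n for c, k in class_counts.items()}

    def cond(c, i, v):
        # Laplace smoothing over the two binary attribute values
        return (feat_counts[(c, i)][v] + 1) / (class_counts[c] + 2)

    return priors, cond

def predict(priors, cond, x):
    def log_joint(c):
        return math.log(priors[c]) + sum(
            math.log(cond(c, i, v)) for i, v in enumerate(x))
    return max(priors, key=log_joint)

data = [((1, 1), "pos"), ((1, 0), "pos"), ((0, 0), "neg"), ((0, 1), "neg")]
priors, cond = train(data)
print(predict(priors, cond, (1, 1)))  # -> pos
```

Replacing the per-attribute factors with factors over larger variable subsets gives the "more involved spaces" of general Bayesian networks mentioned in the snippet.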
Probabilistic Classifiers and the Concepts they Recognize. In Proceedings of the Twentieth International Conference on Machine Learning, Menlo Park, 2003. Cited by 8 (0 self).
"... We investigate algebraic, logical, and geometric properties of concepts recognized by various classes of probabilistic classifiers. For this we introduce a natural hierarchy of probabilistic classifiers, the lowest level of which comprises the naive Bayesian classifiers. We show that the expre ..."
Sparse probabilistic classifiers. In Ghahramani [2007].
"... The scores returned by support vector machines are often used as confidence measures in the classification of new examples. However, there is no theoretical argument sustaining this practice. Thus, when classification uncertainty has to be assessed, it is safer to resort to classifiers estimating ..."
Committee-Based Sampling For Training Probabilistic Classifiers. In Proceedings of the Twelfth International Conference on Machine Learning, 1995. Cited by 144 (3 self).
"... In many real-world learning tasks, it is expensive to acquire a sufficient number of labeled examples for training. This paper proposes a general method for efficiently training probabilistic classifiers, by selecting for training only the more informative examples in a stream of unlabeled examples. ..."
Probabilistic Classifiers for Tracking Point of View. In Working. Cited by 4 (1 self).
"... This paper describes work in developing probabilistic classifiers for a discourse segmentation problem that involves segmentation, reference resolution, and belief. Specifically, the problem is to segment a text into blocks such that all subjective sentences in a block are from the point of vie ..."
Collocational Properties in Probabilistic Classifiers for Discourse Categorization.
"... Properties can be mapped to features in a machine learning algorithm in different ways, potentially yielding different results. In previous work, we experimented with various approaches to organizing collocational properties into features in a probabilistic classifier. It was found that one typ ..."
Committee-Based Sample Selection For Probabilistic Classifiers. Journal of Artificial Intelligence Research, 1999. Cited by 65 (0 self).
"... In many real-world learning tasks it is expensive to acquire a sufficient number of labeled examples for training. This paper investigates methods for reducing annotation cost by sample selection. In this approach, during training the learning program examines many unlabeled examples and selects for labeling only those that are most informative at each stage. This avoids redundantly labeling examples that contribute little new information. Our work follows on previous research on Query By Committee, and extends the committee-based paradigm to the context of probabilistic classification. We describe a ..."
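The selection criterion in committee-based sampling is disagreement among committee members, commonly measured by vote entropy. A hypothetical sketch (the fixed threshold-rule committee and pool here are illustrative stand-ins for classifiers sampled from the posterior, which is what the Query By Committee line of work actually prescribes):

```python
import math

# Committee-based selective sampling: score each unlabeled example by how
# much a committee of classifiers disagrees on it (vote entropy), and pick
# only the highest-disagreement examples for labeling.

def vote_entropy(votes):
    """Entropy of the committee's vote distribution over predicted labels."""
    n = len(votes)
    h = 0.0
    for label in set(votes):
        p = votes.count(label) / n
        h -= p * math.log2(p)
    return h

def select_informative(committee, unlabeled, budget):
    """Rank unlabeled examples by committee disagreement; keep the top `budget`."""
    scored = [(vote_entropy([m(x) for m in committee]), x) for x in unlabeled]
    scored.sort(key=lambda t: -t[0])
    return [x for _, x in scored[:budget]]

# Three threshold classifiers that disagree most near the decision boundary.
committee = [lambda x: x > 0.3, lambda x: x > 0.5, lambda x: x > 0.7]
pool = [0.1, 0.4, 0.6, 0.9]
print(select_informative(committee, pool, 2))  # -> [0.4, 0.6]
```

Examples where the committee is unanimous (0.1 and 0.9 above) score zero entropy and are skipped, which is exactly how redundant labeling is avoided.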
Improving the Accuracy of Least-Squares Probabilistic Classifiers.
"... The least-squares probabilistic classifier (LSPC) is a computationally efficient alternative to kernel logistic regression. However, to assure its learned probabilities to be non-negative, LSPC involves a post-processing step of rounding up negative parameters to zero, which can unexpectedly influen ..."
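The post-processing step this abstract refers to can be illustrated in isolation: a least-squares fit of class posteriors can produce negative estimates, so they are clipped to zero and renormalized. This is a hypothetical illustration of the rounding step only, not the LSPC training procedure:

```python
# Clip-and-renormalize: least-squares probability estimates are not
# constrained to be non-negative, so negative values are rounded up to
# zero and the remainder rescaled to sum to one.

def clip_and_normalize(scores):
    """scores: raw per-class posterior estimates (may contain negatives)."""
    clipped = [max(s, 0.0) for s in scores]
    total = sum(clipped)
    if total == 0.0:
        # Degenerate case: all estimates were non-positive; fall back to uniform.
        return [1.0 / len(scores)] * len(scores)
    return [s / total for s in clipped]

print(clip_and_normalize([0.7, -0.1, 0.4]))
```

As the abstract notes, this hard rounding can distort the estimates, which is the issue the paper sets out to address.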
Learning and inference in probabilistic classifier chains with beam search. In Proceedings of the European, 2012. Cited by 7 (1 self).
"... Multi-label learning is an extension of binary classification that is both challenging and practically important. Recently, a method for multi-label learning called probabilistic classifier chains (PCCs) was proposed with numerous appealing properties, such as conceptual simplicity, flexibi ..."
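In a probabilistic classifier chain, labels are predicted in sequence, each conditioned on the labels before it, and beam search keeps only the B most probable partial label vectors instead of enumerating all 2^L combinations. A minimal sketch with a hypothetical two-label toy chain standing in for trained per-label models:

```python
# Beam-search inference over a classifier chain: at step j the chain model
# supplies P(y_j = 1 | x, y_1..y_{j-1}); the beam retains the `beam_width`
# highest-probability partial label vectors.

def beam_search(cond_prob, num_labels, beam_width):
    """cond_prob(j, prefix) -> P(y_j = 1 | previous labels `prefix`).
    Returns the most probable full label vector found by the beam."""
    beam = [((), 1.0)]  # (partial label tuple, joint probability)
    for j in range(num_labels):
        candidates = []
        for prefix, p in beam:
            p1 = cond_prob(j, prefix)
            candidates.append((prefix + (1,), p * p1))
            candidates.append((prefix + (0,), p * (1 - p1)))
        candidates.sort(key=lambda t: -t[1])
        beam = candidates[:beam_width]  # prune to the best partial vectors
    return max(beam, key=lambda t: t[1])[0]

# Toy chain: label 1 is likely only when label 0 was predicted positive.
def toy_chain(j, prefix):
    if j == 0:
        return 0.9
    return 0.8 if prefix[0] == 1 else 0.2

print(beam_search(toy_chain, 2, beam_width=2))  # -> (1, 1)
```

With beam_width = 2^L this reduces to exact maximum-a-posteriori inference over the chain; smaller beams trade exactness for tractable cost, which is the trade-off the paper studies.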
A Scalable Probabilistic Classifier for Language Modeling.
"... We present a novel probabilistic classifier, which scales well to problems that involve a large number of classes and require training on large datasets. A prominent example of such a problem is language modeling. Our classifier is based on the assumption that each feature is associated with a predi ..."