Results 1 - 10 of 981
Sequential PAC Learning
In Proceedings of COLT-95, 1995
"... We consider the use of "online" stopping rules to reduce the number of training examples needed to PAC-learn. Rather than collect a large training sample that can be proved sufficient to eliminate all bad hypotheses a priori, the idea is instead to observe training examples one-at-a-time ..."
Cited by 14 (5 self)
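The "online stopping rule" idea in this entry — observe labeled examples one-at-a-time and stop as soon as the current hypothesis has survived long enough — can be illustrated with a minimal sketch. The example below is a hypothetical illustration for learning a threshold concept on [0, 1] with a run-length stopping rule, not the stopping rules analyzed in the paper; the function name `sequential_pac_threshold` and all parameter choices are invented for this sketch.

```python
import math
import random

def sequential_pac_threshold(sample, epsilon=0.1, delta=0.05, max_examples=10_000):
    """Sequentially learn a threshold concept on [0, 1] (labels: x >= theta).

    `sample()` returns one labeled example (x, y). Instead of fixing the
    sample size a priori, we stop online once the version space has survived
    enough consecutive examples without shrinking.
    """
    lo, hi = 0.0, 1.0       # current version-space interval for the threshold
    consistent_run = 0      # examples seen since the version space last shrank
    # Heuristic stopping rule: if the midpoint hypothesis had error > epsilon,
    # a random example would shrink the interval with probability > epsilon,
    # so ceil((1/epsilon) * ln(1/delta)) consecutive non-shrinking draws make
    # large error unlikely for this simple class.
    needed = math.ceil(math.log(1.0 / delta) / epsilon)
    for n in range(1, max_examples + 1):
        x, y = sample()
        width_before = hi - lo
        if y:                       # positive example: threshold is at most x
            hi = min(hi, x)
        else:                       # negative example: threshold is above x
            lo = max(lo, x)
        if hi - lo < width_before:
            consistent_run = 0      # version space shrank: restart the run
        else:
            consistent_run += 1
        if consistent_run >= needed:
            return (lo + hi) / 2.0, n   # hypothesis and examples consumed
    return (lo + hi) / 2.0, max_examples
```

Compared with a fixed-size sample chosen from a worst-case a-priori bound, this kind of sequential rule typically consumes far fewer examples when the target is easy for the observed distribution, which is the motivation the abstract describes.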
PAC Learning with Simple Examples
1996
"... We define a new PAC learning model. In this model, examples are drawn according to the universal distribution m(· | f) of Solomonoff-Levin, where f is the target concept. The consequence is that the simple examples of the target concept have a high probability to be provided to the learning algorithm ..."
Cited by 17 (3 self)
Practical PAC Learning
In Proceedings of IJCAI-95, 1995
"... We present new strategies for "probably approximately correct" (PAC) learning that use fewer training examples than previous approaches. The idea is to observe training examples one-at-a-time and decide "online" when to return a hypothesis, rather than collect a large fixed-size ..."
Cited by 5 (3 self)
Query, PACS and simple-PAC Learning
1998
"... We study a distribution-dependent form of PAC learning that uses probability distributions related to Kolmogorov complexity. We relate the PACS model, defined by Denis, D'Halluin and Gilleron in [3], with the standard simple-PAC model and give a general technique that subsumes the results ..."
PAC learning with positive examples
1998
"... Learning with positive examples only occurs very frequently in natural learning, but learning theories have always encountered difficulties with these situations. While learning with positive examples has been extensively studied in Gold's framework, there is no satisfactory generalization of the PAC learning model to the general case of positive-only examples. We make two proposals: the first assumes that the learner has available an oracle with "two buttons" which respectively provide positive and unlabeled examples; the second supposes that the class of possible ..."
PAC Learning of Interleaved Melodies
1995
"... A number of algebraic models of music that use the interleaving or shuffle operator have been suggested in the literature. This paper shows how interleaving expressions are conducive to PAC identification. PAC learning theory states that a "probably approximately correct" hypothesis that ..."
PAC Learning with Irrelevant Attributes
In Proceedings of the IEEE Symposium on Foundations of Computer Science, 1994
"... We consider the problem of learning in the presence of irrelevant attributes in Valiant's PAC model [V84]. In the PAC model, the goal of the learner is to produce an approximately correct hypothesis from random sample data. If the number of relevant attributes in the target function is small ..."
Cited by 21 (0 self)
PAC learning with nasty noise
In Algorithmic Learning Theory (ALT '99), 1999
"... We introduce a new model for learning in the presence of noise, which we call the Nasty Noise model. This model generalizes previously considered models of learning with noise. The learning process in this model, which is a variant of the PAC model, proceeds as follows: Suppose that the learner ..."
Cited by 20 (0 self)
PAC-learning Nondeterminate Clauses
In Proceedings of the Eleventh National Conference on Artificial Intelligence, 1994
"... Several practical inductive logic programming systems efficiently learn "determinate" clauses of constant depth. Recently it has been shown that while non-recursive constant-depth determinate clauses are PAC-learnable, most of the obvious syntactic generalizations of this language are ..."
Cited by 12 (5 self)
PAC Learning, Noise, and Geometry
"... This paper describes the probably approximately correct model of concept learning, paying special attention to the case where instances are points in Euclidean n-space. The problem of learning from noisy training data is also studied. Supported by NSF grant CCR-9108753. Email: sloan@eecs.uic.edu ..."
Cited by 1 (0 self)