Results 1 – 10 of 9,343
The Monotone Theory for the PAC Model
Inf. Comput., 1996
"... We show that a DNF formula that has a CNF representation that contains at least one "1/poly-heavy" clause with respect to a distribution D is weakly learnable under this distribution. So DNF that are not weakly learnable under the distribution D do not have any "1/poly-heavy" clauses in any of their CNF representations. We then show that CDNF, a DNF f that has a CNF representation that contains poly(n) clauses that approximates f according to a distribution D, is weakly + ε-PAC learnable with membership queries under the distribution D."
Cited by 2 (0 self)
.1 PAC Model
"... X, a class of concepts C containing functions mapping X to {0, 1}, and a target concept c ∈ C. The learner's goal is to identify the target concept c. The model also includes a probability distribution D on X. The learner has access to a random oracle EX(c; D), which at the learner's reque ..."
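The example oracle EX(c; D) described in this snippet can be sketched directly: each call draws an instance from D and returns it labeled by the target concept. The helper names below (`make_EX`, `uniform_D`) are illustrative, not from any of the listed papers; a minimal sketch over the Boolean cube:

```python
import random

def make_EX(c, D, rng):
    """Random example oracle EX(c; D): each call draws x ~ D and
    returns the labeled example (x, c(x)), as in the PAC model."""
    def EX():
        x = D(rng)          # sample an instance from the distribution D
        return x, c(x)      # label it with the target concept c
    return EX

# Toy instance space X = {0,1}^3 with D uniform over X, and a
# hypothetical target concept c(x) = x[0] AND x[1].
def uniform_D(rng):
    return tuple(rng.randint(0, 1) for _ in range(3))

c = lambda x: x[0] & x[1]
EX = make_EX(c, uniform_D, random.Random(0))
sample = [EX() for _ in range(5)]   # five i.i.d. labeled examples
```

A PAC learner would consume such samples and output a hypothesis that, with probability at least 1 − δ, errs on at most an ε fraction of D.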
PAC model-free reinforcement learning
In: ICML '06: Proceedings of the 23rd International Conference on Machine Learning, 2006
"... For a Markov Decision Process with finite state (size S) and action spaces (size A per state), we propose a new algorithm: Delayed Q-Learning. We prove it is PAC, achieving near-optimal performance except for Õ(SA) timesteps using O(SA) space, improving on the Õ(S²A) bounds of the best previous algorithms ..."
Cited by 66 (13 self)
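The core mechanism behind Delayed Q-Learning can be illustrated with a much-simplified sketch: Q-values start optimistically high, and each (s, a) pair is updated only after m delayed samples have accumulated, and only if the update would lower Q(s, a) by at least ε₁. This omits the real algorithm's LEARN flags and failure conditions (Strehl et al., ICML '06); all names and the toy `env_step` MDP below are hypothetical:

```python
import random
from collections import defaultdict

def delayed_q_learning(env_step, A, gamma=0.9, m=5, eps1=0.01,
                       episodes=50, horizon=30, rng=random.Random(0)):
    """Simplified sketch of Delayed Q-Learning: optimistic initialization
    plus batched, monotonically decreasing updates after m samples."""
    q_max = 1.0 / (1.0 - gamma)                 # optimistic initial value
    Q = defaultdict(lambda: q_max)
    buf = defaultdict(list)                     # delayed sample buffers
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            a = max(range(A), key=lambda a_: Q[(s, a_)])   # greedy in Q
            s2, r = env_step(s, a, rng)
            buf[(s, a)].append(r + gamma * max(Q[(s2, b)] for b in range(A)))
            if len(buf[(s, a)]) == m:           # attempted update
                target = sum(buf[(s, a)]) / m
                if Q[(s, a)] - target >= eps1:  # only apply real decreases
                    Q[(s, a)] = target + eps1
                buf[(s, a)].clear()
            s = s2
    return Q

# Hypothetical one-state MDP: action 1 always yields reward 1,
# action 0 yields 0; the state never changes.
def env_step(s, a, rng):
    return s, (1.0 if a == 1 else 0.0)

Q = delayed_q_learning(env_step, A=2)
```

Because updates only ever decrease Q-values by a quantified amount, the number of "successful" updates is bounded, which is what drives the paper's Õ(SA) sample-complexity analysis.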
Notes on Learning with Irrelevant Attributes in the PAC Model
"... In these notes, we sketch some of our work on learning with irrelevant attributes in Valiant's PAC model [V84]. In the PAC model, the goal of the learner is to produce an approximately cor ..."
An Augmented PAC Model for Semi-Supervised Learning
"... The standard PAC-learning model has proven to be a useful theoretical framework for thinking about the problem of supervised learning. However, it does not tend to capture the assumptions underlying many semi-supervised learning methods. In this chapter we describe an augmented version of the PAC mo ..."
Epidemic Spreading in Scale-Free Networks
, 2000
"... The Internet, as well as many other networks, has a very complex connectivity recently modeled by the class of scale-free networks. This feature, which appears to be very efficient for a communications network, favors at the same time the spreading of computer viruses. We analyze real data from c ..."
Cited by 575 (15 self)
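The spreading dynamics this abstract studies are usually modeled as an SIS (susceptible-infected-susceptible) process on a preferential-attachment network. A minimal sketch, assuming a toy Barabási–Albert-style graph builder and synchronous SIS updates (all function names are illustrative, not from the paper):

```python
import random

def ba_graph(n, m, rng):
    """Toy preferential-attachment graph: each new node links to m
    existing nodes chosen roughly in proportion to current degree."""
    targets = list(range(m))      # seed nodes for the first arrival
    repeated = []                 # nodes repeated once per incident edge
    edges = set()
    for v in range(m, n):
        for u in set(targets):
            edges.add((min(u, v), max(u, v)))
            repeated.append(u)
        repeated.extend([v] * m)
        targets = rng.sample(repeated, m)   # degree-biased choice
    return edges

def sis_step(adj, infected, beta, mu, rng):
    """One synchronous SIS step: each infected node recovers with
    probability mu; each susceptible neighbor of an infected node
    becomes infected with probability beta."""
    nxt = set()
    for v in infected:
        if rng.random() >= mu:            # v stays infected
            nxt.add(v)
        for u in adj[v]:
            if u not in infected and rng.random() < beta:
                nxt.add(u)
    return nxt

# Build a 200-node scale-free-ish graph and run a short outbreak.
n = 200
rng = random.Random(0)
adj = {v: set() for v in range(n)}
for u, v in ba_graph(n, 2, rng):
    adj[u].add(v)
    adj[v].add(u)
infected = {0}
for _ in range(20):
    infected = sis_step(adj, infected, beta=0.3, mu=0.1, rng=rng)
```

The heavy-tailed degree distribution is what makes such networks fragile: once a hub is infected, it exposes a large fraction of the network in a single step, which is why the paper finds spreading so effective on scale-free topologies.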
An Augmented PAC Model for Semi-Supervised Learning
, 2005
"... that these numbers depend on. We provide examples of sample-complexity bounds both for uniform-convergence and ε-cover based algorithms, as well as several algorithmic results. 21.1 Introduction As we have already seen in the previous chapters, there has been growing interest in using unlabeled da ..."
"... discriminative learning models do not really capture how and why unlabeled data can be of help. In particular, in the PAC model there is purposefully a complete disconnect between the data distribution D and the target function f being learned [Valiant, 1984, Blumer et al., 1989, Kearns and Vazirani, 1994] ..."
Cited by 9 (0 self)
Finding Relevant Variables in the PAC Model with Membership Queries
Lecture Notes in Artificial Intelligence, 1720:313–322, 1999
"... A new research frontier in AI and data mining seeks to develop methods to automatically discover relevant variables among many irrelevant ones. In this paper, we present four algorithms that output such crucial variables in the PAC model with membership queries. The first algorithm executes the ta ..."