Results 1–10 of 268
Learning Decision Lists
, 2001
"... This paper introduces a new representation for Boolean functions, called decision lists, and shows that they are efficiently learnable from examples. More precisely, this result is established for k-DL, the set of decision lists with conjunctive clauses of size k at each decision. Since k ..."
Cited by 427 (0 self)
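The decision-list representation described in this abstract can be sketched in a few lines. This is a minimal illustration under the standard definition (an ordered list of clause/bit pairs, where each clause is a conjunction of at most k literals and the output is the bit of the first satisfied clause); the names `Literal`, `Clause`, and `evaluate` are ours, not Rivest's.

```python
from typing import List, Tuple

Literal = Tuple[int, bool]            # (variable index, required truth value)
Clause = List[Literal]                # conjunction of <= k literals; [] means "true"
DecisionList = List[Tuple[Clause, bool]]

def evaluate(dl: DecisionList, x: List[bool]) -> bool:
    """Return the output bit of the first clause that x satisfies."""
    for clause, bit in dl:
        if all(x[i] == v for i, v in clause):
            return bit
    return False                      # default output if no clause fires

# Example 2-DL: "if x0 and not x1 then 1; else if x2 then 0; else 1"
dl = [([(0, True), (1, False)], True), ([(2, True)], False), ([], True)]
```

For instance, `evaluate(dl, [True, False, True])` fires the first clause, while `evaluate(dl, [False, False, True])` falls through to the second.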
A Syntactic Characterization of Bounded-Rank Decision Trees in Terms of Decision Lists
, 1997
"... We define syntactically a subclass of decision lists (tree-like decision lists) and we show its equivalence with the class of bounded-rank decision trees. As a byproduct, the main theorem provides an alternate and easier proof of Blum's containment theorem [1]. Furthermore we give an ..."
Cited by 1 (0 self)
an inversion procedure for Blum's derivation of a decision list from a bounded-rank decision tree. Decision lists were introduced by Rivest [3] as a representation of Boolean functions. He showed that k-decision lists, i.e. decision lists in which any term has at most k literals
Computational Sample Complexity and Attribute-Efficient Learning
, 2000
"... Two fundamental measures of the efficiency of a learning algorithm are its running time and the number of examples it requires (its sample complexity). In this paper we demonstrate that even for simple concept classes, an inherent tradeoff can exist between running time and sample complexity. We ..."
Cited by 18 (2 self)
class of k-decision lists which exhibits a similar but stronger gap in sample complexity. These results strengthen the results of Decatur, Goldreich and Ron [9] on distribution-free computational sample complexity and come within a logarithmic factor of the largest possible gap for concept classes
Learning Monotone Term Decision Lists
, 1997
"... We study the learnability of monotone term decision lists in the exact model of equivalence and membership queries. We show that, for any constant k ≥ 0, k-term monotone decision lists are exactly and properly learnable with n^O(k) membership queries in O(n^(k^3)) time. We also show n^Ω( ..."
Cited by 3 (0 self)
Decision lists were introduced by Rivest [Riv87], who showed that the class of k-decision lists is properly PAC
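Several of the entries above refer to Rivest's result that k-decision lists are properly PAC-learnable. A sketch of the greedy consistency procedure behind that result, assuming a sample consistent with some target k-DL: repeatedly search all conjunctions of at most k literals for one whose satisfying (still-unexplained) examples all share one label, emit that clause with that label, and discard the examples it covers. The helper names (`find_clause`, `learn_k_dl`) are illustrative, not from the paper.

```python
from itertools import combinations, product

def satisfies(clause, x):
    """True iff assignment x makes every literal in the conjunction true."""
    return all(x[i] == v for i, v in clause)

def find_clause(remaining, n, k):
    """Find a <= k-literal conjunction whose satisfying examples share a label."""
    for size in range(k + 1):
        for idxs in combinations(range(n), size):
            for vals in product([False, True], repeat=size):
                clause = list(zip(idxs, vals))
                labels = {lbl for x, lbl in remaining if satisfies(clause, x)}
                if len(labels) == 1:          # covers something, unambiguously
                    return clause, labels.pop()
    return None

def learn_k_dl(examples, n, k):
    """examples: list of (x, label) pairs, x a tuple of n Booleans."""
    remaining = list(examples)
    dl = []
    while remaining:
        hit = find_clause(remaining, n, k)
        if hit is None:
            raise ValueError("sample is not consistent with any k-decision list")
        clause, bit = hit
        dl.append((clause, bit))
        remaining = [(x, l) for x, l in remaining if not satisfies(clause, x)]
    return dl
```

On the four examples of the target f(x) = x0 over two variables, `learn_k_dl(examples, 2, 1)` first emits the clause (x0 = False) → 0 and then the default clause → 1.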
On the Computational Power of Boolean Decision Lists
 In Proceedings of the 19th Annual Symposium on Theoretical Aspects of Computer Science (STACS)
, 2002
"... We study the computational power of decision lists over AND-functions versus threshold circuits. AND-decision lists are a natural generalization of formulas in disjunctive or conjunctive normal form. We show that, in contrast to CNF- and DNF-formulas, there are functions with small AND-decision ..."
Cited by 6 (0 self)
is that for all k ≥ 1 the complexity class defined by polynomial-length AC^k decision lists lies strictly between AC^(k+1) and AC^(k+2).
Development of Block-Stacking Teleo-Reactive Programs Using Genetic Programming
"... This paper describes the development of Teleo-Reactive (TR) block-stacking programs using genetic programming. TR programs are a class of programs that are also a specific form of k-decision lists, and are useful for programming tasks for autonomous agents. Using only the predicates of On, a test ..."
PAC Learning from Positive Statistical Queries
 Proc. 9th International Conference on Algorithmic Learning Theory (ALT ’98)
, 1998
"... Learning from positive examples occurs very frequently in natural learning. The PAC learning model of Valiant takes many features of natural learning into account, but in most cases it fails to describe this kind of learning. We show that in order to make learning from positive data possible, ..."
Cited by 55 (3 self)
93]) and the constant-partition classification noise model ([Dec97]) are studied. We show that k-DNF and k-decision lists are learnable in both models, i.e. with far less information than is assumed in previously used algorithms. The PAC learning model of Valiant ([Val84]) has become
Learning with Restricted Focus of Attention
, 1997
"... We consider learning tasks in which the learner faces restrictions on the amount of information he can extract from each example he encounters. We introduce a formal framework for the analysis of such scenarios. We call it RFA (Restricted Focus of Attention) learning. While being a natural refine ..."
Cited by 46 (2 self)
RFA learnability of richer classes of Boolean functions (such as k-decision lists) with respect to a given distribution, and the efficient (n−1)-RFA learnability (for fixed n), under product distributions, of classes of subsets of R^n which are defined by mild surfaces. ...
Attribute-Efficient Learning and Weight-Degree Tradeoffs for Polynomial Threshold Functions
"... We study the challenging problem of learning decision lists attribute-efficiently, giving both positive and negative results. Our main positive result is a new tradeoff between the running time and mistake bound for learning length-k decision lists over n Boolean variables. When the allowed running ..."
Cited by 7 (4 self)