Results 1–10 of 8,275
Stochastic Local Search in k-term DNF Learning
Proc. 20th International Conf. on Machine Learning, 2003
"... A novel native stochastic local search algorithm for solving k-term DNF problems is presented. It is evaluated on hard k-term DNF problems that lie on the phase transition and compared to the performance of GSAT- and WalkSAT-type algorithms on SAT encodings of k-term DNF problems. We also evaluate st ..."
Cited by 12 (3 self)
On k-term DNF with the largest number of prime implicants
Electronic Colloquium on Computational Complexity, Report No. 23, 2005
"... It is known that a k-term DNF can have at most 2^k − 1 prime implicants and this bound is sharp. We determine all k-term DNF having the maximal number of prime implicants. It is shown that a DNF is maximal if and only if it corresponds to a non-repeating decision tree with literals assigned to the l ..."
Cited by 3 (0 self)
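The 2^k − 1 bound in the snippet above is easy to check by brute force on a small instance. The following sketch is illustrative only (the example formula and all helper names are assumptions, not from the paper): it enumerates every term over three variables, keeps those that are implicants of a 2-term DNF, and filters for primality by testing whether any literal can be dropped.

```python
from itertools import product

def evaluates(term, assignment):
    """A term (partial assignment: var index -> required bool) is satisfied
    by a full assignment iff every required literal matches."""
    return all(assignment[v] == b for v, b in term.items())

def f(a):
    """Example target: (x0 AND x1) OR (NOT x0 AND x2), a 2-term DNF, so the
    bound predicts at most 2^2 - 1 = 3 prime implicants."""
    return (a[0] and a[1]) or ((not a[0]) and a[2])

n = 3
assignments = list(product([False, True], repeat=n))

def is_implicant(term):
    """Term implies f: every assignment satisfying the term satisfies f."""
    return all(f(a) for a in assignments if evaluates(term, a))

def sub_terms(term):
    """Proper sub-terms obtained by dropping one literal."""
    for v in term:
        yield {w: b for w, b in term.items() if w != v}

# All non-empty terms over n variables (each variable absent, negative, or positive).
all_terms = [
    {v: b for v, b in enumerate(vals) if b is not None}
    for vals in product([None, False, True], repeat=n)
    if any(b is not None for b in vals)
]

# Prime implicant: an implicant none of whose proper sub-terms is an implicant.
primes = [t for t in all_terms
          if is_implicant(t) and not any(is_implicant(s) for s in sub_terms(t))]
print(len(primes))  # -> 3, matching the 2^k - 1 bound for k = 2
```

For this formula the three prime implicants are x0·x1, ¬x0·x2, and the consensus term x1·x2, so the bound is met with equality.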
Learning nearly monotone k-term DNF
1996
"... This note studies the learnability of the class k-term DNF with a bounded number of negations per term. We study the case of learning with membership queries alone, and give tight upper and lower bounds on the number of negations that makes the learning task feasible. We also prove a negative result ..."
Cited by 6 (1 self)
Fast Learning of k-Term DNF Formulas with Queries
In Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing, 1992
"... This paper presents an algorithm that uses equivalence and membership queries to learn the class of k-term DNF formulas in time O(n · k^O(k)), where n is the number of input variables. This algorithm allows one to learn DNF of O(log n / log log n) terms in polynomial time, and is the fi ..."
Cited by 51 (2 self)
Phase Transitions and Stochastic Local Search in k-term DNF Learning
Proc. ECML 2002, 2002
"... In the past decade, there has been a lot of interest in phase transitions within artificial intelligence, and more recently, in machine learning and inductive logic programming. We investigate phase transitions in learning k-term DNF boolean formulae, a practically relevant class of concep ..."
Cited by 2 (1 self)
Phase Transitions and Stochastic Local Search in k-Term DNF Learning
Proc. ECML 2002, 2002
"... In the past decade, there has been a lot of interest in phase transitions within artificial intelligence, and more recently, in machine learning and inductive logic programming. We investigate phase transitions in learning k-term DNF boolean formulae, a practically relevant class of concepts. We do ..."
A learning algorithm for monotone k-term DNF
"... Recently, Valiant introduced a computational model of learning, and gave a precise definition of learnability. Since then, much effort has been devoted to characterizing learnable classes of concepts in this model. In this paper we give, based on the uniform distribution model, a polynomial-time algo ..."
... algorithm that learns k-term MDNF, the class of monotone disjunctive normal formulae with at most k terms. This algorithm uses only positive examples and outputs a hypothesis with one-sided error. This result should be contrasted with the fact that the same class is not learnable in the distribution-free ...
Learning k-term DNF Formulas with an Incomplete Membership Oracle
In Proc. 5th Annu. Workshop on Comput. Learning Theory, 1992
"... We consider the problem of learning k-term DNF formulas using equivalence queries and incomplete membership queries as defined by Angluin and Slonim. We demonstrate that this model can be applied to non-monotone classes. Namely, we describe a polynomial-time algorithm that exactly identifies a k-t ..."
Cited by 11 (1 self)
ILP through Propositionalization and Stochastic k-term DNF Learning
2006
ILP has been successfully applied to a variety of tasks. Nevertheless, ILP systems have huge time and storage requirements, owing to a large search space of possible clauses. Therefore, clever search strategies are needed. One promising family of search strategies is that of stochastic local search methods. These methods have been successfully applied to propositional tasks, such as satisfiability, substantially improving their efficiency. Following the success of such methods, a promising research direction is to employ stochastic local search within ILP, to accelerate the runtime of the learning process. An investigation in that direction was recently performed within ILP [Železný et al., 2004].

Stochastic local search algorithms for propositional satisfiability benefit from the ability to quickly test whether a truth assignment satisfies a formula. As a result, many possible solutions (assignments) can be tested and scored in a short time. In contrast, the analogous test within ILP, testing whether a clause covers an example, takes much longer, so that far fewer possible solutions can be tested in the same time. Therefore, motivated by both the success and the limitations of the previous work, we also apply stochastic local search to ILP, but in a different manner. Instead of directly applying stochastic local search to the space of first-order Horn clauses, we use a propositionalization approach that transforms the ILP task into an attribute-value learning task. In this alternative search space, we can take advantage of fast testing as in propositional satisfiability. Our primary aim in this paper is to reduce ILP runtime.

The standard greedy covering algorithm employed by most ILP systems is another shortcoming of typical ILP search. There is no guarantee that greedy covering will yield the globally optimal hypothesis; consequently, greedy covering often gives rise to problems such as unnecessarily long hypotheses with too ...
Cited by 3 (0 self)
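The fast-testing argument in the abstract above can be made concrete. As a hypothetical sketch (the representation and all names below are assumptions, not taken from the paper), scoring a candidate k-term DNF hypothesis against propositional examples is a cheap linear scan, which is what lets stochastic local search evaluate many candidate hypotheses per second in the propositionalized space.

```python
# A term is a partial assignment: variable index -> required boolean value.
# A hypothesis is a list of such terms (a k-term DNF).

def term_covers(term, example):
    """A term covers an example iff every literal in the term is satisfied."""
    return all(example[var] == val for var, val in term.items())

def dnf_covers(terms, example):
    """A DNF covers an example iff at least one of its terms covers it."""
    return any(term_covers(t, example) for t in terms)

def score(terms, positives, negatives):
    """Training accuracy: covered positives plus rejected negatives."""
    hits = sum(dnf_covers(terms, e) for e in positives)
    hits += sum(not dnf_covers(terms, e) for e in negatives)
    return hits / (len(positives) + len(negatives))

# 2-term DNF over x0..x2: (x0 AND NOT x1) OR x2
hypothesis = [{0: True, 1: False}, {2: True}]
positives = [(True, False, False), (False, True, True)]
negatives = [(False, True, False), (True, True, False)]
print(score(hypothesis, positives, negatives))  # -> 1.0
```

A local-search move would flip, add, or drop one literal in one term and re-run `score`; each evaluation costs only O(examples × k × literals), in contrast to the expensive clause-coverage tests of first-order ILP search.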
P-sufficient statistics for PAC learning k-term-DNF formulas through enumeration
"... Working in the framework of PAC-learning theory, we present special statistics for accomplishing in polynomial time proper learning of DNF boolean formulas having a fixed number of monomials. Our statistics turn out to be near sufficient for a large family of distribution laws that we call butter ..."
Cited by 1 (1 self)