Results 1–10 of 22
Separate-and-conquer rule learning
Artificial Intelligence Review, 1999
Cited by 164 (29 self)
Abstract:
This paper is a survey of inductive rule learning algorithms that use a separate-and-conquer strategy. This strategy can be traced back to the AQ learning system and still enjoys popularity, as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of algorithms into a single framework and analyze them along three different dimensions, namely their search, language, and overfitting avoidance biases.
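The separate-and-conquer strategy the survey covers can be sketched as a covering loop: learn one rule, remove ("separate") the positives it covers, and repeat ("conquer") on the rest. The `threshold_rule` learner below is a toy stand-in for illustration only, not any algorithm from the survey:

```python
def separate_and_conquer(positives, negatives, learn_rule):
    """Covering loop: repeatedly learn a rule on the remaining positives,
    add it to the theory, and drop the positives that rule covers."""
    theory = []
    remaining = list(positives)
    while remaining:
        rule = learn_rule(remaining, negatives)
        covered = [ex for ex in remaining if rule(ex)]
        if not covered:  # no progress: stop rather than loop forever
            break
        theory.append(rule)
        remaining = [ex for ex in remaining if not rule(ex)]
    return theory

def threshold_rule(pos, neg):
    """Toy rule learner over numbers: the smallest threshold rule
    that excludes every negative example."""
    t = max(neg) + 1 if neg else min(pos)
    return lambda x: x >= t
```

An example is then classified positive if any learned rule fires, e.g. `any(r(x) for r in theory)`.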
HYDRA: A Noise-Tolerant Relational Concept Learning Algorithm
In Proceedings of the 8th International Workshop on Machine Learning, 1993
Cited by 70 (5 self)
Abstract:
Many learning algorithms form concept descriptions composed of clauses, each of which covers some proportion of the positive training data and a small to zero proportion of the negative training data. This paper presents a method using likelihood ratios attached to clauses to classify test examples. One concept description is learned for each class. Each concept description competes to classify the test example using the likelihood ratios assigned to clauses of that concept description. By testing on several artificial and "real world" domains, we demonstrate that attaching weights and allowing concept descriptions to compete to classify examples reduces an algorithm's susceptibility to noise.
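The competition among concept descriptions can be sketched as follows; the likelihood-ratio estimate is a plausible reconstruction (covered-positive proportion over covered-negative proportion, smoothed), not necessarily HYDRA's exact formula:

```python
def likelihood_ratio(pos_covered, pos_total, neg_covered, neg_total, eps=0.5):
    """Reliability of a clause: proportion of the class's positives it covers
    over the proportion of negatives it covers (eps avoids division by zero).
    An illustrative estimate, not HYDRA's published one."""
    return ((pos_covered + eps) / (pos_total + eps)) / \
           ((neg_covered + eps) / (neg_total + eps))

def classify(example, descriptions):
    """descriptions: {class: [(clause_predicate, lr), ...]}.
    Each concept description competes: the class owning the matching
    clause with the highest likelihood ratio wins."""
    best_class, best_lr = None, 0.0
    for cls, clauses in descriptions.items():
        for clause, lr in clauses:
            if clause(example) and lr > best_lr:
                best_class, best_lr = cls, lr
    return best_class
```

A noisy clause that covers many negatives gets a low ratio and so rarely decides a classification, which is how weighting reduces susceptibility to noise.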
Macro and micro perspectives of multistrategy learning
Center for Artificial Intelligence, George Mason University, Harpers Ferry, WV
Cited by 19 (6 self)
Multi-Strategy Learning and Theory Revision, 1993
Cited by 17 (4 self)
Abstract:
This paper presents the system WHY, which learns and updates a diagnostic knowledge base using domain knowledge and a set of examples. The a priori knowledge consists of a causal model of the domain, stating the relationships among basic phenomena, and a body of phenomenological theory, describing the links between abstract concepts and their possible manifestations in the world. The phenomenological knowledge is used deductively, the causal model is used abductively, and the examples are used inductively. The problems of imperfection and intractability of the theory are handled by allowing the system to make assumptions during its reasoning. In this way, robust knowledge can be learned with limited complexity and a limited number of examples. The system works in a first-order logic environment and has been applied in a real domain.
1. Introduction. Several authors have advocated the necessity of using deep models of the structure and behaviour of the entities involved in a given doma...
A Rule-Based Similarity Measure, 1993
Cited by 13 (2 self)
Abstract:
An induction-based method for retrieving similar cases and/or easily adaptable cases is presented as a three-step process: first, a rule set is learned from a data set; second, a reformulation of the problem domain is derived from this rule set; third, a surface similarity with respect to the reformulated problem appears to be a structural similarity with respect to the initial representation of the domain. This method achieves some integration between machine learning and case-based reasoning: it uses both compiled knowledge (through the similarity measure and the rule set it is derived from) and instantiated knowledge (through the cases).
1 Introduction. In Case-Based Reasoning (CBR), the first step is retrieving cases similar to the current one from the case base. The success of the next steps, e.g. reusing the retrieved cases to achieve the current goal and retaining from this experience, heavily depends on the quality of the retrieval phase [1]. On the other hand, the retrievi...
A Constraint-Based Induction Algorithm in FOL, 1994
Cited by 3 (3 self)
Abstract:
We present a bottom-up generalization algorithm which builds the maximally general terms covering a positive example and rejecting negative examples in first-order logic (FOL), i.e., in terms of Version Spaces, the set G. This algorithm is based on rewriting negative examples as constraints upon the generalization of the positive example at hand. The constraint space is partially ordered, inducing a partial order on negative examples; the near-misses as defined by Winston can then be formalized in FOL as negative examples minimal with respect to this partial order. As expected, only near-misses are necessary to build the set G. Moreover, constraints can be used directly to classify further examples.
1 Introduction. Recently, the number of empirical induction algorithms in first-order logic (FOL) has rapidly increased; to cite but a few, see FOIL (Quinlan 90, Quinlan 93), ML-Smart (Bergadano et al. 88, Botta & Giordana 93), Golem (Muggleton & Feng 90), KBG (Bisson 90, Bisson 92)... We focus on dis...
Measuring Quality of Concept Descriptions, 1988
Cited by 2 (2 self)
Abstract:
An important aspect of any learning method is an evaluation of the learned knowledge, in particular, an evaluation of the plausibility and usefulness of concept descriptions that are being created. This paper presents a new, general method for evaluating concept descriptions.
Learning Fuzzy Concept Definition
Proc. 2nd IEEE Int. Conf. on Fuzzy Systems, IEEE Press, San Francisco, CA, 1993
Learning Relations: an Evaluation of Search Strategies
Fundamenta Informaticae, 1993
Cited by 1 (1 self)
Abstract:
Inducing concept descriptions in first-order logic is inherently a complex task; heuristics are therefore needed to keep the problem at a manageable size. In this paper we explore the effect of alternative search strategies, including the use of information gain and of a priori knowledge, on the quality of the acquired relations, understood as the ability to reconstruct the rule used to generate the examples. To this aim, an artificial domain has been created in which the experimental conditions can be kept under control, the "solution" of the learning problem is known, and a perfect theory is available. Another investigated aspect is the impact of more complex description languages, such as, for instance, those including numerical quantifiers. The results show that the information gain criterion is too greedy to be useful when the concepts have a complex internal structure; however, this drawback is more or less shared with any purely statistical evaluation criterion. The addition of parts of the...
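The information gain criterion whose greediness is criticized here is, in FOIL-style learners, a weighted reduction in the encoding cost of positive bindings when a literal is added to a clause. A minimal sketch of that criterion (following Quinlan's 1990 formulation; the toy numbers in the usage note are illustrative):

```python
import math

def foil_gain(p0, n0, p1, n1):
    """FOIL-style information gain for specializing a clause.
    p0/n0: positive/negative bindings covered before adding a literal;
    p1/n1: the same counts after. Illustrative sketch of the criterion."""
    def info(p, n):
        # encoding cost (bits) of signalling a positive among covered bindings
        return -math.log2(p / (p + n))
    return p1 * (info(p0, n0) - info(p1, n1))
```

For example, starting from 10 positives and 10 negatives, a literal keeping 8 positives and 2 negatives scores higher than one keeping 5 positives and 0 negatives; the criterion thus prefers broad, locally pure refinements, which is the greediness the abstract reports hurts concepts with complex internal structure.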