Results 1–10 of 10
Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm
 Machine Learning
, 1988
Abstract

Cited by 780 (5 self)
learning Boolean functions, linear-threshold algorithms Abstract. Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in which the learner responds to each example according to a current hypothesis. Then the learner updates the hypothesis, if necessary, based on the correct classification of the example. One natural measure of the quality of learning in this setting is the number of mistakes the learner makes. For suitable classes of functions, learning algorithms are available that make a bounded number of mistakes, with the bound independent of the number of examples seen by the learner. We present one such algorithm that learns disjunctive Boolean functions, along with variants for learning other classes of Boolean functions. The basic method can be expressed as a linear-threshold algorithm. A primary advantage of this algorithm is that the number of mistakes grows only logarithmically with the number of irrelevant attributes in the examples. At the same time, the algorithm is computationally efficient in both time and space.
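The mistake-driven, multiplicative-update scheme this abstract describes (Littlestone's Winnow) can be sketched as follows. This is an illustrative reconstruction, not the paper's exact presentation; the threshold and update constants (doubling on promotion, elimination on demotion) are one standard parameter choice:

```python
def winnow(examples, n, threshold=None):
    """Mistake-driven linear-threshold learner for monotone disjunctions
    over n Boolean attributes (a Winnow-style sketch). Each example is
    (x, label) with x a 0/1 list and label in {0, 1}."""
    theta = threshold if threshold is not None else n / 2.0
    w = [1.0] * n                      # one weight per attribute
    mistakes = 0
    for x, label in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred != label:
            mistakes += 1
            if label == 1:             # promotion: double weights of active attributes
                w = [wi * 2.0 if xi else wi for wi, xi in zip(w, x)]
            else:                      # demotion: eliminate weights of active attributes
                w = [0.0 if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes
```

Because the updates are multiplicative rather than additive, the mistake bound grows only logarithmically in the number of irrelevant attributes, which is the property the abstract highlights.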
Selection of relevant features and examples in machine learning
 ARTIFICIAL INTELLIGENCE
, 1997
Abstract

Cited by 590 (2 self)
In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting relevant features, and the problem of selecting relevant examples. We describe the advances that have been made on these topics in both empirical and theoretical work in machine learning, and we present a general framework that we use to compare different methods. We close with some challenges for future work in this area.
Relevant examples and relevant features: Thoughts from computational learning theory
 In AAAI-94 Fall Symposium, Workshop on Relevance, 1994
, 1993
Abstract

Cited by 32 (2 self)
It can be said that nearly all results in machine learning, whether experimental or theoretical, deal with problems of separating relevant from irrelevant information in some way. In this paper I will attempt to
Generativity and Systematicity in Neural Network Combinatorial Learning
, 1993
Abstract

Cited by 12 (0 self)
This thesis addresses a set of problems faced by connectionist learning that have originated from the observation that connectionist cognitive models lack two fundamental properties of the mind: Generativity, stemming from the boundless cognitive competence one can exhibit, and systematicity, due to the existence of symmetries within them. Such properties have seldom been seen in neural network models, which have typically suffered from problems of inadequate generalization, as exemplified both by the small number of generalizations relative to training set sizes and by heavy interference between newly learned items and previously learned information. Symbolic theories, arguing that mental representations have syntactic and semantic structure built from structured combinations of symbolic constituents, can in principle account for these properties (both arise from the sensitivity of structured semantic content with a generative and systematic syntax). This thesis studies the question of whe...
What do Constructive Learners Really Learn?
 ARTIFICIAL INTELLIGENCE REVIEW
, 1998
Abstract

Cited by 2 (1 self)
In constructive induction (CI), the learner's problem representation is modified as a normal part of the learning process. This may be necessary if the initial representation is inadequate or inappropriate. However, the distinction between constructive and nonconstructive methods appears to be highly ambiguous. Several conventional definitions of the process of constructive induction appear to include all conceivable learning processes. In this paper I argue that the process of constructive learning should be identified with that of relational learning (i.e., I suggest that what constructive learners really learn is relationships) and I describe some of the possible benefits that might be obtained as a result of adopting this definition.
Unsupervised Constructive Learning
, 1997
Abstract

Cited by 2 (1 self)
In constructive induction (CI), the learner's problem representation is modified as a normal part of the learning process. This is useful when the initial representation is inadequate or inappropriate. In this paper, I argue that the distinction between constructive and nonconstructive methods is unclear. I propose a theoretical model which allows (a) a clean distinction to be made and (b) the process of CI to be properly motivated. I also show that although constructive induction has been used almost exclusively in the context of supervised learning, there is no reason why it cannot form a part of an unsupervised regime.
Average Case Analysis of k-CNF and k-DNF learning algorithms
, 1994
Abstract

Cited by 1 (0 self)
We present average case models of algorithms for learning Conjunctive Normal Form (CNF, i.e., conjunctions of disjunctions) and Disjunctive Normal Form (DNF, i.e., disjunctions of conjunctions). Our goal is to predict the expected error of the learning algorithm as a function of the number n of training examples, averaging over all sequences of n training examples. We show that our average case models accurately predict the expected error and demonstrate that the analysis can lead to insight into the behavior of the algorithm and the factors that affect the error.
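The kind of k-CNF learner such analyses study is usually the classic elimination algorithm (due to Valiant): begin with the conjunction of every clause of at most k literals and delete any clause that a positive example falsifies. A minimal sketch, with literals encoded as signed variable indices (an encoding chosen here for illustration):

```python
from itertools import combinations

def learn_kcnf(positives, n, k):
    """Elimination learner for k-CNF over variables x_1..x_n.
    A literal is +i for x_i or -i for (not x_i); a clause is a frozenset
    of literals; an assignment is a dict mapping variable index to bool.
    Start with all clauses of size <= k, drop those falsified by any
    positive example."""
    literals = [lit for v in range(1, n + 1) for lit in (v, -v)]
    clauses = set()
    for size in range(1, k + 1):
        for combo in combinations(literals, size):
            if len({abs(l) for l in combo}) == size:  # distinct variables only
                clauses.add(frozenset(combo))
    def satisfies(x, clause):
        return any(x[abs(l)] == (l > 0) for l in clause)
    for x in positives:
        clauses = {c for c in clauses if satisfies(x, c)}
    return clauses

def classify(clauses, x):
    """An assignment is labeled positive iff it satisfies every surviving clause."""
    return all(any(x[abs(l)] == (l > 0) for l in c) for c in clauses)
```

The hypothesis only shrinks as positive examples arrive, which is what makes the expected error tractable to model as a function of the number of training examples.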
Queries and Concept Learning
, 1987
Abstract
Abstract. We consider the problem of using queries to learn an unknown concept. Several types of queries are described and studied: membership, equivalence, subset, superset, disjointness, and exhaustiveness queries. Examples are given of efficient learning methods using various subsets of these queries for formal domains, including the regular languages, restricted classes of context-free languages, the pattern languages, and restricted types of propositional formulas. Some general lower bound techniques are given. Equivalence queries are compared with Valiant's criterion of probably approximately correct identification under random sampling.
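To illustrate the membership-query model in the simplest case (this toy example is mine, not one from the paper): a monotone conjunction over n variables can be identified exactly with n membership queries, by asking about each assignment that sets all variables true except one.

```python
def learn_monotone_conjunction(membership, n):
    """Exact learning of a monotone conjunction over x_0..x_{n-1} using
    membership queries only. For each variable, query the assignment that
    is all-true except at that variable; if the oracle answers False, the
    variable must appear in the target conjunction."""
    relevant = []
    for i in range(n):
        x = [True] * n
        x[i] = False
        if not membership(x):   # turning x_i off falsified the concept
            relevant.append(i)
    return relevant
```

Usage: pass any oracle `membership(x) -> bool` for a monotone conjunction; the learner returns the indices of exactly the variables in the target, using one query per variable.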
Data-Driven Discovery of Quantitative Rules in Relational Databases
 IEEE Trans. Knowledge and Data Engineering
, 1993
Abstract
A quantitative rule is a rule associated with quantitative information which assesses the representativeness of the rule in the database. In this paper, an efficient induction method is developed for learning quantitative rules in relational databases. With the assistance of knowledge about concept hierarchies, data relevance, and expected rule forms, attribute-oriented induction can be performed on the database, which integrates database operations with the learning process and provides a simple, efficient way of learning quantitative rules from large databases. Our method learns both characteristic rules and classification rules. Quantitative information facilitates quantitative reasoning, incremental learning, and learning in the presence of noise. Moreover, learning qualitative rules can be treated as a special case of learning quantitative rules. Our paper shows that attribute-oriented induction provides an efficient and effective mechanism for learning various kinds of knowledge rules from relational databases.
Recent Results on Boolean Concept Learning
 Michael Kearns, Harvard University
Abstract
Recently, a new formal model of learnability was introduced [23]. The model is applicable to practical learning systems because it requires the learning algorithm to be feasibly computable, yet at the same time demands only that the algorithm find an approximation to the unknown rule. We survey recent results in this new area of theoretical induction, giving both positive (learnability) and negative (non-learnability) results, as well as outlining useful techniques for proving learnability. Our main focus is the application of the model to the problem of learning Boolean formulae.