Results 1 - 10 of 17,000
Possibilistic Instance-Based Learning
"... A method of instance-based learning is introduced which makes use of possibility theory and fuzzy sets. Particularly, a possibilistic version of the similarity-guided extrapolation principle underlying the instancebased learning paradigm is proposed. This version is compared to the commonly used ..."
Cited by 8 (3 self)
used probabilistic approach from a methodological point of view. Moreover, aspects of knowledge representation such as the modeling of uncertainty are discussed. Taking the possibilistic extrapolation principle as a point of departure, an instance-based learning procedure is outlined which includes
Instance-based learning algorithms
- Machine Learning, 1991
"... Abstract. Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to ..."
Cited by 1389 (18 self)
to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances
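The entry above describes classifiers that keep the specific training instances themselves rather than abstractions derived from them. As a rough illustration of that idea (not the IB1/IB2/IB3 algorithms from the paper itself), here is a minimal nearest-neighbour classifier sketch in Python; the class name, the parameter k, and the toy data are my own choices.

    # Minimal instance-based classifier sketch: store training instances
    # verbatim and predict by majority vote among the k closest instances
    # under Euclidean distance.
    from collections import Counter
    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    class InstanceBasedClassifier:
        def __init__(self, k=3):
            self.k = k
            self.instances = []              # list of (feature_vector, label) pairs

        def fit(self, X, y):
            self.instances = list(zip(X, y))  # no abstraction: keep the raw instances

        def predict(self, x):
            neighbours = sorted(self.instances,
                                key=lambda inst: euclidean(inst[0], x))[:self.k]
            votes = Counter(label for _, label in neighbours)
            return votes.most_common(1)[0][0]

    # usage with made-up data
    clf = InstanceBasedClassifier(k=3)
    clf.fit([(0, 0), (0, 1), (5, 5), (6, 5)], ["a", "a", "b", "b"])
    print(clf.predict((5, 6)))   # -> "b"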
Greedy Randomized Adaptive Search Procedures
- 2002
"... GRASP is a multi-start metaheuristic for combinatorial problems, in which each iteration consists basically of two phases: construction and local search. The construction phase builds a feasible solution, whose neighborhood is investigated until a local minimum is found during the local search phas ..."
Cited by 647 (82 self)
solution construction mechanisms and techniques to speed up the search are also described: Reactive GRASP, cost perturbations, bias functions, memory and learning, local search on partially constructed solutions, hashing, and filtering. We also discuss in detail implementation strategies of memory-based
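The entry above outlines the two GRASP phases: greedy randomized construction of a feasible solution, then local search in its neighborhood, repeated over independent starts. A minimal sketch of that loop on a toy subset-selection problem follows; the restricted-candidate-list parameter alpha, the flip-based local search, and the toy objective are illustrative assumptions, not details from the paper.

    # GRASP sketch for a toy problem: pick a subset of `items` whose sum is
    # as close as possible to `target`. The RCL rule and local search are
    # illustrative, not taken from the paper.
    import random

    def cost(subset, target):
        return abs(target - sum(subset))

    def construct(items, target, alpha=0.3):
        # Greedy randomized construction: add items drawn from a restricted
        # candidate list (RCL) until no candidate improves the partial solution.
        solution, remaining = [], list(items)
        while remaining:
            scored = sorted(remaining, key=lambda it: cost(solution + [it], target))
            best = cost(solution + [scored[0]], target)
            if best >= cost(solution, target):
                break
            worst = cost(solution + [scored[-1]], target)
            rcl = [it for it in scored
                   if cost(solution + [it], target) <= best + alpha * (worst - best)]
            choice = random.choice(rcl)
            solution.append(choice)
            remaining.remove(choice)
        return solution

    def local_search(solution, items, target):
        # Flip items in or out of the solution while that improves the cost.
        improved = True
        while improved:
            improved = False
            for it in items:
                candidate = ([x for x in solution if x != it]
                             if it in solution else solution + [it])
                if cost(candidate, target) < cost(solution, target):
                    solution, improved = candidate, True
        return solution

    def grasp(items, target, iterations=50):
        best = []
        for _ in range(iterations):
            candidate = local_search(construct(items, target), items, target)
            if cost(candidate, target) < cost(best, target):
                best = candidate
        return best

    print(grasp([3, 7, 12, 5, 9, 2], target=20))   # a subset summing to (or near) 20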
Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions
- In ICML, 2003
"... An approach to semi-supervised learning is proposed that is based on a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The learning ..."
Cited by 752 (14 self)
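The abstract above represents labeled and unlabeled points as vertices of a weighted graph whose edge weights encode similarity. One way to realise the harmonic-function idea is to require each unlabeled vertex's value to equal the weighted average of its neighbours' values, which reduces to a linear solve. The sketch below does this with RBF edge weights; the function name, the kernel width sigma, the fully connected graph, and the 0.5 threshold are my own illustrative choices.

    # Graph-based semi-supervised labelling with a harmonic function:
    # solve L_uu f_u = W_ul f_l for the unlabeled vertices.
    import numpy as np

    def harmonic_labels(X, y_labeled, labeled_idx, sigma=1.0):
        n = len(X)
        sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-sq_dists / (2 * sigma ** 2))   # RBF similarities as edge weights
        np.fill_diagonal(W, 0.0)

        labeled = set(labeled_idx)
        unlabeled_idx = [i for i in range(n) if i not in labeled]
        L = np.diag(W.sum(axis=1)) - W             # graph Laplacian

        L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
        W_ul = W[np.ix_(unlabeled_idx, labeled_idx)]
        f_l = np.asarray(y_labeled, dtype=float)
        f_u = np.linalg.solve(L_uu, W_ul @ f_l)    # harmonic solution on unlabeled nodes

        f = np.empty(n)
        f[labeled_idx] = f_l
        f[unlabeled_idx] = f_u
        return f                                   # threshold at 0.5 for binary labels

    # usage: two labelled points (classes 0 and 1) plus three unlabeled points (toy data)
    X = np.array([[0.0, 0.0], [3.0, 3.0], [0.2, 0.1], [2.8, 3.1], [1.5, 1.5]])
    f = harmonic_labels(X, y_labeled=[0.0, 1.0], labeled_idx=[0, 1])
    print((f > 0.5).astype(int))   # e.g. [0 1 0 1 0]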
Learning probabilistic relational models
- In IJCAI, 1999
"... A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with "flat " data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much ..."
Cited by 613 (30 self)
of the dependency structure in a model. Moreover, we show how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets. We present experimental results on both real and synthetic relational databases.
A learning algorithm for Boltzmann machines
- Cognitive Science, 1985
"... The computotionol power of massively parallel networks of simple processing elements resides in the communication bandwidth provided by the hardware connections between elements. These connections con allow a significant fraction of the knowledge of the system to be applied to an instance of a probl ..."
Cited by 584 (13 self)
Distance metric learning, with application to clustering with side-information
- In Advances in Neural Information Processing Systems 15, 2002
"... Abstract Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many "plausible" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for ..."
Cited by 818 (13 self)
to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝⁿ, learns a distance metric over ℝⁿ that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows
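The snippet above describes learning a metric over ℝⁿ from similar/dissimilar pairs via convex optimization. The sketch below is a heavily simplified stand-in: it learns only a diagonal Mahalanobis weighting with plain projected gradient steps, just to show the pairs-in, metric-out interface. It is not the paper's formulation, and the function names, step size, and toy data are assumptions.

    # Simplified diagonal-metric learning from similar/dissimilar pairs:
    # shrink weighted distances on similar pairs, grow them on dissimilar pairs,
    # keeping the diagonal weights positive.
    import numpy as np

    def learn_diag_metric(similar_pairs, dissimilar_pairs, dim, lr=0.05, steps=200):
        w = np.ones(dim)                       # diagonal of the metric matrix A
        for _ in range(steps):
            grad = np.zeros(dim)
            for a, b in similar_pairs:         # pull similar pairs together
                grad += (np.asarray(a) - np.asarray(b)) ** 2
            for a, b in dissimilar_pairs:      # push dissimilar pairs apart
                grad -= (np.asarray(a) - np.asarray(b)) ** 2
            w -= lr * grad
            w = np.clip(w, 1e-6, None)         # projection: keep the diagonal metric positive
        return w

    def mahalanobis(a, b, w):
        d = np.asarray(a) - np.asarray(b)
        return float(np.sqrt(np.sum(w * d * d)))

    # usage: coordinate 1 carries the structure, coordinate 2 is noise (toy pairs)
    sim = [((0, 5), (0.1, -5)), ((1, 2), (1.1, 9))]
    dis = [((0, 5), (1, 5)), ((0.1, -5), (1.1, -5))]
    w = learn_diag_metric(sim, dis, dim=2)
    print(w)   # the weight on the noisy coordinate collapses toward its lower bound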
Support Vector Machine Active Learning with Applications to Text Classification
- Journal of Machine Learning Research, 2001
"... Support vector machines have met with significant success in numerous real-world learning tasks. However, like most machine learning algorithms, they are generally applied using a randomly selected training set classified in advance. In many settings, we also have the option of using pool-based acti ..."
Cited by 735 (5 self)
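The abstract above contrasts randomly selected training sets with pool-based active learning. A common selection rule in this setting is to query the pooled example closest to the current SVM decision boundary; the loop below sketches that using scikit-learn's SVC, with a made-up pool and labelling oracle. Treat it as an illustrative variant rather than the paper's exact querying criterion.

    # Pool-based active learning sketch for a linear SVM: each round, query
    # the label of the pooled example nearest the current decision boundary.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    pool = rng.normal(size=(200, 2))                   # unlabeled pool (toy data)
    oracle = lambda x: int(x[0] + x[1] > 0)            # true labels, revealed on request

    # seed with one example from each class
    labeled_X = [pool[np.argmax(pool.sum(1))], pool[np.argmin(pool.sum(1))]]
    labeled_y = [1, 0]

    for _ in range(10):                                # 10 label queries
        clf = SVC(kernel="linear").fit(labeled_X, labeled_y)
        margins = np.abs(clf.decision_function(pool))  # closeness to the hyperplane
        query_idx = int(np.argmin(margins))            # most uncertain pooled example
        labeled_X.append(pool[query_idx])              # (duplicates possible in this toy loop)
        labeled_y.append(oracle(pool[query_idx]))

    print(f"labels queried: {len(labeled_y)}, "
          f"accuracy on pool: {clf.score(pool, [oracle(x) for x in pool]):.2f}")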
Learning to predict by the methods of temporal differences
- Machine Learning, 1988
"... This article introduces a class of incremental learning procedures specialized for prediction – that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predi ..."
Cited by 1521 (56 self)
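The snippet above characterises temporal-difference methods as assigning credit from the difference between temporally successive predictions rather than between a prediction and the final outcome. A minimal TD(0) sketch on a five-state random walk illustrates that update; the state space, step size alpha, and episode count are toy choices of mine.

    # TD(0) sketch: update each state's prediction toward the immediate reward
    # plus the *next* state's prediction, instead of waiting for the final outcome.
    import random

    N_STATES = 5                       # states 0..4, episodes start in the middle
    V = [0.5] * N_STATES               # predicted probability of terminating on the right
    alpha = 0.1

    for _ in range(2000):              # episodes
        s = N_STATES // 2
        while True:
            s_next = s + random.choice((-1, 1))
            if s_next < 0:             # terminate on the left, outcome 0
                V[s] += alpha * (0.0 - V[s])
                break
            if s_next >= N_STATES:     # terminate on the right, outcome 1
                V[s] += alpha * (1.0 - V[s])
                break
            # TD(0): move V[s] toward the successive prediction V[s_next]
            V[s] += alpha * (V[s_next] - V[s])
            s = s_next

    print([round(v, 2) for v in V])    # approaches [1/6, 2/6, 3/6, 4/6, 5/6]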
Some studies in machine learning using the game of Checkers
- IBM Journal of Research and Development, 1959
"... Two machine-learning procedures have been investigated in some detail using the game of checkers. Enough work has been done to verify the fact that a computer can be programmed so that it will learn to play a better game of checkers than can be played by the person who wrote the program. Furthermor ..."
Cited by 780 (0 self)