Results 1 - 10 of 51,436
Bottom-Up Relational Learning of Pattern Matching Rules for Information Extraction, 2003
Cited by 406 (20 self)
"... Information extraction is a form of shallow text processing that locates a specified set of relevant items in a natural-language document. Systems for this task require significant domain-specific knowledge and are time-consuming and difficult to build by hand, making them a good application for machine learning. We present an algorithm, RAPIER, that uses pairs of sample documents and filled templates to induce pattern-match rules that directly extract fillers for the slots in the template. RAPIER is a bottom-up learning algorithm that incorporates techniques from several inductive logic ..."
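The snippet only names RAPIER's pattern-match rules. As a rough illustration, below is a minimal sketch of a slot-extraction rule organized around the pre-filler / filler / post-filler structure RAPIER uses; the Rule class, the word-only constraints, and the toy sentence are simplifications invented here, since the actual system also constrains part-of-speech tags and semantic classes and induces such rules bottom-up from filled templates.

```python
# Illustrative sketch only: exact-word pre/post context around a slot filler.
from dataclasses import dataclass

@dataclass
class Rule:
    slot: str
    pre: list          # words required immediately before the filler
    post: list         # words required immediately after the filler
    max_len: int = 3   # maximum filler length in tokens

    def extract(self, tokens):
        """Return every token span bracketed by the pre/post context."""
        fillers = []
        for i in range(len(tokens)):
            if tokens[i:i + len(self.pre)] != self.pre:
                continue
            start = i + len(self.pre)
            for end in range(start + 1, min(start + self.max_len, len(tokens)) + 1):
                if tokens[end:end + len(self.post)] == self.post:
                    fillers.append(" ".join(tokens[start:end]))
        return fillers

# Hypothetical job-posting sentence and rule (not from the paper).
doc = "three years experience with Prolog and Java".split()
rule = Rule(slot="language", pre=["with"], post=["and"])
print(rule.extract(doc))   # ['Prolog'] -> candidate filler for the 'language' slot
```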
A Reduction Operator for Bottom-up Relational Learning with Bounded-Treewidth Hypotheses, 2012
"... We introduce a novel relational-learning algorithm for learning first-order-logic clauses by means of Plotkin’s least general generalization operator. The algorithm employs our newly introduced polynomial-time bounded reduction in the place where the exponential-time theta-reduction is normally used ..."
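For context, here is a minimal sketch of Plotkin's least general generalization (LGG) on first-order terms, the operator the abstract builds on. Terms are encoded as nested tuples (functor, arg1, ..., argn), an encoding chosen only for illustration; the paper's clause-level generalization and its bounded-treewidth reduction are not reproduced here.

```python
def lgg(s, t, table=None):
    """Least general generalization of two terms.

    Identical terms generalize to themselves; terms with the same functor and
    arity generalize argument-wise; any other pair is replaced by a variable,
    reusing the same variable whenever the same pair (s, t) recurs."""
    if table is None:
        table = {}
    if s == t:
        return s
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
    if (s, t) not in table:
        table[(s, t)] = "X%d" % len(table)
    return table[(s, t)]

# Classic example: lgg(p(f(a), a), p(f(b), b)) = p(f(X0), X0)
print(lgg(("p", ("f", "a"), "a"), ("p", ("f", "b"), "b")))
# -> ('p', ('f', 'X0'), 'X0')
```

In clause-space learners of this kind, the LGG of two clauses is built by generalizing compatible literal pairs, and the resulting clause is then reduced; the paper's contribution is a polynomial-time bounded reduction for that second step.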
Learning probabilistic relational models - In IJCAI, 1999
Cited by 613 (30 self)
"... A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with "flat" data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much ..."
Learning logical definitions from relations - MACHINE LEARNING, 1990
Cited by 935 (8 self)
"... This paper describes FOIL, a system that learns Horn clauses from data expressed as relations. FOIL is based on ideas that have proved effective in attribute-value learning systems, but extends them to a first-order formalism. This new system has been applied successfully to several tasks taken from ..."
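As a rough illustration of the kind of relational learning FOIL performs, here is a simplified, runnable sketch of its greedy clause construction on an invented family-relations example. It scores candidate literals with the familiar information-gain formula but counts covered examples rather than variable bindings, and it learns a single clause; Quinlan's system works over bindings, learns multiple clauses, and includes pruning and other refinements not shown.

```python
import math
from itertools import product

# Background relations over a toy family domain (hypothetical data).
facts = {
    "parent": {("tom", "ann"), ("tom", "bob"), ("eve", "ann"), ("eve", "bob")},
    "female": {("ann",), ("eve",)},
}
arities = {"parent": 2, "female": 1}

# Target relation daughter(A, B), given as positive and negative tuples.
pos = {("ann", "tom"), ("ann", "eve")}
neg = {("bob", "tom"), ("bob", "eve"), ("tom", "ann"), ("eve", "eve")}

def covers(body, example):
    """True if binding head variables A, B to `example` can be extended to a
    binding of all body variables that satisfies every body literal."""
    def solve(i, theta):
        if i == len(body):
            return True
        pred, args = body[i]
        for tup in facts[pred]:
            t = dict(theta)
            if all(t.setdefault(a, c) == c for a, c in zip(args, tup)) and solve(i + 1, t):
                return True
        return False
    return solve(0, {"A": example[0], "B": example[1]})

def foil_gain(body, literal, pos_ex, neg_ex):
    """Information gain of adding `literal`, counting covered examples."""
    p0 = sum(covers(body, e) for e in pos_ex)
    n0 = sum(covers(body, e) for e in neg_ex)
    p1 = sum(covers(body + [literal], e) for e in pos_ex)
    n1 = sum(covers(body + [literal], e) for e in neg_ex)
    if p1 == 0 or p0 == 0:
        return float("-inf")
    return p1 * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

def candidate_literals(body):
    """Literals over the background predicates whose arguments reuse at least
    one variable already in the clause, possibly plus one new variable."""
    used = {"A", "B"} | {v for _, args in body for v in args}
    new_var = "V%d" % len(used)
    for pred, k in arities.items():
        for args in product(sorted(used) + [new_var], repeat=k):
            if set(args) & used:
                yield (pred, args)

def learn_clause(pos_ex, neg_ex, max_body=3):
    """Greedily add the highest-gain literal until no negatives are covered."""
    body, remaining_neg = [], set(neg_ex)
    while remaining_neg and len(body) < max_body:
        best = max(candidate_literals(body),
                   key=lambda lit: foil_gain(body, lit, pos_ex, remaining_neg))
        body.append(best)
        remaining_neg = {e for e in remaining_neg if covers(body, e)}
    return body

print(learn_clause(pos, neg))
# With this toy data the search ends with a body equivalent to
# daughter(A, B) :- female(A), parent(B, A).
```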
Multitask Learning, 1997
Cited by 677 (6 self)
"... Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for ..."
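A minimal sketch of the shared-representation idea the abstract describes: several tasks trained in parallel through one shared layer with small task-specific heads, so that a joint loss lets every task's training signal shape the shared features. The layer sizes, random data, and squared-error loss are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_tasks = 8, 16, 3

# Shared layer parameters plus one linear head per task (toy sizes).
W_shared = rng.normal(size=(n_inputs, n_hidden)) * 0.1
heads = [rng.normal(size=(n_hidden, 1)) * 0.1 for _ in range(n_tasks)]

def forward(x):
    h = np.tanh(x @ W_shared)               # shared representation
    return h, [h @ W_t for W_t in heads]    # one prediction per task

# Toy batch: the same inputs, one target vector per task.
X = rng.normal(size=(32, n_inputs))
Y = [rng.normal(size=(32, 1)) for _ in range(n_tasks)]

h, preds = forward(X)
# Joint objective: the sum of per-task losses. Its gradient with respect to
# W_shared pools the training signals of all tasks, which is the inductive
# transfer mechanism the abstract refers to.
joint_loss = sum(np.mean((p - y) ** 2) for p, y in zip(preds, Y))
print(round(float(joint_loss), 4))
```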
Combining top-down and bottom-up segmentation - In Proceedings IEEE workshop on Perceptual Organization in Computer Vision, CVPR, 2004
Cited by 191 (2 self)
"... In this work we show how to combine bottom-up and top-down approaches into a single figure-ground segmentation process. This process provides accurate delineation of object boundaries that cannot be achieved by either the top-down or bottom-up approach alone. The top-down approach uses object represen ... The combination provides a final segmentation that draws on the relative merits of both approaches: The result is as close as possible to the top-down approximation, but is also constrained by the bottom-up process to be consistent with significant image discontinuities. We construct a global cost function ..."
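The snippet mentions a global cost function but not its form. The sketch below is a generic stand-in, not the paper's formulation: a fidelity term keeps the labeling close to the top-down approximation, while a neighborhood term is discounted across strong bottom-up image edges, so the final labeling may deviate from the top-down map only where the image itself suggests a boundary.

```python
import numpy as np

def combined_cost(S, T, edge_strength, lam=1.0):
    """S, T: binary figure-ground maps; edge_strength in [0, 1] per pixel.
    Illustrative cost: fidelity to the top-down map T plus a label-smoothness
    penalty that is cheap to violate where bottom-up edge strength is high."""
    fidelity = np.sum((S - T) ** 2)
    dh = (S[:, 1:] != S[:, :-1]) * (1.0 - edge_strength[:, 1:])
    dv = (S[1:, :] != S[:-1, :]) * (1.0 - edge_strength[1:, :])
    return lam * fidelity + dh.sum() + dv.sum()

# Toy 4x4 example with a strong vertical edge along one column (invented data).
T = np.array([[0, 0, 1, 1]] * 4)
edges = np.zeros((4, 4))
edges[:, 2] = 1.0
print(combined_cost(T, T, edges))   # 0.0: the top-down map already respects the edge
```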
Reinforcement Learning I: Introduction, 1998
Cited by 5614 (118 self)
"... In which we try to give a basic intuitive sense of what reinforcement learning is and how it differs and relates to other fields, e.g., supervised learning and neural networks, genetic algorithms and artificial life, control theory. Intuitively, RL is trial and error (variation and selection, search ..."
Conversation as Experiential Learning, 2005
Cited by 588 (9 self)
"... This article proposes a framework relevant to the continuous learning of individuals and organizations. Drawing from the theory of experiential learning, the article proposes conversational learning as the experiential learning process occurring in conversation as learners construct meaning from their experiences. A theoretical framework based on five process dialectics is proposed here as the foundational underpinning of conversational learning. The five dialectics: apprehension and comprehension; reflection and action; epistemological discourse and ontological recourse; individuality and relationality ..."
Learning to predict by the methods of temporal differences - MACHINE LEARNING, 1988
Cited by 1521 (56 self)
"... This article introduces a class of incremental learning procedures specialized for prediction – that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predi ... they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods and they produce ..."
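For concreteness, here is a minimal sketch of tabular TD(0), the simplest member of this family of prediction methods, whose update is V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)). The five-state random-walk chain below is an invented toy problem, not an example taken from the article.

```python
import random

alpha, gamma = 0.1, 1.0
V = [0.0] * 5                       # value estimates for states 0..4

def run_episode(start=2):
    s = start
    while s not in (0, 4):          # states 0 and 4 are terminal
        s_next = s + random.choice((-1, 1))
        r = 1.0 if s_next == 4 else 0.0
        target = r if s_next in (0, 4) else r + gamma * V[s_next]
        # Credit is assigned from the difference between temporally successive
        # predictions (the TD error), not from a final outcome alone.
        V[s] += alpha * (target - V[s])
        s = s_next

for _ in range(5000):
    run_episode()
print([round(v, 2) for v in V])     # interior values approach 0.25, 0.5, 0.75
```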
The Symbol Grounding Problem, 1990
Cited by 1084 (20 self)
"... There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the "symbol grounding problem": How can the semantic interpretation of a formal symbol system be m ... Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) "iconic representations", which are analogs of the proximal sensory projections of distal objects and events ..."