Results 21 – 30 of 16,695
Statistical Decision-Tree Models for Parsing
 In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics
, 1995
"... Syntactic natural language parsers have shown themselves to be inadequate for processing highly-ambiguous large-vocabulary text, as is evidenced by their poor performance on domains like the Wall Street Journal, and by the movement away from parsing-based approaches to text-processing in gen ..."
Cited by 367 (1 self)
in general. In this paper, I describe SPATTER, a statistical parser based on decision-tree learning techniques which constructs a complete parse for every sentence and achieves accuracy rates far better than any published result. This work is based on the following premises: (1) grammars are too
Back to Bentham? Explorations of Experienced Utility
 QUARTERLY JOURNAL OF ECONOMICS
, 1997
"... Two core meanings of “utility” are distinguished. “Decision utility” is the weight of an outcome in a decision. “Experienced utility” is hedonic quality, as in Bentham’s usage. Experienced utility can be reported in real time (instant utility), or in retrospective evaluations of past episodes (remem ..."
Cited by 429 (28 self)
Policy gradient methods for reinforcement learning with function approximation.
 In NIPS
, 1999
"... Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly repres ..."
Cited by 439 (20 self)
policy. Large applications of reinforcement learning (RL) require the use of generalizing function approximators such as neural networks, decision trees, or instance-based methods. The dominant approach for the last decade has been the value-function approach, in which all function approximation effort goes
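The policy-gradient alternative described in this abstract can be illustrated with a minimal sketch. This is a hypothetical one-step, two-action bandit of my own construction (not the paper's setup): a REINFORCE-style update moves a softmax policy parameter along reward-weighted score-function gradients.

```python
import math
import random

def pi0(theta):
    """Probability of action 0 under a two-action softmax policy with logits (theta, 0)."""
    return 1.0 / (1.0 + math.exp(-theta))

def reinforce(steps=2000, lr=0.5, seed=0):
    """Train on a one-step bandit where action 0 pays 1 and action 1 pays 0.
    Each update follows reward * d/dtheta log pi(action | theta)."""
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(steps):
        p = pi0(theta)
        action = 0 if rng.random() < p else 1
        reward = 1.0 if action == 0 else 0.0
        score = (1.0 - p) if action == 0 else -p  # d/dtheta log pi(action)
        theta += lr * reward * score
    return theta
```

After training, pi0(theta) approaches 1, i.e. the policy concentrates on the rewarding action without any value function being approximated.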
Golden Eggs and Hyperbolic Discounting
 Quarterly Journal of Economics
, 1997
"... Hyperbolic discount functions induce dynamically inconsistent preferences, implying a motive for consumers to constrain their own future choices. This paper analyzes the decisions of a hyperbolic consumer who has access to an imperfect commitment technology: an illiquid asset whose sale must be init ..."
Cited by 433 (14 self)
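The dynamic inconsistency in the abstract can be shown numerically with the quasi-hyperbolic (beta-delta) discount form commonly used in this literature. The function names and the reward numbers below are illustrative assumptions, not values from the paper:

```python
def discount(t, beta=0.6, delta=0.99):
    """Quasi-hyperbolic (beta-delta) weight on a payoff t periods away:
    1 today, beta * delta**t for any future period."""
    return 1.0 if t == 0 else beta * delta ** t

def waits(sooner, later, delay, gap=1):
    """True if the agent prefers the larger-later reward (at delay+gap periods)
    over the smaller-sooner one (at delay periods)."""
    return discount(delay + gap) * later > discount(delay) * sooner

# Planning far ahead, the agent intends to wait; at the moment of choice it reverses:
print(waits(10, 12, delay=10))  # True  -- plan to wait for the larger reward
print(waits(10, 12, delay=0))   # False -- reversal: take the smaller reward now
```

This preference reversal is exactly the motive for commitment devices such as the illiquid asset the paper studies.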
Complexity of finding embeddings in a k-tree
 SIAM Journal on Algebraic and Discrete Methods
, 1987
"... A k-tree is a graph that can be reduced to the k-complete graph by a sequence of removals of a degree k vertex with completely connected neighbors. We address the problem of determining whether a graph is a partial graph of a k-tree. This problem is motivated by the existence of polynomial time al ..."
Cited by 386 (1 self)
status of two problems related to finding the smallest number k such that a given graph is a partial k-tree. First, the corresponding decision problem is NP-complete. Second, for a fixed (predetermined) value of k, we present an algorithm with polynomially bounded (but exponential in k) worst case time
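The reduction definition quoted above can be checked directly for small graphs. This sketch recognizes full k-trees by greedily deleting degree-k vertices with clique neighborhoods; the dict-of-sets adjacency format is my own choice, and this is not the paper's partial-k-tree algorithm:

```python
from itertools import combinations

def is_k_tree(adj, k):
    """Check the reduction definition of a k-tree: repeatedly delete a degree-k
    vertex whose neighborhood is a clique, until only the complete graph K_k
    remains. `adj` maps each vertex to its set of neighbors."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    while len(adj) > k:
        for v, ns in adj.items():
            if len(ns) == k and all(b in adj[a] for a, b in combinations(ns, 2)):
                for u in ns:
                    adj[u].discard(v)
                del adj[v]
                break
        else:
            return False  # no removable vertex: not a k-tree
    # what remains must be the complete graph on k vertices
    return all(len(ns) == k - 1 for ns in adj.values())

# A triangle plus one vertex joined to two of its corners is a 2-tree;
# a 4-cycle is not, since no degree-2 vertex has adjacent neighbors.
```

For k-trees this greedy order is safe, because removing any simplicial degree-k vertex of a k-tree leaves a k-tree.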
Statistical Parsing with a Context-Free Grammar and Word Statistics
, 1997
"... We describe a parsing system based upon a language model for English that is, in turn, based upon assigning probabilities to possible parses for a sentence. This model is used in a parsing system by finding the parse for the sentence with the highest probability. This system outperforms previou ..."
Cited by 414 (18 self)
explain their relative performance. Introduction We present a statistical parser that induces its grammar and probabilities from a hand-parsed corpus (a treebank). Parsers induced from corpora are of interest both simply as exercises in machine learning and also because they are often the best parsers
The Foundations of Cost-Sensitive Learning
 In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence
, 2001
"... This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically i ..."
Cited by 402 (6 self)
that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a
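One concrete consequence of a reasonable cost matrix, and of the threshold-adjustment recommendation above, is the standard cost-minimizing decision rule: predict positive when the estimated probability exceeds a threshold fixed by the costs. The derivation equates the expected costs of the two predictions; the parameter names below are mine:

```python
def optimal_threshold(c_fp, c_fn, c_tn=0.0, c_tp=0.0):
    """Probability threshold p* at which predicting positive and predicting
    negative have equal expected cost; predict positive when p >= p*.
    c_fp: cost of a false positive, c_fn: cost of a false negative,
    c_tn / c_tp: costs of correct predictions (often zero)."""
    return (c_fp - c_tn) / ((c_fp - c_tn) + (c_fn - c_tp))

# Equal costs recover the familiar 0.5 threshold; costly false negatives lower it:
print(optimal_threshold(1.0, 1.0))  # 0.5
print(optimal_threshold(1.0, 4.0))  # 0.2
```

This is why the classifier itself can be trained on the natural class balance: only the decision threshold needs to change with the cost matrix.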
Rademacher and Gaussian complexities: risk bounds and structural results
 Journal of Machine Learning Research
, 2002
"... We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinati ..."
Cited by 395 (12 self)
as combinations of functions from basis classes and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural
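Because the Rademacher complexity is data-dependent, it can be estimated by Monte Carlo for a finite class given each function's values on the fixed sample. This sketch (function and variable names are mine) implements the defining expectation, E_sigma [ sup_f (1/n) * sum_i sigma_i * f(x_i) ]:

```python
import random

def empirical_rademacher(function_values, trials=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of a finite
    function class, where each row of function_values holds one function's
    values on the fixed sample x_1..x_n."""
    rng = random.Random(seed)
    n = len(function_values[0])
    total = 0.0
    for _ in range(trials):
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        total += max(sum(s * v for s, v in zip(sigma, f)) / n
                     for f in function_values)
    return total / trials
```

For the two constant sign functions on four points the exact value is E|S_4|/4 = 0.375, and the estimate lands nearby; richer classes score higher, which is what the risk bounds penalize.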
Inferring decision trees using the minimum description length principle
, 1989
"... We explore the use of Rissanen’s minimum description length principle for the construction of decision trees. Empirical results comparing this approach to other methods are given. ..."
Cited by 322 (7 self)
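The two-part MDL idea behind this abstract, encode the tree, then encode the labels given the tree, can be sketched with a toy coding scheme. The bit costs below are a simplified illustration of my own, not the paper's exact code:

```python
import math

def leaf_cost(labels):
    """Description length of a leaf: 1 structure bit plus an entropy-based
    code for the class labels it covers."""
    n = len(labels)
    bits = 1.0
    for c in set(labels):
        k = labels.count(c)
        bits += -k * math.log2(k / n)
    return bits

def split_cost(left, right, n_attributes):
    """Description length of a single split: 1 structure bit, log2(#attributes)
    bits to name the test, plus the two child leaves."""
    return 1.0 + math.log2(n_attributes) + leaf_cost(left) + leaf_cost(right)

# MDL keeps a split only if it shortens the total message:
mixed = [0] * 8 + [1] * 8
print(leaf_cost(mixed))                 # 17.0 bits for one impure leaf
print(split_cost([0] * 8, [1] * 8, 4))  # 5.0 bits for a perfect split
```

A split that fails to compress the labels costs more bits than it saves, which is how MDL trades tree size against fit and so resists overfitting.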
Incremental Induction of Decision Trees
, 1989
"... This article presents an incremental algorithm for inducing decision trees equivalent to those formed by Quinlan's nonincremental ID3 algorithm, given the same training instances. The new algorithm, named ID5R, lets one apply the ID3 induction process to learning tasks in which training inst ..."
Cited by 198 (3 self)
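The batch criterion that ID3 applies, and that ID5R maintains incrementally, is the information-gain split choice. A minimal sketch of that criterion follows, with attribute handling simplified to tuples of discrete values (the helper names are mine, and the incremental restructuring itself is not shown):

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def best_attribute(rows, labels):
    """ID3's split choice: the attribute index with the highest information
    gain (rows are tuples of discrete values, one per training instance)."""
    def gain(i):
        g = entropy(labels)
        for value in set(row[i] for row in rows):
            subset = [lab for row, lab in zip(rows, labels) if row[i] == value]
            g -= len(subset) / len(rows) * entropy(subset)
        return g
    return max(range(len(rows[0])), key=gain)

# Attribute 0 determines the label exactly; attribute 1 is noise:
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]
print(best_attribute(rows, labels))  # 0
```

ID5R's contribution is keeping the counts behind this computation up to date per instance and restructuring the tree when the best attribute at a node changes, rather than rebuilding from scratch.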