Results 11 – 20 of 6,442
On the time course of perceptual choice: the leaky competing accumulator model
Psychological Review, 2001
Cited by 480 (19 self)
Abstract: "... The time course of perceptual choice is discussed in a model based on gradual and stochastic accumulation of information in nonlinear decision units with leakage (or decay of activation) and competition through lateral inhibition. In special cases, the model becomes equivalent to a classical diffus ..."
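The dynamics named in the snippet (noisy evidence accumulation with leak and lateral inhibition) can be sketched numerically. This is a minimal illustration assuming arbitrary parameter values, not the paper's calibrated model:

```python
import random

def lca_trial(inputs, leak=0.2, inhibition=0.3, noise=0.1,
              dt=0.1, threshold=1.0, max_steps=10000, seed=0):
    """Simulate one leaky-competing-accumulator trial.

    Each unit integrates its input, minus leak on its own activation,
    minus lateral inhibition from the other units, plus Gaussian noise;
    activations are floored at zero (the nonlinearity). Returns the
    index of the first unit to reach threshold and the step count.
    """
    rng = random.Random(seed)
    x = [0.0] * len(inputs)
    for step in range(1, max_steps + 1):
        total = sum(x)
        new_x = []
        for i, xi in enumerate(x):
            others = total - xi  # inhibition from competing units
            dx = (inputs[i] - leak * xi - inhibition * others) * dt
            dx += noise * rng.gauss(0.0, 1.0) * dt ** 0.5
            new_x.append(max(0.0, xi + dx))  # clip at zero
        x = new_x
        for i, xi in enumerate(x):
            if xi >= threshold:
                return i, step
    # no unit reached threshold: report the current leader
    return max(range(len(x)), key=lambda i: x[i]), max_steps

winner, steps = lca_trial([1.0, 0.8])  # unit 0 gets stronger evidence
```

With inhibition larger than leak, the race is winner-take-all: whichever unit pulls ahead suppresses the other, which is the competition the abstract refers to.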
Log-Linear Interpolation of Language Models
2000
Cited by 1 (1 self)
Abstract: "... Building probabilistic models of language is a central task in natural language and speech processing, allowing the syntactic and/or semantic (and recently pragmatic) constraints of the language to be integrated into the systems. Probabilistic language models are an attractive alternative to the more tra ..."
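The title's technique can be illustrated in a few lines: log-linear interpolation combines component models multiplicatively in log space and renormalizes, in contrast to linear interpolation, which mixes probabilities additively. A toy sketch with made-up distributions (the vocabularies and weights are illustrative, not from the paper):

```python
import math

def loglinear_interpolate(dists, lambdas):
    """Combine word distributions in log space:
    p(w) ∝ prod_i p_i(w) ** lambda_i, renormalized over the vocabulary.
    (Linear interpolation, p(w) = sum_i lambda_i * p_i(w), would need
    no renormalization; the multiplicative combination does.)"""
    vocab = dists[0].keys()
    scores = {w: math.exp(sum(lam * math.log(d[w])
                              for d, lam in zip(dists, lambdas)))
              for w in vocab}
    z = sum(scores.values())  # normalization constant
    return {w: s / z for w, s in scores.items()}

# Hypothetical toy models over a three-word vocabulary:
p1 = {"a": 0.5, "b": 0.3, "c": 0.2}
p2 = {"a": 0.2, "b": 0.3, "c": 0.5}
p = loglinear_interpolate([p1, p2], [0.5, 0.5])
```

Note how the combination rewards words that both models agree on; a word assigned low probability by either model is penalized more sharply than under linear mixing.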
Using Lexical Chains for Text Summarization
1997
Cited by 451 (9 self)
Abstract: "... We investigate one technique to produce a summary of an original text without requiring its full semantic interpretation, but instead relying on a model of the topic progression in the text derived from lexical chains. We present a new algorithm to compute lexical chains in a text, merging several r ..."
Parsing the WSJ using CCG and log-linear models
In Proceedings of the 42nd Meeting of the ACL, 2004
Cited by 188 (22 self)
Abstract: "... This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster, allowing the complete Penn Treebank to be used for estimation. We also develop a new eff ..."
Wide-coverage efficient statistical parsing with CCG and log-linear models
Computational Linguistics, 2007
Cited by 218 (43 self)
Abstract: "... This paper describes a number of log-linear parsing models for an automatically extracted lexicalized grammar. The models are "full" parsing models in the sense that probabilities are defined for complete parses, rather than for independent events derived by decomposing the parse tree. Dis ..."
Log-Linear Models for Label Ranking
2003
Cited by 109 (5 self)
Abstract: "Label ranking is the task of inferring a total order over a predefined set of labels for each given instance. We present a general framework for batch learning of label ranking functions from supervised data. We assume that each instance in the training data is associated with a list of preferences over the label set; however, we do not assume that this list is either complete or consistent. This enables us to accommodate a variety of ranking problems. In contrast to the general form of the supervision, our goal is to learn a ranking function that induces a total order over the entire set of labels. Special cases of our setting are multi-label categorization and hierarchical classification. We present a general boosting-based learning algorithm for the label ranking problem and prove a lower bound on the progress of each boosting iteration. The applicability of our approach is demonstrated with a set of experiments on a large-scale text corpus."
Scalable training of L1-regularized log-linear models
In ICML '07, 2007
Cited by 178 (5 self)
Abstract: "... The L-BFGS limited-memory quasi-Newton method is the algorithm of choice for optimizing the parameters of large-scale log-linear models with L2 regularization, but it cannot be used for an L1-regularized loss due to its non-differentiability whenever some parameter is zero. Efficient algorithms have ..."
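The snippet's obstacle is the kink in the L1 penalty λ|w| at w = 0, where no gradient exists. One standard device in L1 solvers, shown here only as an illustration of why the kink matters (the snippet is truncated before naming this paper's own algorithm), is the soft-thresholding (proximal) step:

```python
def soft_threshold(w, step_lambda):
    """Proximal operator of the L1 penalty: the exact minimizer over v
    of 0.5 * (v - w)**2 + step_lambda * abs(v).

    It shrinks w toward zero and sets it exactly to zero whenever
    |w| <= step_lambda, which is how L1-regularized training produces
    sparse weight vectors (a plain gradient step never lands on 0.0).
    """
    if w > step_lambda:
        return w - step_lambda
    if w < -step_lambda:
        return w + step_lambda
    return 0.0

# Small weights are zeroed out; larger ones are shrunk:
# soft_threshold(0.05, 0.1) -> 0.0
# soft_threshold(0.5, 0.1)  -> 0.4 (approximately, in floats)
```

This also shows why L2 regularization poses no such problem: its penalty is smooth everywhere, so quasi-Newton methods like L-BFGS apply directly.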
An algorithm for pronominal anaphora resolution
Computational Linguistics, 1994
Cited by 391 (0 self)
Abstract: "... This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser, and relies on salience measures derived from ..."
Contrastive estimation: Training log-linear models on unlabeled data
In Proc. of ACL, 2005
Cited by 160 (16 self)
Abstract: "... Conditional random fields (Lafferty et al., 2001) are quite effective at sequence labeling tasks like shallow parsing (Sha and Pereira, 2003) and named-entity extraction (McCallum and Li, 2003). CRFs are log-linear, allowing the incorporation of arbitrary features into the model. To train on unlabele ..."
The Log of Gravity
The Review of Economics and Statistics, 2005
Cited by 333 (6 self)
Abstract: "... Although economists have long been aware of Jensen's inequality, many econometric applications have neglected an important implication of it: the standard practice of interpreting the parameters of log-linearized models estimated by ordinary least squares as elasticities can be highly misleadin ..."
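The snippet's point follows from E[log y] ≠ log E[y]: if the multiplicative error in y = exp(b0 + b1·x)·η has mean one but a spread that varies with x, then E[log η | x] depends on x, and OLS on log y no longer recovers the elasticity b1. A toy simulation under an assumed data-generating process (all parameters here are illustrative, not from the paper):

```python
import math
import random

# Assumed DGP: y = exp(b0 + b1*x) * eta, where eta has E[eta | x] = 1
# but a log-normal spread that grows with x (heteroskedasticity).
# Then E[log eta | x] = -sigma(x)**2 / 2 varies with x, biasing OLS
# on log y as an estimator of the elasticity b1 = 0.5.
random.seed(0)
b0, b1 = 1.0, 0.5
n = 20000
xs, logys = [], []
for _ in range(n):
    x = random.uniform(0.0, 2.0)
    sigma = 0.5 * x                 # error spread grows with x
    log_eta = random.gauss(0.0, sigma) - sigma ** 2 / 2  # E[eta|x] = 1
    xs.append(x)
    logys.append(b0 + b1 * x + log_eta)

# OLS slope of log y on x:
mx = sum(xs) / n
my = sum(logys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, logys))
         / sum((x - mx) ** 2 for x in xs))
# slope comes out well below the true elasticity b1 = 0.5
```

Here E[log η | x] = -0.125·x², so the log-linearized regression conflates the elasticity with the x-dependence of the error distribution, which is exactly the misleading interpretation the abstract warns about.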