CiteSeerX
Results 1 - 10 of 120,099

A New Statistical Parser Based on Bigram Lexical Dependencies

by Michael John Collins, 1996
"... This paper describes a new statistical parser which is based on probabilities of dependencies between head-words in the parse tree. Standard bigram probability estimation techniques are extended to calculate probabilities of dependencies between pairs of words. Tests using Wall Street Journal ..."
Abstract - Cited by 491 (4 self)
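The abstract above describes extending standard bigram estimation to dependencies between head-words. As a minimal sketch of the underlying idea (not Collins' actual model, which adds backoff, distance features, and more), head-to-dependent probabilities can be estimated by relative frequency over observed word pairs:

```python
from collections import Counter

# Illustrative sketch only: relative-frequency (MLE) estimation of
# P(dependent | head) from a toy list of (head_word, dependent_word)
# pairs standing in for dependencies extracted from a treebank.
dependency_pairs = [
    ("bought", "John"), ("bought", "shares"), ("bought", "yesterday"),
    ("shares", "the"), ("sold", "John"), ("sold", "shares"),
]

pair_counts = Counter(dependency_pairs)
head_counts = Counter(head for head, _ in dependency_pairs)

def dep_prob(head, dependent):
    """P(dependent | head) by maximum likelihood over observed pairs."""
    if head_counts[head] == 0:
        return 0.0
    return pair_counts[(head, dependent)] / head_counts[head]

print(dep_prob("bought", "shares"))  # 1 of the 3 observed 'bought' dependents
```

In practice such raw counts are sparse, which is why the paper's contribution lies in how these estimates are smoothed and combined rather than in the counting itself.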

Parse reranking based on higher-order lexical dependencies

by Zhiguo Wang, Chengqing Zong - In Proceedings of 5th International Joint Conference on Natural Language Processing , 2011
"... Existing work shows that lexical dependencies are helpful for constituent tree parsing. However, only first-order lexical dependencies ..."
Abstract - Cited by 6 (2 self)

Strictly lexical dependency parsing

by Qin Iris Wang, Dale Schuurmans, Dekang Lin - In Proc. IWPT , 2005
"... We present a strictly lexical parsing model where all the parameters are based on the words. This model does not rely on part-of-speech tags or grammatical categories. It maximizes the conditional probability of the parse tree given the sentence. This is in contrast with most previous models that co ..."
Abstract - Cited by 14 (4 self)

Dynamic TAG and lexical dependencies

by Alessandro Mazzei, Vincenzo Lombardo, Patrick Sturt
"... Incrementality is a widely held assumption that constrains the language processor to parse the input words from left to right, and to carry out a semantic interpretation of the partial structures (Marslen-Wilson, 1973). The detailed specification of the incremental syntactic process is often address ..."
Abstract - Cited by 3 (0 self) - Add to MetaCart
Incrementality is a widely held assumption that constrains the language processor to parse the input words from left to right, and to carry out a semantic interpretation of the partial structures (Marslen-Wilson, 1973). The detailed specification of the incremental syntactic process is often addressed by assuming a parsimonious version of incrementality that we can call strong connectivity (Stabler, 1994). Strong connectivity constrains the syntactic processor to maintain a fully connected structure throughout the whole process and is supported by some psycholinguistic evidence (Kamide et al., 2003). In this paper we describe a constituency based dynamic grammar (cf. Milward (1994)), called Dynamic Version of TAG (DVTAG), that fulfills the strong connectivity hypothesis. Similarly to LTAG, a DVTAG consists of a set of elementary trees and a number of attachment operations for combining them (Joshi and Schabes, 1997). DVTAG fulfills the strong connectivity hypothesis by constraining the derivation process to be a series of steps in which an elementary tree is combined with the partial tree spanning the left fragment of the sentence. The result of a step is an updated partial structure called left-context. Specifically, at the processing step i, the elementary tree anchored by the i-th word in the sentence is combined with the partial structure spanning the words from positions 1 to i − 1; the result is a partial structure spanning the words from 1 to i. Figure 1-a shows a DVTAG derivation for the sentence John ...
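The derivation process the abstract describes — at step i, combine the elementary tree anchored by word i with the left-context spanning words 1 to i − 1 — can be sketched as a simple left-to-right loop. All names here (`elementary_tree`, `combine`, the tuple representation) are hypothetical stand-ins, not DVTAG's actual operations:

```python
# Illustrative sketch of the incremental derivation loop from the abstract.
# Trees are stand-in nested tuples; real DVTAG uses elementary trees and
# several distinct attachment operations.

def elementary_tree(word):
    """Stand-in for retrieving the elementary tree anchored by `word`."""
    return ("tree", word)

def combine(left_context, tree):
    """Stand-in attachment: extend the connected left-context with `tree`."""
    return ("attach", left_context, tree)

def derive(sentence):
    """At step i, combine the tree of word i with the structure spanning
    words 1..i-1, so a single fully connected partial structure (the
    left-context) is maintained throughout the derivation."""
    left_context = elementary_tree(sentence[0])
    for word in sentence[1:]:
        left_context = combine(left_context, elementary_tree(word))
    return left_context

print(derive(["John", "loves", "Mary"]))
```

The point of the sketch is the invariant: after every step there is exactly one connected structure covering the prefix read so far, which is what the strong connectivity hypothesis requires.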

Minimally Lexicalized Dependency Parsing

by Daisuke Kawahara, Kiyotaka Uchimoto
"... Dependency structures do not have the information of phrase categories in phrase structure grammar. Thus, dependency parsing relies heavily on the lexical information of words. This paper discusses our investigation into the effectiveness of lexicalization in dependency parsing. Specifically, by res ..."
Abstract - Cited by 1 (0 self)

Verb Semantics And Lexical Selection

by Zhibiao Wu, 1994
"... ... structure. As Levin has addressed (Levin 1985), the decomposition of verbs is proposed for the purposes of accounting for systematic semantic-syntactic correspondences. This results in a series of problems for MT systems: inflexible verb sense definitions; difficulty in handling metaphor and new ..."
Abstract - Cited by 520 (4 self) - Add to MetaCart
and new usages; imprecise lexical selection and insufficient system coverage. It seems one approach is to apply probability methods and statistical models for some of these problems. However, the question reminds: has PSR exhausted the potential of the knowledge-based approach? If not, are there any

Using Tri-lexical Dependencies in LFG Parse Disambiguation

by Aoife Cahill, Uli Heid, Christian Rohrer, Marion Weller
"... The use of lexical dependencies in parse disambiguation is not a new idea. For example, it is one of the key ideas behind the Collins (1999) parser. In that parser, bi-lexical dependencies are used, but in later work Bikel (2004) ..."
Abstract - Cited by 1 (0 self)

The lexical nature of syntactic ambiguity resolution

by Maryellen C. MacDonald, Neal J. Pearlmutter, Mark S. Seidenberg - Psychological Review, 1994
"... Ambiguity resolution is a central problem in language comprehension. Lexical and syntactic ambiguities are standardly assumed to involve different types of knowledge representations and be resolved by different mechanisms. An alternative account is provided in which both types of ambiguity derive fr ..."
Abstract - Cited by 556 (23 self)

WordNet: An on-line lexical database

by George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, Katherine Miller - International Journal of Lexicography, 1990
"... WordNet is an on-line lexical reference system whose design is inspired by current ..."
Abstract - Cited by 1945 (9 self)

Semantic similarity based on corpus statistics and lexical taxonomy

by Jay J. Jiang, David W. Conrath - In Proc. of 10th International Conference on Research in Computational Linguistics, ROCLING'97, 1997
"... This paper presents a new approach for measuring semantic similarity/distance between words and concepts. It combines a lexical taxonomy structure with corpus statistical information so that the semantic distance between nodes in the semantic space constructed by the taxonomy can be better quantifie ..."
Abstract - Cited by 852 (0 self)
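The combination of taxonomy structure and corpus statistics that the abstract describes is commonly summarized by the Jiang-Conrath style distance: dist(c1, c2) = IC(c1) + IC(c2) − 2·IC(lcs(c1, c2)), where IC(c) = −log p(c) is information content and lcs is the lowest common subsumer. The following is a toy sketch with an invented two-level taxonomy and made-up counts, not the paper's actual data or implementation:

```python
import math

# Toy Jiang-Conrath style distance. The taxonomy and counts below are
# invented for illustration; in the paper, probabilities come from real
# corpus frequencies propagated up a lexical taxonomy such as WordNet.
parent = {"dog": "animal", "cat": "animal", "animal": "entity"}
counts = {"dog": 20, "cat": 10, "animal": 5, "entity": 1}

def subtree_count(concept):
    """Count of a concept plus all concepts beneath it in the taxonomy."""
    return counts[concept] + sum(
        subtree_count(c) for c, p in parent.items() if p == concept)

TOTAL = subtree_count("entity")

def ic(concept):
    """Information content: -log of the concept's subtree probability."""
    return -math.log(subtree_count(concept) / TOTAL)

def ancestors(concept):
    chain = [concept]
    while concept in parent:
        concept = parent[concept]
        chain.append(concept)
    return chain

def lcs(c1, c2):
    """Lowest common subsumer: first shared ancestor walking upward."""
    shared = set(ancestors(c2))
    return next(a for a in ancestors(c1) if a in shared)

def jc_distance(c1, c2):
    return ic(c1) + ic(c2) - 2 * ic(lcs(c1, c2))

print(jc_distance("dog", "cat"))
```

Note the two ingredients the abstract names: the taxonomy supplies the lcs (structure), while the counts supply IC (corpus statistics), so the distance shrinks as the common subsumer becomes more specific.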

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University