Multilingual Joint Parsing of Syntactic and Semantic Dependencies with a Latent Variable Model (2012)

by James Henderson, Paola Merlo, Ivan Titov, Gabriele Musillo
Results 1 - 5 of 5

Joint A* CCG parsing and semantic role labelling

by Mike Lewis, Luheng He, Luke Zettlemoyer - In Proceedings of EMNLP , 2015
Abstract (Cited by 1, 0 self)
Joint models of syntactic and semantic parsing have the potential to improve performance on both tasks, but to date the best results have been achieved with pipelines. We introduce a joint model using CCG, which is motivated by the close link between CCG syntax and semantics. Semantic roles are recovered by labelling the deep dependency structures produced by the grammar. Furthermore, because CCG is lexicalized, we show it is possible to factor the parsing model over words and introduce a new A* parsing algorithm, which we demonstrate is faster and more accurate than adaptive supertagging. Our joint model is the first to substantially improve both syntactic and semantic accuracy over a comparable pipeline, and also achieves state-of-the-art results for a non-ensemble semantic role labelling model.
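The supertag-factored A* idea in this abstract can be illustrated with a toy sketch: because the model factors over words, the sum of the best remaining per-word scores is an admissible heuristic, so the first complete item popped from the priority queue is optimal. The categories and scores below are made-up illustrations, not the paper's model or grammar.

```python
import heapq

# Hypothetical per-word supertag log-probabilities (illustrative values).
supertag_scores = [
    {"NP": -0.1, "N": -2.5},
    {"(S\\NP)/NP": -0.3, "S\\NP": -1.8},
    {"NP": -0.2, "N": -1.9},
]

def astar_supertags(scores):
    """A*-style search over supertag sequences.

    The heuristic for the words not yet tagged is the sum of their best
    supertag scores, which never underestimates the best completion, so
    the first complete sequence popped is optimal.
    """
    # best_rest[i] = best achievable score for words i..end
    best_rest = [0.0] * (len(scores) + 1)
    for i in range(len(scores) - 1, -1, -1):
        best_rest[i] = best_rest[i + 1] + max(scores[i].values())

    # Heap items: (negated priority, next position, partial sequence).
    frontier = [(-best_rest[0], 0, [])]
    while frontier:
        neg_priority, i, seq = heapq.heappop(frontier)
        if i == len(scores):
            return seq, -neg_priority
        inside = -neg_priority - best_rest[i]  # score of the partial sequence
        for tag, s in scores[i].items():
            priority = inside + s + best_rest[i + 1]
            heapq.heappush(frontier, (-priority, i + 1, seq + [tag]))

tags, score = astar_supertags(supertag_scores)
```

Here the optimal sequence is found after expanding only the best candidate at each position, which is the efficiency argument the abstract makes against exhaustive adaptive supertagging.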

Citation Context

...role labelling (SRL) has been substantially beneath that of pipelines (Sutton and McCallum, 2005; Lluís et al., 2009; Johansson, 2009; Titov et al., 2009; Naradowsky et al., 2012; Lluís et al., 2013; Henderson et al., 2013). In this paper, we present the first approach to break this trend, by building on the close relationship of syntax and semantics in CCG grammars to enable both (1) a simple but highly effective join...

Potsdam: Semantic dependency parsing by bidirectional graph-tree transformations and syntactic parsing. SemEval 2014

by Alexander Koller , 2014
Abstract (Cited by 1, 0 self)
We present the Potsdam systems that participated in the semantic dependency parsing shared task of SemEval 2014. They are based on linguistically motivated bidirectional transformations between graphs and trees and on utilization of syntactic dependency parsing. They were entered in both the closed track and the open track of the challenge, recording a peak average labeled F1 score of 78.60.
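As a rough illustration of the "tree-like graphs" observation, one naive direction of such a transformation is to keep a single incoming edge per node. The paper's actual transformations are linguistically motivated and reversible; the sketch below is a deliberately lossy caricature of the idea, with made-up edge labels.

```python
def graph_to_tree(edges):
    """Keep only the first incoming edge per dependent, turning a
    dependency graph (a list of (head, dep, label) triples) into a
    forest where every node has at most one head. This is a naive,
    lossy stand-in for the reversible transformations the abstract
    describes."""
    seen = set()
    tree = []
    for head, dep, label in edges:
        if dep not in seen:
            seen.add(dep)
            tree.append((head, dep, label))
    return tree

# A small graph with one reentrancy: node 2 has two heads (1 and 0).
graph = [(0, 1, "ARG1"), (1, 2, "ARG2"), (0, 2, "ARG2")]
tree = graph_to_tree(graph)
```

Once the graph is a tree, an off-the-shelf syntactic dependency parser can be trained on it, which is the bridge to tree parsing that the abstract exploits.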

Citation Context

... underlying schemes. While a number of theoretical and preliminary contributions to data-driven graph parsing exist (Sagae and Tsujii, 2008; Das et al., 2010; Jones et al., 2013; Chiang et al., 2013; Henderson et al., 2013), our goal here is to investigate the simplest approach that can achieve competitive performance. Our starting point is the observation that the SDP graphs are relatively tree-like. On it, we build a...

Unsupervised Parsing for Generating Surface-Based Relation Extraction Patterns

by Jens Illig, Benjamin Roth, Dietrich Klakow
Abstract (Cited by 1, 0 self)
Finding the right features and patterns for identifying relations in natural language is one of the most pressing research questions for relation extraction. In this paper, we compare patterns based on supervised and unsupervised syntactic parsing and present a simple method for extracting surface patterns from a parsed training set. Results show that the use of surface-based patterns not only increases extraction speed, but also improves the quality of the extracted relations. We find that, in this setting, unsupervised parsing, besides requiring fewer resources, compares favorably in terms of extraction quality.
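A minimal sketch of what a surface pattern between two entity mentions might look like; the slot names and exact pattern shape here are assumptions for illustration, not the paper's format.

```python
def surface_pattern(tokens, e1_span, e2_span):
    """Replace two entity mentions with typed slots and keep the token
    sequence between them. Spans are (start, end) token offsets, with
    e1 assumed to precede e2."""
    return ["<E1>"] + tokens[e1_span[1]:e2_span[0]] + ["<E2>"]

tokens = ["Obama", "was", "born", "in", "Hawaii", "."]
pattern = surface_pattern(tokens, (0, 1), (4, 5))
```

Matching such patterns against raw token sequences needs no parser at extraction time, which is why the abstract reports a speed gain over dependency-path patterns.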

Incremental Recurrent Neural Network Dependency Parser with Search-based Discriminative Training

by Majid Yazdani, James Henderson
"... We propose a discriminatively trained recurrent neural network (RNN) that predicts the actions for a fast and accurate shift-reduce dependency parser. The RNN uses its output-dependent model structure to compute hidden vectors that encode the preceding partial parse, and uses them to estimate probab ..."
Abstract
We propose a discriminatively trained recurrent neural network (RNN) that predicts the actions for a fast and accurate shift-reduce dependency parser. The RNN uses its output-dependent model structure to compute hidden vectors that encode the preceding partial parse, and uses them to estimate probabilities of parser actions. Unlike a similar previous generative model (Henderson and Titov, 2010), the RNN is trained discriminatively to optimize a fast beam search. This beam search prunes after each shift action, so we add a correctness probability to each shift action and train this score to discriminate between correct and incorrect sequences of parser actions. We also speed up parsing time by caching computations for frequent feature combinations, including during training, giving us both faster training and a form of backoff smoothing. The resulting parser is over 35 times faster than its generative counterpart with nearly the same accuracy, producing state-of-the-art dependency parsing results while requiring minimal feature engineering.
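The prune-after-shift step described here can be sketched in isolation: each candidate action sequence's model score is combined with a correctness score, and only the top of the beam survives. The scores and the toy correctness function below are illustrative stand-ins for the trained RNN quantities.

```python
def prune_after_shift(candidates, correctness, beam_size):
    """Re-rank candidates that have just performed a shift by adding a
    trained correctness log-score, then keep the top beam_size.

    `candidates` is a list of (action_sequence, log_score) pairs;
    `correctness` maps an action sequence to a correctness log-score
    (a stand-in for the discriminatively trained score in the abstract).
    """
    rescored = [(seq, s + correctness(seq)) for seq, s in candidates]
    rescored.sort(key=lambda item: item[1], reverse=True)
    return rescored[:beam_size]

# Three partial parses, each just after a shift (toy log-scores).
cands = [(("shift", "reduce", "shift"), -1.0),
         (("shift", "shift"), -0.8),
         (("shift", "reduce", "reduce", "shift"), -1.2)]
corr = lambda seq: -0.1 * seq.count("reduce")  # toy correctness score
kept = prune_after_shift(cands, corr, beam_size=2)
```

Because pruning happens after every shift rather than after every action, the beam stays small while still comparing hypotheses that have consumed the same number of input words.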

Citation Context

... (e.g. (Socher et al., 2011; Socher et al., 2013; Collobert, 2011)), and those whose design are motivated mostly by efficient inference and decoding (e.g. (Henderson, 2003; Henderson and Titov, 2010; Henderson et al., 2013; Chen and Manning, 2014)). The first group of neural network parsers are all deep models, such as RNNs, which gives them the power to induce vector representations for complex linguistic structures w...

A Model of Zero-Shot Learning of Spoken Language Understanding

by Majid Yazdani, James Henderson
"... When building spoken dialogue systems for a new domain, a major bottleneck is developing a spoken language understand-ing (SLU) module that handles the new domain’s terminology and semantic con-cepts. We propose a statistical SLU model that generalises to both previously unseen input words and previ ..."
Abstract
When building spoken dialogue systems for a new domain, a major bottleneck is developing a spoken language understanding (SLU) module that handles the new domain’s terminology and semantic concepts. We propose a statistical SLU model that generalises to both previously unseen input words and previously unseen output classes by leveraging unlabelled data. After mapping the utterance into a vector space, the model exploits the structure of the output labels by mapping each label to a hyperplane that separates utterances with and without that label. Both these mappings are initialised with unsupervised word embeddings, so they can be computed even for words or concepts which were not in the SLU training data.
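The label-as-hyperplane idea can be sketched with toy embeddings: both the utterance and the label name are mapped into the same embedding space, so a label unseen in SLU training data still gets a scoring direction. The embedding values, the averaging composition, and the omission of a learned weight matrix between the two vectors are all simplifying assumptions of this sketch.

```python
# Toy pre-trained word embeddings (illustrative values; in the abstract
# these come from unsupervised training, so unseen words still have them).
EMB = {
    "book":   [1.0, 0.0, 0.2],
    "flight": [0.8, 0.1, 0.0],
    "play":   [0.0, 1.0, 0.1],
    "music":  [0.1, 0.9, 0.0],
}

def embed(words):
    """Average the word embeddings: a simple stand-in for the
    utterance and label mappings described in the abstract."""
    dim = len(next(iter(EMB.values())))
    return [sum(EMB[w][i] for w in words) / len(words) for i in range(dim)]

def score(utterance, label_words):
    """Dot product of utterance and label vectors: the label's
    hyperplane separates utterances with and without that label."""
    u, v = embed(utterance), embed(label_words)
    return sum(a * b for a, b in zip(u, v))

in_domain = score(["book", "flight"], ["flight"])
off_domain = score(["play", "music"], ["flight"])
```

Even without any supervised SLU examples mentioning "flight", the score separates a relevant utterance from an irrelevant one, which is the zero-shot generalisation the paper targets.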

Citation Context

... we use bigrams, since they have been shown previously to be effective features for this task (Henderson et al., 2012). Following the success in transfer learning from parsing to understanding tasks (Henderson et al., 2013; Socher et al., 2013), we use dependency parse bigrams in our features as well. We learn to build a local representation at each word position in the utterance by using the word representation, adjac...

© 2007-2019 The Pennsylvania State University