Results 1 - 5 of 5
Joint A* CCG parsing and semantic role labelling
In Proceedings of EMNLP, 2015
"... Joint models of syntactic and semantic parsing have the potential to improve performance on both tasks—but to date, the best results have been achieved with pipelines. We introduce a joint model us-ing CCG, which is motivated by the close link between CCG syntax and semantics. Semantic roles are rec ..."
Abstract (Cited by 1):
Joint models of syntactic and semantic parsing have the potential to improve performance on both tasks—but to date, the best results have been achieved with pipelines. We introduce a joint model using CCG, which is motivated by the close link between CCG syntax and semantics. Semantic roles are recovered by labelling the deep dependency structures produced by the grammar. Furthermore, because CCG is lexicalized, we show it is possible to factor the parsing model over words and introduce a new A* parsing algorithm—which we demonstrate is faster and more accurate than adaptive supertagging. Our joint model is the first to substantially improve both syntactic and semantic accuracy over a comparable pipeline, and also achieves state-of-the-art results for a non-ensemble semantic role labelling model.
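The abstract's key algorithmic point is that factoring the parse score over words makes an admissible A* heuristic straightforward. Below is a minimal sketch of that idea, assuming (per the abstract) that a parse scores as the sum of per-word supertag log-probabilities; `combine` is a hypothetical stand-in for the CCG combinatory rules and none of these names come from the paper.

```python
import heapq

def astar_parse(tag_logprobs, combine):
    """A sketch of supertag-factored A* parsing.
    tag_logprobs: one dict {category (str): log_prob} per word.
    combine(left_cat, right_cat) -> iterable of result categories
    (hypothetical stand-in for the CCG combinators)."""
    n = len(tag_logprobs)
    # suffix[i] = sum of best category scores for words i..n-1
    suffix = [0.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + max(tag_logprobs[i].values())

    def outside(i, j):
        # Admissible upper bound on the score of words outside [i, j):
        # each outside word contributes at most its best supertag score.
        return (suffix[0] - suffix[i]) + suffix[j]

    agenda = []  # entries: (-(inside + outside), i, j, category, inside)
    for i, tags in enumerate(tag_logprobs):
        for cat, lp in tags.items():
            heapq.heappush(agenda, (-(lp + outside(i, i + 1)), i, i + 1, cat, lp))

    chart = {}  # (i, j, category) -> best inside score
    while agenda:
        _, i, j, cat, inside = heapq.heappop(agenda)
        if (i, j, cat) in chart:
            continue  # a better derivation of this item was popped first
        chart[(i, j, cat)] = inside
        if i == 0 and j == n:
            # Heuristic is admissible, so the first full-span item is optimal.
            return cat, inside
        for (k, l, other), s in list(chart.items()):
            if l == i:  # neighbour on the left: [k, i) + [i, j)
                for res in combine(other, cat):
                    heapq.heappush(agenda,
                        (-(s + inside + outside(k, j)), k, j, res, s + inside))
            elif k == j:  # neighbour on the right: [i, j) + [j, l)
                for res in combine(cat, other):
                    heapq.heappush(agenda,
                        (-(inside + s + outside(i, l)), i, l, res, inside + s))
    return None
```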
Potsdam: Semantic dependency parsing by bidirectional graph-tree transformations and syntactic parsing
In Proceedings of SemEval, 2014
"... We present the Potsdam systems that par-ticipated in the semantic dependency pars-ing shared task of SemEval 2014. They are based on linguistically motivated bidi-rectional transformations between graphs and trees and on utilization of syntactic de-pendency parsing. They were entered in both the clo ..."
Abstract (Cited by 1):
We present the Potsdam systems that participated in the semantic dependency parsing shared task of SemEval 2014. They are based on linguistically motivated bidirectional transformations between graphs and trees and on utilization of syntactic dependency parsing. They were entered in both the closed track and the open track of the challenge, recording a peak average labeled F1 score of 78.60.
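One way to picture a bidirectional graph-tree transformation: a reentrant node (one with several heads) keeps a single head, so the structure becomes a tree a standard dependency parser can learn, and each dropped edge is encoded in the kept edge's label so the graph can be restored afterwards. The encoding below is an illustrative round-trip sketch, not the Potsdam systems' actual scheme.

```python
def graph_to_tree(edges):
    """edges: list of (head, dependent, label) where a dependent may
    have several heads. Returns tree edges (one head per dependent),
    with dropped edges recorded as offset:label suffixes in the kept
    edge's label (an illustrative encoding, not the paper's)."""
    by_dep = {}
    for h, d, l in edges:
        by_dep.setdefault(d, []).append((h, l))
    tree = []
    for d, heads in by_dep.items():
        (h0, l0), extra = heads[0], heads[1:]
        # encode each dropped head as a relative offset from the dependent
        suffix = "".join(f"+{h - d}:{l}" for h, l in extra)
        tree.append((h0, d, l0 + suffix))
    return tree

def tree_to_graph(tree_edges):
    """Inverse transformation: expand augmented labels back into edges."""
    graph = []
    for h, d, label in tree_edges:
        base, *extras = label.split("+")
        graph.append((h, d, base))
        for e in extras:
            off, l = e.split(":")
            graph.append((d + int(off), d, l))
    return graph

# Round-trip example: node 3 has two heads (1 and 5).
g = [(1, 3, "ARG1"), (5, 3, "ARG2")]
assert sorted(tree_to_graph(graph_to_tree(g))) == sorted(g)
```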
Unsupervised Parsing for Generating Surface-Based Relation Extraction Patterns
"... Finding the right features and patterns for identifying relations in natural language is one of the most pressing research ques-tions for relation extraction. In this pa-per, we compare patterns based on super-vised and unsupervised syntactic parsing and present a simple method for extract-ing surfa ..."
Abstract (Cited by 1):
Finding the right features and patterns for identifying relations in natural language is one of the most pressing research questions for relation extraction. In this paper, we compare patterns based on supervised and unsupervised syntactic parsing and present a simple method for extracting surface patterns from a parsed training set. Results show that the use of surface-based patterns not only increases extraction speed, but also improves the quality of the extracted relations. We find that, in this setting, unsupervised parsing, besides requiring fewer resources, compares favorably in terms of extraction quality.
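As a rough illustration of what a surface-based pattern is, the sketch below takes the token sequence between two marked relation arguments in a training sentence and turns it into a matcher for new text. The slot regex and helper names are hypothetical, not the paper's method.

```python
import re

def extract_pattern(tokens, arg1, arg2):
    """tokens: sentence tokens; arg1/arg2: (start, end) token spans of
    the relation arguments. Returns the surface string between them."""
    (s1, e1), (s2, e2) = sorted([arg1, arg2])
    return " ".join(tokens[e1:s2])

def pattern_regex(middle):
    # Slots match runs of capitalised tokens, a crude entity stand-in.
    slot = r"((?:[A-Z]\w*\s*)+)"
    return re.compile(slot + re.escape(middle) + r"\s+" + slot)

# Learn a pattern from one instance, then apply it to new text.
toks = "Alan Turing was born in London".split()
pat = extract_pattern(toks, (0, 2), (5, 6))            # "was born in"
m = pattern_regex(pat).search("Ada Lovelace was born in London")
print(m.group(1).strip(), "|", m.group(2).strip())     # Ada Lovelace | London
```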
Incremental Recurrent Neural Network Dependency Parser with Search-based Discriminative Training
"... We propose a discriminatively trained recurrent neural network (RNN) that predicts the actions for a fast and accurate shift-reduce dependency parser. The RNN uses its output-dependent model structure to compute hidden vectors that encode the preceding partial parse, and uses them to estimate probab ..."
Abstract:
We propose a discriminatively trained recurrent neural network (RNN) that predicts the actions for a fast and accurate shift-reduce dependency parser. The RNN uses its output-dependent model structure to compute hidden vectors that encode the preceding partial parse, and uses them to estimate probabilities of parser actions. Unlike a similar previous generative model (Henderson and Titov, 2010), the RNN is trained discriminatively to optimize a fast beam search. This beam search prunes after each shift action, so we add a correctness probability to each shift action and train this score to discriminate between correct and incorrect sequences of parser actions. We also speed up parsing time by caching computations for frequent feature combinations, including during training, giving us both faster training and a form of backoff smoothing. The resulting parser is over 35 times faster than its generative counterpart with nearly the same accuracy, producing state-of-the-art dependency parsing results while requiring minimal feature engineering.
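A hypothetical sketch of the pruning regime described above: hypotheses advance word by word, each SHIFT contributes its action log-prob plus the trained correctness score, and the beam is pruned only after every hypothesis has shifted. The `step` interface stands in for the RNN scorer and transition system and is not from the paper.

```python
def beam_parse(init_state, n_words, step, beam_size=8):
    """step(state) -> list of (next_state, log_prob, was_shift, corr)
    over legal actions, where corr is the shift-correctness score
    (ignored for non-shift actions). Hypothetical interface."""
    beam = [(0.0, init_state)]
    for _ in range(n_words):
        shifted = []
        for score, state in beam:
            # Expand through reduce actions until the next SHIFT.
            frontier = [(score, state)]
            while frontier:
                sc, st = frontier.pop()
                for nst, lp, was_shift, corr in step(st):
                    if was_shift:
                        # Shifts score with log-prob plus correctness.
                        shifted.append((sc + lp + corr, nst))
                    else:
                        frontier.append((sc + lp, nst))
        # Prune only after every hypothesis has consumed the next word.
        beam = sorted(shifted, key=lambda h: -h[0])[:beam_size]
    # (Trailing reduce actions to reach a complete parse omitted.)
    return max(beam, key=lambda h: h[0])
```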
A Model of Zero-Shot Learning of Spoken Language Understanding
"... When building spoken dialogue systems for a new domain, a major bottleneck is developing a spoken language understand-ing (SLU) module that handles the new domain’s terminology and semantic con-cepts. We propose a statistical SLU model that generalises to both previously unseen input words and previ ..."
Abstract:
When building spoken dialogue systems for a new domain, a major bottleneck is developing a spoken language understanding (SLU) module that handles the new domain’s terminology and semantic concepts. We propose a statistical SLU model that generalises to both previously unseen input words and previously unseen output classes by leveraging unlabelled data. After mapping the utterance into a vector space, the model exploits the structure of the output labels by mapping each label to a hyperplane that separates utterances with and without that label. Both these mappings are initialised with unsupervised word embeddings, so they can be computed even for words or concepts which were not in the SLU training data.
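A minimal sketch of the zero-shot scoring idea, under simplifying assumptions: the utterance is embedded as the mean of its word vectors, and each label's hyperplane is initialised from the embedding of the label's own name, so a label unseen in SLU training can still be scored. The `emb` lookup and the underscore-separated label-name convention are hypothetical, not the paper's.

```python
import numpy as np

def embed_utterance(tokens, emb):
    """Mean of pretrained word vectors (emb: dict word -> np.ndarray)."""
    return np.mean([emb[t] for t in tokens if t in emb], axis=0)

def init_label_hyperplane(label_name, emb):
    """Initialise a label's weight vector from its name's embeddings,
    e.g. "confirm_flight" -> mean of emb["confirm"], emb["flight"]."""
    return np.mean([emb[w] for w in label_name.split("_") if w in emb], axis=0)

def score(tokens, label_name, emb, W=None):
    """Dot product of the utterance vector with the label hyperplane.
    W, if given, is a fine-tuned mapping learned from labelled data;
    without it, unseen labels are still scored via their name embedding."""
    u = embed_utterance(tokens, emb)
    h = init_label_hyperplane(label_name, emb)
    if W is not None:
        h = W @ h
    return float(u @ h)
```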