Results 1 - 4 of 4
A Discriminative Model for Semantics-to-String Translation
"... We present a feature-rich discriminative model for machine translation which uses an abstract semantic representation on the source side. We include our model as an additional feature in a phrase-based de-coder and we show modest gains in BLEU score in an n-best re-ranking experiment. 1 ..."
Abstract
- Add to MetaCart
(Show Context)
We present a feature-rich discriminative model for machine translation which uses an abstract semantic representation on the source side. We include our model as an additional feature in a phrase-based decoder and we show modest gains in BLEU score in an n-best re-ranking experiment.
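The re-ranking setup described here can be illustrated with a short sketch: each hypothesis in the decoder's n-best list gets an extra weighted score from the additional model before the list is re-sorted. The `semantic_model_score` function and the weight `w_sem` are hypothetical stand-ins, not the paper's actual feature or tuned weight.

```python
def rerank_nbest(nbest, semantic_model_score, w_sem=1.0):
    """Re-rank an n-best list of (hypothesis, decoder_score) pairs
    by adding a weighted score from an additional model."""
    rescored = [
        (hyp, decoder_score + w_sem * semantic_model_score(hyp))
        for hyp, decoder_score in nbest
    ]
    # Highest combined score first.
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

# Usage: pick the top hypothesis after re-ranking.
# best_translation, _ = rerank_nbest(nbest_list, my_model.score)[0]
```

In practice the feature weight would be tuned on held-out data alongside the decoder's other features.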
Query-Based Single Document Summarization Using an Ensemble Noisy Auto-Encoder
"... In this paper we use a deep auto-encoder for extractive query-based summarization. We experiment with different input repre-sentations in order to overcome the prob-lems stemming from sparse inputs charac-teristic to linguistic data. In particular, we propose constructing a local vocabulary for each ..."
Abstract
- Add to MetaCart
(Show Context)
In this paper we use a deep auto-encoder for extractive query-based summarization. We experiment with different input representations in order to overcome the problems stemming from sparse inputs characteristic to linguistic data. In particular, we propose constructing a local vocabulary for each document and adding a small random noise to the input. Also, we propose using inputs with added noise in an Ensemble Noisy Auto-Encoder (ENAE) that combines the top ranked sentences from multiple runs on the same input with different added noise. We test our model on a publicly available email dataset that is specifically designed for text summarization. We show that although an auto-encoder can be a quite effective summarizer, adding noise to the input and running a noisy ensemble can make improvements.
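A minimal sketch of the ensemble-by-noise idea: score sentences with the same trained model several times, each time perturbing the input with small random noise, and keep the sentences that rank highly most often. The `encode` and `relevance_score` callables and all parameter values here are hypothetical, and vote counting is one simple way to combine runs, not necessarily the paper's exact scheme.

```python
import numpy as np

def enae_summarize(sentences, encode, relevance_score,
                   runs=10, top_k=3, noise_scale=0.01, seed=0):
    """Extract top_k sentences by aggregating rankings over noisy runs."""
    rng = np.random.default_rng(seed)
    votes = {i: 0 for i in range(len(sentences))}
    for _ in range(runs):
        scores = []
        for i, sent in enumerate(sentences):
            x = encode(sent)  # input vector for this sentence
            x_noisy = x + rng.normal(0.0, noise_scale, size=x.shape)
            scores.append((relevance_score(x_noisy), i))
        # Vote for the top-k sentences of this noisy run.
        for _, i in sorted(scores, reverse=True)[:top_k]:
            votes[i] += 1
    # Summary = most-voted sentences, emitted in document order.
    chosen = sorted(sorted(votes, key=votes.get, reverse=True)[:top_k])
    return [sentences[i] for i in chosen]
```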
Broad-coverage CCG Semantic Parsing with AMR
"... We propose a grammar induction tech-nique for AMR semantic parsing. While previous grammar induction techniques were designed to re-learn a new parser for each target application, the recently anno-tated AMR Bank provides a unique op-portunity to induce a single model for un-derstanding broad-covera ..."
Abstract
- Add to MetaCart
(Show Context)
We propose a grammar induction technique for AMR semantic parsing. While previous grammar induction techniques were designed to re-learn a new parser for each target application, the recently annotated AMR Bank provides a unique opportunity to induce a single model for understanding broad-coverage newswire text and support a wide range of applications. We present a new model that combines CCG parsing to recover compositional aspects of meaning and a factor graph to model non-compositional phenomena, such as anaphoric dependencies. Our approach achieves 66.2 Smatch F1 score on the AMR Bank, significantly outperforming the previous state of the art.
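For context on the reported metric: Smatch views an AMR graph as a set of (source, relation, target) triples and scores a parse by triple-overlap F1 under a one-to-one variable mapping. The full metric searches for the best mapping (typically by hill climbing); the simplified sketch below assumes the mapping is already given.

```python
def smatch_f1(pred_triples, gold_triples, var_map):
    """Triple-overlap F1 with predicted variables renamed via var_map."""
    renamed = {
        (var_map.get(s, s), r, var_map.get(t, t))
        for s, r, t in pred_triples
    }
    matched = len(renamed & set(gold_triples))
    if matched == 0:
        return 0.0
    precision = matched / len(renamed)
    recall = matched / len(set(gold_triples))
    return 2 * precision * recall / (precision + recall)
```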
Better Summarization Evaluation with Word Embeddings for ROUGE
"... ROUGE is a widely adopted, automatic evaluation measure for text summariza-tion. While it has been shown to corre-late well with human judgements, it is bi-ased towards surface lexical similarities. This makes it unsuitable for the evalua-tion of abstractive summarization, or sum-maries with substan ..."
Abstract
- Add to MetaCart
(Show Context)
ROUGE is a widely adopted, automatic evaluation measure for text summarization. While it has been shown to correlate well with human judgements, it is biased towards surface lexical similarities. This makes it unsuitable for the evaluation of abstractive summarization, or summaries with substantial paraphrasing. We study the effectiveness of word embeddings to overcome this disadvantage of ROUGE. Specifically, instead of measuring lexical overlaps, word embeddings are used to compute the semantic similarity of the words used in summaries. Our experimental results show that our proposal is able to achieve better correlations with human judgements when measured with the Spearman and Kendall rank coefficients.
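The core substitution the abstract describes, replacing exact unigram matching with embedding similarity, can be sketched as follows. The `embed` function (e.g. a lookup into pretrained word vectors) is a stand-in, and greedy best-match recall is one simple instantiation, not necessarily the paper's exact formulation.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def soft_unigram_recall(reference, candidate, embed):
    """For each reference word, credit its best soft match in the
    candidate instead of requiring an exact lexical match."""
    if not reference or not candidate:
        return 0.0
    cand_vecs = [embed(w) for w in candidate]
    total = sum(
        max(cosine(embed(w), cv) for cv in cand_vecs)
        for w in reference
    )
    return total / len(reference)
```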