Results 1 - 10 of 2,027

Correspondence Measures for MT Evaluation

by Lars Ahrenberg, Magnus Merkel , 2000
"... The paper presents a descriptive model for measuring the salient traits and tendencies of a translation as compared with the source text. We present some results from applying the model to the texts of the Linkping Translation Corpus (LTC) that have been produced by different kinds of translation ai ..."
Abstract - Add to MetaCart
aids, and discuss its application to MT evaluation.

METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments

by Alon Lavie, Abhaya Agarwal , 2005
"... Meteor is an automatic metric for Machine Translation evaluation which has been demonstrated to have high levels of correlation with human judgments of translation quality, significantly outperforming the more commonly used Bleu metric. It is one of several automatic metrics used in this year’s shar ..."
Abstract - Cited by 246 (8 self) - Add to MetaCart
shared task within the ACL WMT-07 workshop. This paper recaps the technical details underlying the metric and describes recent improvements in the metric. The latest release includes improved metric parameters and extends the metric to support evaluation of MT output in Spanish, French and German
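
The abstract describes what METEOR correlates with rather than how it scores; as a rough illustration of the general idea behind METEOR-style scoring (exact unigram matches only, no stemming or synonym modules, a greedy aligner and default parameters chosen here for illustration, not the tuned values from the paper), a minimal sketch might look like this:

```python
def meteor_like(hypothesis, reference, alpha=0.9, beta=3.0, gamma=0.5):
    """Toy METEOR-style score: exact unigram matches only, a harmonic mean of
    precision and recall weighted toward recall, and a fragmentation penalty
    based on how many contiguous chunks the matches form."""
    hyp, ref = hypothesis.split(), reference.split()

    # Greedy left-to-right alignment of hypothesis tokens to unused reference positions.
    used = [False] * len(ref)
    alignment = []  # (hyp_index, ref_index) pairs for matched tokens
    for i, tok in enumerate(hyp):
        for j, rtok in enumerate(ref):
            if not used[j] and rtok == tok:
                used[j] = True
                alignment.append((i, j))
                break

    m = len(alignment)
    if m == 0:
        return 0.0

    precision, recall = m / len(hyp), m / len(ref)
    fmean = precision * recall / (alpha * precision + (1 - alpha) * recall)

    # A chunk is a maximal run of matches that is contiguous in both strings.
    chunks = 1
    for (i1, j1), (i2, j2) in zip(alignment, alignment[1:]):
        if i2 != i1 + 1 or j2 != j1 + 1:
            chunks += 1

    penalty = gamma * (chunks / m) ** beta
    return fmean * (1 - penalty)

print(meteor_like("the cat sat on the mat", "the cat is on the mat"))
```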

PORTAGE in the NIST 2009 MT Evaluation

by George Foster, Boxing Chen, Eric Joanis, Howard Johnson, Samuel Larkin , 2009
"... This report describes experiments performed during preparations for the NIST 2009 MT evaluation, where we participated in the con-strained Chinese-English condition with PORTAGE. The aim is to publicize new findings about the optimum configuration, and to sug- ..."
Abstract - Add to MetaCart
This report describes experiments performed during preparations for the NIST 2009 MT evaluation, where we participated in the con-strained Chinese-English condition with PORTAGE. The aim is to publicize new findings about the optimum configuration, and to sug-

Error classification for MT evaluation

by Mary A. Flanagan - In Proceedings of the 1st Conference of the Association for Machine Translation in the Americas (AMTA), 1994
"... A classification system for errors in machine translation (MT) output is presented. Translation errors are assigned to categories to provide a systematic basis for comparing the translations produced by competing MT systems. The classification system is designed for use by potential MT users, rather ..."
Abstract - Cited by 17 (0 self) - Add to MetaCart
A classification system for errors in machine translation (MT) output is presented. Translation errors are assigned to categories to provide a systematic basis for comparing the translations produced by competing MT systems. The classification system is designed for use by potential MT users

A Linguistically Motivated MT Evaluation System Based on SVM Regression

by Muyun Yang, Shuqi Sun, Jufeng Li, Sheng Li, Tiejun Zhao
"... This paper describes the automatic MT evaluation ..."
Abstract - Cited by 1 (0 self) - Add to MetaCart
This paper describes the automatic MT evaluation

An Introduction to MT Evaluation

by Eduard Hovy, Maghi King, Andrei Popescu-Belis
"... ..."
Abstract - Cited by 2 (0 self) - Add to MetaCart
Abstract not found

Fully Automatic Semantic MT Evaluation

by Chi-kiu Lo, Karthik Tumuluru, Dekai Wu - Proceedings of the 7th Workshop on Statistical Machine Translation , 2012
"... We introduce the first fully automatic, fully seman-tic frame based MT evaluation metric, MEANT, that outperforms all other commonly used auto-matic metrics in correlating with human judgment on translation adequacy. Recent work on HMEANT, which is a human metric, indicates that machine translation ..."
Abstract - Cited by 6 (1 self) - Add to MetaCart
We introduce the first fully automatic, fully seman-tic frame based MT evaluation metric, MEANT, that outperforms all other commonly used auto-matic metrics in correlating with human judgment on translation adequacy. Recent work on HMEANT, which is a human metric, indicates that machine translation

Metrics for MT evaluation: Evaluating reordering

by Alexandra Birch, Miles Osborne, Phil Blunsom - Machine Translation
"... Abstract. Translating between dissimilar languages requires an account of the use of divergent word orders when expressing the same semantic content. Reordering poses a serious problem for statistical machine translation systems and has generated a considerable body of research aimed at meeting its ..."
Abstract - Cited by 9 (1 self) - Add to MetaCart
challenges. Direct evaluation of reordering requires automatic metrics that explicitly measure the quality of word order choices in translations. Current metrics, such as BLEU, only evaluate re-ordering indirectly. We analyse the ability of current metrics to capture reordering performance. We then introduce
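
As an illustrative sketch of how reordering can be measured explicitly (not necessarily the exact metric introduced in the paper), one common approach is to take the permutation of aligned source positions induced by the hypothesis and score its distance from the reference ordering with Kendall's tau:

```python
def kendall_tau_distance(perm):
    """Normalised Kendall's tau distance of a permutation from the identity:
    the fraction of word pairs that appear in inverted order. 0 means the
    ordering is preserved exactly; 1 means it is completely reversed."""
    n = len(perm)
    if n < 2:
        return 0.0
    inversions = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if perm[i] > perm[j]
    )
    return inversions / (n * (n - 1) / 2)

def reordering_score(perm):
    """Turn the distance into a similarity in [0, 1]: higher means word order
    closer to the reference."""
    return 1.0 - kendall_tau_distance(perm)

# Example: a hypothesis whose aligned source positions come out as 2, 0, 1, 3
# relative to the reference ordering (two out-of-place pairs).
print(reordering_score([2, 0, 1, 3]))  # 1 - 2/6 ≈ 0.667
```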

Contextual bitext-derived paraphrases in automatic MT evaluation

by Karolina Owczarzak, Declan Groves, Josef van Genabith, Andy Way - In Proceedings of the HLT-NAACL 2006 Workshop on Statistical Machine Translation , 2006
"... In this paper we present a novel method for deriving paraphrases during automatic MT evaluation using only the source and reference texts, which are necessary for the evaluation, and word and phrase alignment software. Using target language paraphrases produced through word and phrase alignment a nu ..."
Abstract - Cited by 22 (0 self) - Add to MetaCart
In this paper we present a novel method for deriving paraphrases during automatic MT evaluation using only the source and reference texts, which are necessary for the evaluation, and word and phrase alignment software. Using target language paraphrases produced through word and phrase alignment a

A study of translation edit rate with targeted human annotation

by Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, John Makhoul - In Proceedings of Association for Machine Translation in the Americas , 2006
"... We examine a new, intuitive measure for evaluating machine-translation output that avoids the knowledge intensiveness of more meaning-based approaches, and the labor-intensiveness of human judgments. Translation Edit Rate (TER) measures the amount of editing that a human would have to perform to cha ..."
Abstract - Cited by 583 (9 self) - Add to MetaCart
We examine a new, intuitive measure for evaluating machine-translation output that avoids the knowledge intensiveness of more meaning-based approaches, and the labor-intensiveness of human judgments. Translation Edit Rate (TER) measures the amount of editing that a human would have to perform
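
A minimal sketch of the quantity TER measures, assuming only insertions, deletions and substitutions (the released tercom tool additionally allows block shifts and multiple references), divides word-level edit distance by the reference length:

```python
def ter_no_shifts(hypothesis, reference):
    """Simplified Translation Edit Rate: word-level edit distance (insertions,
    deletions, substitutions) divided by the number of reference words.
    Full TER additionally allows block shifts at a cost of one edit."""
    hyp, ref = hypothesis.split(), reference.split()

    # Standard dynamic-programming edit distance over words.
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, start=1):
        curr = [i]
        for j, r in enumerate(ref, start=1):
            cost = 0 if h == r else 1
            curr.append(min(prev[j] + 1,          # drop the hypothesis word
                            curr[j - 1] + 1,      # insert the reference word
                            prev[j - 1] + cost))  # substitution or match
        prev = curr

    return prev[len(ref)] / max(len(ref), 1)

print(ter_no_shifts("the cat sat on mat", "the cat sat on the mat"))  # 1 edit / 6 words ≈ 0.167
```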