TY - GEN
T1 - Pushing the Right Buttons: Adversarial Evaluation of Quality Estimation
T2 - 6th Conference on Machine Translation, WMT 2021
AU - Kanojia, Diptesh
AU - Fomicheva, Marina
AU - Ranasinghe, Tharindu
AU - Blain, Frédéric
AU - Orasan, Constantin
AU - Specia, Lucia
N1 - Copyright 2021 the authors, Creative Commons Attribution 4.0 International (CC BY 4.0)
PY - 2021/9/22
Y1 - 2021/9/22
N2 - Current Machine Translation (MT) systems achieve very good results on a growing variety of language pairs and datasets. However, they are known to produce fluent translation outputs that can contain important meaning errors, thus undermining their reliability in practice. Quality Estimation (QE) is the task of automatically assessing the performance of MT systems at test time. Thus, in order to be useful, QE systems should be able to detect such errors. However, this ability is yet to be tested in the current evaluation practices, where QE systems are assessed only in terms of their correlation with human judgements. In this work, we bridge this gap by proposing a general methodology for adversarial testing of QE for MT. First, we show that despite a high correlation with human judgements achieved by the recent SOTA, certain types of meaning errors are still problematic for QE to detect. Second, we show that on average, the ability of a given model to discriminate between meaning-preserving and meaning-altering perturbations is predictive of its overall performance, thus potentially allowing for comparing QE systems without relying on manual quality annotation.
AB - Current Machine Translation (MT) systems achieve very good results on a growing variety of language pairs and datasets. However, they are known to produce fluent translation outputs that can contain important meaning errors, thus undermining their reliability in practice. Quality Estimation (QE) is the task of automatically assessing the performance of MT systems at test time. Thus, in order to be useful, QE systems should be able to detect such errors. However, this ability is yet to be tested in the current evaluation practices, where QE systems are assessed only in terms of their correlation with human judgements. In this work, we bridge this gap by proposing a general methodology for adversarial testing of QE for MT. First, we show that despite a high correlation with human judgements achieved by the recent SOTA, certain types of meaning errors are still problematic for QE to detect. Second, we show that on average, the ability of a given model to discriminate between meaning-preserving and meaning-altering perturbations is predictive of its overall performance, thus potentially allowing for comparing QE systems without relying on manual quality annotation.
UR - http://www.scopus.com/inward/record.url?scp=85127214482&partnerID=8YFLogxK
UR - https://arxiv.org/abs/2109.10859
U2 - 10.48550/arXiv.2109.10859
DO - 10.48550/arXiv.2109.10859
M3 - Conference publication
AN - SCOPUS:85127214482
T3 - WMT 2021 - 6th Conference on Machine Translation, Proceedings
SP - 625
EP - 638
BT - WMT 2021 - 6th Conference on Machine Translation, Proceedings
PB - Association for Computational Linguistics (ACL)
Y2 - 10 November 2021 through 11 November 2021
ER -