Machine translation quality estimation (MTQE) is firmly in the spotlight as big tech experiments with ways to auto-evaluate machine translation output. The focus on MTQE is justified. Leveraging large language models (LLMs) to analyze machine translation output, and to predict which segments need human review, gives language service providers (LSPs) and internal localization teams managing large content volumes the opportunity to add significant value to existing machine translation workflows.
LSPs and localization teams face growing demand for machine translation services with ever-shorter turnaround times. Understanding which documents yield better or worse machine translation output, and how much effort post-editing will require, is still a manual task for LSPs, localization teams, and linguists alike.
With LLMs, assessing machine translation output automatically is now within reach, although it is not without its complications. At this year’s SlatorCon, a panel of experts discussed how LLMs are still “off-the-shelf starting points” that will need to be fine-tuned with the help of translators.
More and more researchers are analyzing how fine-tuning LLMs with human judgment data can improve the overall quality of machine translation. This has triggered renewed interest in quality frameworks such as multidimensional quality metrics (MQM) and a wave of new prompting techniques built on them.
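To make the MQM-based prompting idea concrete, here is a minimal sketch of how a segment-level quality estimation prompt might be assembled and its parsed output used to flag segments for human review. The prompt template, severity weights, and `needs_review` threshold are illustrative assumptions, not an established implementation; the actual LLM call is omitted.

```python
# Sketch of MQM-style prompting for MT quality estimation.
# Template wording and severity weights are hypothetical examples.

MQM_PROMPT = """You are an expert translator. Identify translation errors
in the candidate translation using MQM categories (accuracy, fluency,
terminology, style). For each error, report: category, severity
(minor/major/critical), and the affected span.

Source ({src_lang}): {source}
Candidate ({tgt_lang}): {candidate}

Errors:"""

# Illustrative weights loosely modeled on MQM severity levels.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def build_prompt(source, candidate, src_lang="en", tgt_lang="de"):
    """Fill the template for one segment; sending it to an LLM is omitted."""
    return MQM_PROMPT.format(src_lang=src_lang, tgt_lang=tgt_lang,
                             source=source, candidate=candidate)

def needs_review(errors, threshold=5):
    """Flag a segment for human post-editing when its weighted error score
    meets the threshold. `errors` is a list of (category, severity) pairs
    parsed from the model's response."""
    score = sum(SEVERITY_WEIGHTS.get(severity, 0) for _, severity in errors)
    return score >= threshold
```

In a workflow like the one described above, segments whose weighted error score stays below the threshold could skip straight to delivery, while flagged segments are routed to linguists.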
Further studies have focused on models learning from context and training data, but these have fallen short in detecting errors in machine translation output, demonstrating the need for more research in MTQE.
As a result, the level of effort required to fully implement this at scale remains high. As Adam Bittlingmayer, CEO of ModelFront, told SlatorPod in April 2023, “anyone doing MTQE is on the more advanced end of the spectrum.”
Slator’s recently released Pro Guide: Translation AI provides a concise snapshot of the latest practical applications of LLMs in translation and includes a use case on MTQE.
The MTQE use case is one of ten one-page examples of LLMs being put to use, and is drawn from research and interviews with some of the industry’s leading language technology providers.