Tech Solution for Scaling Human Evaluation of MT Quality with MQM and Adequacy-Fluency


Vendor-agnostic, purpose-built software platform for human quality evaluation and analysis of MT output quality at scale

Tallinn, ESTONIA, June 4, 2021 — As Google indicated in a recent research paper covered by Slator, trained linguists applying proven approaches such as Multidimensional Quality Metrics (MQM) to the evaluation of Machine Translation engine output uncover a substantially different and much more accurate picture of MT quality than crowdsourced, non-expert evaluation approaches do.

Machine Translation is now ubiquitous: it is accessible, cheap or free, and in many language combinations the results look impressive. However, failure to reliably detect critical errors early and assess their impact on global content production can have disastrous consequences for a company's customer relationships, brand perception, content cost, and time-to-market. That is why regular, methodical Machine Translation quality evaluation by human linguists has become a critical part of MT deployment and operations at top corporations, governments, and LSPs. Making this crucial process fast and efficient at scale, however, is nearly impossible if a team relies on makeshift solutions like spreadsheets.

ContentQuo’s Translation Quality Management SaaS is used by top Machine Translation teams at corporate localization departments, government translation agencies, and Global Top-100 Language Service Providers to efficiently assess the quality of output from multiple MT models with professional linguists, using MQM-based and other methods. This gives teams human-centric, proactive insight into MT quality before an engine is deployed to production or its output is published to end users, readers, players, or post-editors.

ContentQuo for MT is the only purpose-built, supplier-independent, enterprise-grade technology solution for human Linguistic Quality Evaluation that supports both the MQM methodology used by Google researchers (also known as Error Annotation) and cheaper, faster methodologies like Adequacy-Fluency (also known as Rating Scale) in a single environment. It offers unprecedented flexibility in customizing the evaluation methodologies to meet diverse needs and use cases, including error categorization, scoring, and many other aspects.
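To illustrate the Error Annotation approach, a common MQM-style formulation weights annotated errors by severity, sums the penalty points, and normalizes by word count. The severity weights and scoring formula below are typical illustrative defaults, not ContentQuo's actual configuration (which, as noted, is customizable):

```python
# Illustrative MQM-style scoring sketch. The severity weights below are
# common defaults in the literature, not ContentQuo's actual values.
SEVERITY_WEIGHTS = {"neutral": 0, "minor": 1, "major": 5, "critical": 25}

def mqm_score(error_severities, word_count, max_score=100):
    """Quality score: max_score minus weighted penalty points per 100 words."""
    penalty = sum(SEVERITY_WEIGHTS[sev] for sev in error_severities)
    return max_score - (penalty / word_count) * 100

# Example: two minor errors and one major error in a 250-word sample
print(round(mqm_score(["minor", "minor", "major"], 250), 2))  # → 97.2
```

A Rating Scale (Adequacy-Fluency) evaluation, by contrast, skips per-error annotation and simply collects holistic 1-to-5 scores per segment, which is why it is cheaper and faster.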

Its vendor-agnostic nature means that MT teams are free to use any combination of in-house linguists, freelance linguists, and Language Service Providers that can offer strong expertise in evaluating MT output. This enables teams to build hybrid supply chains and meet tight budget & schedule constraints, while keeping 360-degree visibility of their MT quality and maintaining control over the methodology and process (which is essential for delivering reliable insights at scale).

“MT evaluation has considerably gained relevance in CPSL in the last few years, as we needed to see exactly where the output failed to be able to continue delivering our top quality standards,” said Salvador Jiménez, Language Quality Specialist at CPSL Language Services, a leading multilingual business solutions provider based in Spain. “Thanks to ContentQuo’s smooth user experience, automation features and customisability, added to the expertise of our best linguist professionals, we have gained very valuable insight much more efficiently than we would have with manual Error Annotation processes.”

It’s well understood that MT model performance varies widely across language pairs, domains, and content types. Customizable analytics and reports built into ContentQuo enable Machine Translation teams to understand their MT engine quality and how it changes over time across their entire pool of MT models. These deep insights then help MT teams derive the best strategies for engine re-training, pre-editing, and post-editing in order to maximize the positive impact and minimize the quality risk of Machine Translation usage.

ContentQuo also offers advanced features that help make human MT quality evaluations more objective, such as having the same translation independently assessed by multiple linguists, or support for multiple Edit Distance metrics on mock Post-Editing tasks. For advanced analysis and visualization, it’s easy to connect ContentQuo as a data source (via REST API or CSV export) to Business Intelligence platforms such as PowerBI, Tableau, and Qlik. Companies with sophisticated internal tech platforms also have the option to integrate ContentQuo into their MT operations via its REST API.
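The intuition behind Edit Distance metrics on mock Post-Editing tasks is simple: the fewer edits a linguist needs to turn raw MT into an acceptable translation, the better the engine. A minimal sketch of character-level Levenshtein distance (one of several possible edit distance metrics; the example strings are hypothetical, and production tools typically also report normalized or word-level variants such as TER):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits transforming a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

raw_mt = "The cat sat on mat"
post_edited = "The cat sat on the mat"
dist = levenshtein(raw_mt, post_edited)
print(dist)                                  # → 4 (inserting "the ")
print(round(dist / len(post_edited), 3))     # length-normalized distance
```

Normalizing by the post-edited length makes scores comparable across segments of different sizes, which is what makes such metrics useful for tracking engines over time.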

CPSL and other ContentQuo customers report up to 90% less manual overhead spent on managing multiple MT evaluation tasks and human evaluators, significantly reducing staffing costs for MT programs and increasing the capacity of existing MT staff. Customers also enjoy up to 500% faster time-to-insights into the quality of their MT engines, allowing them to make better business decisions (such as quoting at the full rate vs the post-editing rate) within 3-4 hours instead of 1-2 weeks.

“Human evaluation is valuable indeed, but it is neither fast nor cheap if you outsource it to trained professionals, which is what we do at CPSL. In this sense, moving from spreadsheets to ContentQuo has been a big leap for us,” said Lucía Guerrero, Machine Translation Specialist at CPSL. “The holistic evaluation profile, with adequacy/fluency scores per segment, allows us to obtain an overall quality score for a machine-translated sample, while other profiles allow us to compare the raw MT with the post-edited version and identify and categorize the errors. All our professional evaluators prefer ContentQuo over spreadsheets, our customers benefit from the speed with which we can send our proposals, and its scalability helps us keep evaluation costs within budget.”

To learn how ContentQuo’s technology can help your organization streamline its human MT quality evaluation process, please schedule an introductory call through our website.

For further information, please contact:

Vendor-agnostic MT quality evaluation platform

Kirill Soloviev
CEO & Co-Founder, ContentQuo
Phone: +372 5361 6727 (GMT+3, EET/EEST)

Expert MT quality evaluation services in any language pair

Muntsa Cuchí
Global Sales Director, CPSL
Phone: +34 93 445 17 63 (GMT+2, CET/CEST)