November 14, 2018
ContentQuo Announces Solution for Manual Evaluation of MT Quality
LSPs can now scale their Machine Translation quality programs with a flexible, vendor-agnostic solution that combines a manual quality evaluation platform with easy access to many MT engines
Tallinn, ESTONIA, Nov. 12, 2018 — ContentQuo, an Estonian technology startup serving Global Top-10 LSPs and large corporate & government translation buyers, has today announced dedicated features for linguistic evaluation of Machine Translation quality as the newest addition to its scalable, enterprise-grade Translation Quality Management SaaS platform.
Companies can now effortlessly measure and track the quality of raw MT output through holistic, segment-level human evaluation models (such as Adequacy-Fluency), with edit-distance metrics for post-edited MT launching in Q1 2019. These holistic assessments can be combined with the flexible MQM-DQF analytical quality evaluations and post-editing capabilities already available in ContentQuo to enable continuous improvement of custom MT models.
“More and more mature translation vendors and buyers are turning to ContentQuo for an enterprise-wide translation quality management solution, spanning scenarios as diverse as translator testing, regular evaluations, customer escalation handling, random risk-based audits, LQA project automation, and vendor feedback,” said Kirill Soloviev, Co-Founder and Head of Product at ContentQuo. “We are very excited to be adding holistic quality evaluation to our feature portfolio, as this enables our customers to also run linguistic assessments of Machine Translation, providing them with a way to manage and reduce quality risk across all of their HT, MT, and PEMT workflows from the same platform.”
ContentQuo’s newest features, combined with Intento Web Tools that allow easy access to dozens of MT engines without API integration, deliver a complete, cost-effective solution for organizations looking to evaluate, select, deploy, and continuously optimize Neural Machine Translation across a large customer & engine portfolio.
“There is no one-size-fits-all MT model, so successful deployment of Machine Translation requires careful evaluation of all available options,” said Konstantin Savenkov, CEO and Co-Founder of Intento. “Intento helps evaluate multiple MT solutions at once using reference-based scoring, but often we identify several MT models equally close to human translation. It’s up to translators and domain experts to make the final choice, and I am excited that ContentQuo supports both LQA and HTER scenarios. Together, our tools form a turnkey solution for building the MT evaluation process in either an LSP or an enterprise setting.”
Holistic MT quality evaluation features are available now to select ContentQuo customers and will be generally available in early December 2018. To see a live demo of ContentQuo and Intento working together, please register for our free December webinar or send an inquiry to firstname.lastname@example.org. More information is available at https://www.contentquo.com.
For further information, please contact:
Kirill Soloviev, Co-Founder & Head of Product, ContentQuo
Konstantin Savenkov, Co-Founder & CEO, Intento