November 20, 2017
MMT Brings Adaptive Neural MT to MateCat and to MyMemory Plugin for SDL Trados
The team behind the European Union-funded machine translation engine ModernMT (MMT) has announced that MMT is now available as a plugin for the translation productivity tools SDL Trados Studio and MateCat. This puts the productivity booster in the hands of tens of thousands of translators for the first time.
Marco Trombetti, CEO of Translated, one of the four organizations behind the project, said the first ever adaptive neural MT engine was released as free open source software on October 10, 2017, and that machine translation developers and enthusiasts have been using it for over a month.
Available via a pay-as-you-go subscription, the plugin allows users to combine the accuracy of their translation memories with the efficiency of neural machine translation.
The MMT team first unveiled the MateCat plugin at the School of Advanced Technologies for Translators (SATT2017) conference held in Trento, Italy in September 2017. The SDL Trados plugin had been made available to a select group of users earlier, in summer 2017.
“In MateCat, translators can enable MMT when creating a project. The engine will automatically adapt to the content being translated using any translation memories provided by the users and their translated segments,” Trombetti explained. “In SDL Trados Studio, it is part of the MyMemory Pro plugin and provides the same adaptation capabilities available in MateCat.”
The integration in MateCat and in the MyMemory Pro plugin does more than provide machine translation from MMT. It automatically selects the best engine for each translation: Google Translate (Neural or Statistical MT), DeepL (Neural MT), or MMT (Adaptive Neural MT).
“MMT’s unique neural adaptive solution significantly improves the consistency of MT output with respect to the TM content, including the terminology,” said Marcello Federico, Head of the Machine Translation research unit at Fondazione Bruno Kessler (FBK) and CEO of MMT.
Adapting on the Fly
A research paper authored by FBK researchers M. Amin Farajian, Marco Turchi, Matteo Negri, and Federico explained that this instance-based adaptive neural MT approach effectively handles translation requests from multiple domains without any supervision.
This means that for each segment that needs to be translated, MMT retrieves similar segments from all the available data and customizes the engine on the fly, in a fraction of a second. Translators need not specify a domain.
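The retrieval step behind this can be illustrated with a toy sketch. This is not ModernMT's actual code; the fuzzy-matching heuristic, the function name, and the sample translation memory below are invented purely to show the idea of ranking TM entries by similarity to the incoming segment before adapting on them:

```python
from difflib import SequenceMatcher

def retrieve_similar(source_segment, translation_memory, k=3):
    """Rank TM entries by surface similarity to the incoming segment
    and return the top-k as (score, tm_source, tm_target) tuples.
    A real system would use a faster index and richer similarity."""
    scored = []
    for tm_source, tm_target in translation_memory:
        score = SequenceMatcher(
            None, source_segment.lower(), tm_source.lower()
        ).ratio()
        scored.append((score, tm_source, tm_target))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:k]

# Hypothetical English-Italian translation memory for illustration.
tm = [
    ("Press the power button.", "Premere il pulsante di accensione."),
    ("The invoice is attached.", "La fattura è allegata."),
    ("Hold the power button for five seconds.",
     "Tenere premuto il pulsante di accensione per cinque secondi."),
]

# Retrieve the two most similar TM segments for a new source segment;
# an adaptive engine would then bias its decoding toward these examples.
matches = retrieve_similar("Press and hold the power button.", tm, k=2)
```

Because the lookup runs per segment, the engine sees fresh, relevant examples each time, which is what removes the need for translators to declare a domain up front.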
“For enterprise users, there is less complexity to work with as they can get rid of having to manage domains and to retrain the system whenever new data becomes available,” Federico told Slator in an earlier interview.
Aside from FBK and Translated, two other organizations were behind the creation of MMT — the University of Edinburgh (UEDIN) and the Netherlands-based language industry think tank TAUS.
The four institutions built MMT’s core technology and data infrastructure from the ground up and released the next-generation neural MT engine in the fourth quarter of 2017.