Microsoft’s Christian Federmann on the Translation Quality of Large Language Models

SlatorPod #161 - Christian Federmann on Microsoft Translator

In this week’s SlatorPod, we are joined by Christian Federmann, Principal Research Manager at Microsoft, where he works on machine translation (MT) evaluation and language expansion.

Christian recounts his journey from working at the German Research Center for Artificial Intelligence under the guidance of AI pioneer Hans Uszkoreit to joining Microsoft and building out Microsoft Translator.

He shares how Microsoft Translator evolved from using statistical MT to neural MT and why they opted for the Marian framework.

Christian expands on Microsoft’s push into large language models (LLMs) and how his team is now experimenting with NMT and LLM machine translation systems. He then explores how LLMs translate and the role of various prompts in the process.

Christian discusses the key metrics historically and currently used to evaluate machine translation. He also unpacks the findings from a recent research paper he co-authored investigating the applicability of LLMs for automated assessment of translation quality.

Subscribe on YouTube, Apple Podcasts, Spotify, Google Podcasts, and elsewhere

Christian describes how Microsoft’s Custom Translator fine-tunes a user’s MT model on customer-specific data, improving in-domain quality at the cost of some general-domain performance.

He shares Microsoft’s approach to expanding language support, including the recent addition of 13 African languages. Collaboration with language communities is an integral step in improving the quality of the translation models.

To round off, Christian believes that the hype around LLMs may hit a wall within the next six months, as people realize the limitations of what they can achieve. However, in a year or two, there will be better solutions available, including LLM-enhanced machine translation.