Large Language Models Beat Commercial MT Models for Arabic Dialects, Research Finds
New research shows that large language models are better translators of Arabic dialects than commercial machine translation systems but remain far from perfect.
Researchers from Shanghai Jiao Tong University and Tencent AI Lab introduce a method to improve word-level auto-completion in machine translation, with experimental results showing notable gains.
New research reveals the best-performing machine translation evaluation metrics, identifies major challenges in metrics development, and suggests improvements.
A study introduces an approach to streamline translation between related languages, with the goal of improving trade efficiency and strengthening social connections in regions where such languages are spoken.
Microsoft Azure AI researchers explore the potential of large language models for automatic post-editing and find that LLMs are good but not great at it.
Brown University researchers reveal an issue with AI safety mechanisms in large language models involving low-resource languages.
A study demonstrates the ability of large language models to remove noise from datasets and underscores their potential for data cleaning.
Carnegie Mellon University researchers explore LLM effectiveness across 204 languages, revealing output limitations for low-resource languages.
At SlatorCon Zurich, Dr. Sheila Castilho emphasizes the significance of contextual evaluation in assessing large language models and the need for a more rigorous evaluation approach.
Panelists from ServiceNow, LanguageWire, and Busch Vacuum Solutions weighed the risks and opportunities of employing large language models in practical enterprise localization workflows.
The World Intellectual Property Organization unveiled its in-house solution designed to generate conference meeting transcripts and machine translations.
Monash University researchers show that large language models can do real-time machine translation and propose new ways for model fine-tuning.
Google created a new dataset for machine translation and multilingual NLP tasks across 400 languages and released a high-performing multilingual MT model trained on this data.
DeepMind introduces a new method to improve the quality of large language models, using machine translation as a use case to demonstrate the approach's effectiveness.
Language AI researchers show that fine-tuning large language models with fine-grained human judgment data boosts machine translation evaluation.
Logrus Global and the University of Manchester showcase that fine-tuning LLMs on historical post-editing data can pinpoint segments that require editing and predict translation quality.
KAIST and Google DeepMind researchers propose training a single model using Unit-to-Unit Translation to achieve seamless cross-language communication.
ADAPT researchers introduce adaptNMT, an innovative open-source application designed to simplify the development and deployment of machine translation models.
MATEO aims to open up machine translation evaluation, making it accessible to more stakeholders and facilitating research, education, and critical evaluation of machine translation.
Boğaziçi University researchers reveal the potential of customizing MT systems to replicate a translator's unique style in literary translation.