Can Large Language Models Do Simultaneous Machine Translation?
Monash University researchers show that large language models can perform real-time machine translation and propose new approaches to model fine-tuning.
G/O Media fires Gizmodo’s entire staff of Spanish writers and begins publishing raw machine-translated versions of English articles that are now being indexed by Google Search.
Apostroph’s apoLAB engineers explain the potential of custom language models for simplifying workflows and discuss better machine learning strategies and data security.
Q3 2023 is a time of contrasts, with highs on the technology and professional sides of the language services industry, and lows on the financial side for publicly traded LSPs.
Boğaziçi University researchers reveal the potential of customizing MT systems to replicate a translator's style, producing literary translations that mirror the translator's unique voice.
A paper discussing the use of digital tools in academia addresses machine translation as both a problematic and beneficial tool, proposing increased awareness and clear policies for its use.
The Post-Edit Me! project supports educators in developing innovative post-editing training practices, fostering enhanced post-editing skills for the future of the industry.
MT gets institutionalized; AI is artificial but AI fatigue and FOMOAI are real; interpreters are supported in their right to strike; translator training is revisited; tech expertise is not a priority.
The Imminent Report 2023 is a compendium of “Word Wide Wisdom,” captured in thought-provoking articles from authors as diverse as astrophysicists and language technologists.
Speakers at the ATA MT and AI virtual conference converged on a single message, urging attendees to embrace the changes AI brings to the way translators and interpreters work.
Researchers from Johns Hopkins University explore the role of domain and local coherence in in-context machine translation, revealing improvements in quality.
A group of researchers found that large language models produce hallucinations when machine translating ‘in the wild’ that differ from those of traditional models.
Researchers test BLOOM’s capabilities for producing good quality machine translation. They find that training the large language model makes a big difference for all language pairs.
Google launches a new dataset and benchmark to address the lack of region-awareness in machine translation (MT) systems and support under-resourced dialects.
Meta AI researchers experimented with dictionary data prompting on well-known LLMs to improve MT. Results look promising for rare words and domain transfer.