October 2023 is here, and pretty much everyone in the language services industry is familiar with the translation capabilities of large language models (LLMs). OpenAI’s ChatGPT is said to be capable of delivering good-quality machine translation (MT), but in reality its quality is inconsistent, as Slator established through a few tests that rendered subpar translations.
Some of the errors found in a bot-translated financial article included accuracy errors (e.g., bond yields rising when the source says they fall), incorrect terminology, and fluency problems.
We asked readers if they were using ChatGPT for translation, and the majority (80.6%) said no. Fewer than a tenth of respondents (8.6%) are using the bot for translation, and the rest are split evenly between those who plan to use it and those who tried it and stopped (5.4% each).
MT High-Grade Mimetization
Large language models have become ubiquitous, and quickly. But as the ChatGPT case above shows, this AI flavor of automated translation still needs improvement to perform consistently well. When it comes to rankings in Google Search, however, the bad manages to blend with the good.
Machine translation, whether from good old engines or AI models, is also ubiquitous, and is showing up in Google searches, at times with poor-quality raw MT ranked in the top results.
It seems Google has decided that if it can’t beat them, it will join them, and is changing its own rules about machine-generated content bit by bit.
An example of these subtle but significant policy shifts is the change in wording in the Search helpful content statement from “written by people, for people” to “helpful content created for people.” We asked readers if they had recently read machine-translated content without initially realizing it was MT, and less than a quarter (24%) said they had. The rest (76%) said they had not.
Translation’s Viral Moment
Does cutting down on time and cost also mean cutting down on quality when it comes to multilingual audiovisual production? The answer depends on who you ask.
Playing with AI to create different types of content is now the realm of regular folks: those who used to pay specialists to spend hours and hours creating multilingual “professional” videos and, at the other end of the spectrum, those who experiment with the technology and talk about it on social media.
“Testing out @HeyGen_Official translation on French and German. I don’t speak either language so let me know if it sounds natural if you do. I hope if you pay you can turn off the color correction. It didn’t work on my phone so I had to upload on my pc.” — Jon Finger (@mrjonfinger), September 11, 2023
That was the case for Jon Finger on X, whose post went viral partly because he admitted he could not tell whether his HeyGen-created video in French and German was any good, since he speaks neither language. Despite mixed reviews of the quality of the German translation, posts like Finger’s are often what ends up making a difference in the long-term viability of these technology providers. It is the era of AIAV.
Some solutions are high quality and still offer a trusted team of specialists to get it all done right, while others are simply easy and deliver “good enough” quality. We wanted to know what readers thought about this AIAV experimentation, specifically uploading a headshot video to create an avatar version of themselves. Nearly half (48.8%) thought it was creepy, a third of respondents (34.1%) said they would need to read the terms and conditions, and the rest (17.1%) embraced the idea and loved it.