Embracing Disruption in the Language Services Industry

“Skate to where the puck is going, not to where it is” has always been good advice in ice hockey. It is remarkably apt for the language services industry as we prepare for the tsunami of disruption brought about by generative artificial intelligence (AI) and large language models (LLMs).

The pace of development has been frenetic since the release of OpenAI’s ChatGPT in November 2022. Each week brings a dizzying number of new products and startups built on LLM technology. It would take a brave forecaster to predict how the corporate landscape will look five years from now, but operating from where the puck is today risks asking the wrong questions and being left behind.

Content creation is in the bullseye for LLM use cases, and the big players are working hard to make LLM-generated content a reality. The big three cloud providers, Azure, Google Cloud, and AWS, are building LLMs into their corporate toolkits, while content creation software such as Microsoft 365, Adobe Creative Cloud, and Canva is incorporating AI assistants (Copilot, Firefly, and Magic Write, respectively), transforming how content is produced.

LLMs will enable a fundamental shift toward creating content directly in any desired language. Today, the language services industry receives content in a source language to be translated; in the near future, it is plausible that much of the content it receives will arrive directly in the target language(s), needing only review or validation. That would be a tectonic shift for this $60 billion industry.
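
To make the shift concrete, the sketch below shows what creating content directly in the target languages might look like: a single English brief is sent to an LLM with instructions to write natively for each locale, skipping the source-text-then-translate step entirely. This is purely illustrative; the model name, prompt wording, locales, and use of the OpenAI Python client are assumptions, not a prescription.

```python
# Illustrative sketch: author marketing copy natively in each target
# language from one English brief, rather than translating a finished
# English source text. Model, prompts, and locales are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BRIEF = "Announce our new reusable water bottle: lightweight, keeps drinks cold for 24 hours."
TARGET_LOCALES = ["de-DE", "ja-JP", "pt-BR"]  # hypothetical target markets


def create_in_language(brief: str, locale: str) -> str:
    """Ask the model to author the content directly in the target locale."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are a native-speaking copywriter for the {locale} market. "
                    "Write original copy in that language; do not translate literally."
                ),
            },
            {"role": "user", "content": brief},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    for locale in TARGET_LOCALES:
        print(f"--- {locale} ---\n{create_in_language(BRIEF, locale)}\n")
```

In this setup there is no English “source text” to hand off for translation at all; the brief itself is the only shared artifact across languages, which is exactly what makes downstream review so different.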

A further implication of the appeal of LLM-generated content is the likely explosion in overall content volume. In a recent Adobe survey of 2,600 customer service and marketing professionals, two-thirds of respondents expect demand for content to increase by five to 20 times over the next two years. If they are right, this presents a massive content validation challenge for authors and localization professionals.

So, the question becomes how to validate LLM-generated content to preserve the author’s intent when that intent is embodied in a prompt. Validating intent may also require a more expansive approach than pure translation. Samuel Bowman, a computer scientist at NYU, recently released a paper highlighting the current state of research on LLMs. He notes that foundation models’ inner workings are poorly understood and may produce unpredictable results that contain bias, confidential information, or outright falsehoods.
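
What might validating intent look like in practice? One plausible approach is an automated first pass that compares each target-language draft against the original brief and flags deviations, possible bias, or unsupported claims for a human linguist to resolve. The sketch below is an assumption-laden illustration of that idea, not an established workflow; the rubric, model name, and OpenAI client usage are all placeholders.

```python
# Illustrative sketch: automated first-pass review of a target-language
# draft against the author's original brief. A human linguist still makes
# the final call; the rubric, model name, and sample draft are assumptions.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REVIEW_RUBRIC = (
    "Compare the target-language draft with the author's brief. "
    "Answer in JSON with keys: 'faithful_to_intent' (bool), "
    "'possible_bias' (bool), 'unsupported_claims' (list of strings), "
    "'notes' (string)."
)


def first_pass_review(brief: str, locale: str, draft: str) -> dict:
    """Return a machine-readable review used to triage drafts for human review."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": REVIEW_RUBRIC},
            {
                "role": "user",
                "content": (
                    f"Brief (author intent): {brief}\n"
                    f"Locale: {locale}\n"
                    f"Draft: {draft}"
                ),
            },
        ],
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    brief = "Announce our new reusable water bottle: lightweight, keeps drinks cold for 24 hours."
    draft = "Unsere neue Trinkflasche: federleicht und hält Getränke 24 Stunden lang kalt."  # sample de-DE draft
    review = first_pass_review(brief, "de-DE", draft)
    if not review["faithful_to_intent"] or review["possible_bias"] or review["unsupported_claims"]:
        print("Escalate to a human linguist:", review["notes"])
    else:
        print("Passes first-pass checks; queue for human sign-off.")
```

The point of a sketch like this is triage, not replacement: it narrows thousands of generated drafts down to the ones that genuinely need a linguist’s judgment on intent, bias, and factual claims.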

To meet this evolving landscape, future content supply chains must employ new workflows and technology, adapting to concurrent, in-language creation rather than today’s linear source-to-target pathways. Redesigning those workflows, and the tools that support them, is where language service providers can rise to the challenge.

The language services industry has more translators, computational linguists, and natural language specialists than any other sector. We are uniquely positioned to help corporations navigate the impending disruption and realize its enormous potential benefits while protecting them from its risks. Doing so requires us to envision where the puck is going and resist the urge to stay anchored to the current state.