The Chartered Institute of Linguists (CIOL) is the latest organization of language professionals to take a public stance on AI’s impact — both potential and already observed — on the translation industry.
The European Council of Literary Translators’ Associations (CEATL, in French) published a statement on November 15, 2023, partly in response to ongoing negotiations over the legal terms of the proposed European AI Act. CEATL’s statement highlighted issues related to the commercial use of copyrighted material and transparency requirements for AI companies.
Stateside, the American Translators Association (ATA) issued its own statement on November 8, 2023, which seemed to aim for a balance between the goals of its corporate and individual members. While ATA acknowledged “automated translation” as a helpful tool for human translators, it also suggested “appropriate disclaimers” about the possible shortcomings of AI-generated translations, as well as oversight by consulting language professionals.
Unlike the statements by CEATL and ATA, CIOL’s white paper credited a half-dozen CIOL Council members, each of whom contributed a brief write-up under their own name, alongside two pages representing the “initial reflections” and “major concerns” on AI of CIOL as a whole.
Chair of CIOL Council Steve Doswell opined that the “buzz and hype” around the latest tech developments feels “familiar” — a sentiment with which CIOL Council member Vasiliki Prestidge agreed: “It feels like we’ve been living in the future for the past 5-10 years.”
Doswell believes “that language practitioners have a confirmed record of resilience and successful adaptation to technological advancements. We have learned to adjust to previous tech introductions and can apply those lessons as we navigate the world of AI.”
It is a world in which language professionals are already immersed. Prestidge described her experience translating scripts for television and, as a voiceover director, helping actors with pronunciation.
In the span of a few years, the work has moved from onsite to a virtual recording studio, where she now gives feedback on how AI “talks” — and, moreover, provides this feedback to an algorithm rather than to another human. Prestidge feels “certain” that the trend will continue moving translators away from strictly translating and toward linguistic consulting.
Mark Robinson, the owner of language services provider Alexika Ltd, added that the title “consultant linguist […] captures the expertise and added value we bring as specialists in languages and cultures.”
Man versus Machine
The systems behind GenAI, by contrast, “literally don’t know what they’re doing,” wrote CIOL Council member Emma Gledhill, who believes CIOL should be “proactive” when it comes to AI. To Gledhill, this includes educating buyers and the wider population on the risks and costs of AI, as well as promoting high-quality translation education and training for future language professionals.
CIOL CEO John Worne elaborated: “As we move forward the ability to ‘ask’ and ‘task’ AI will become an essential skill, and aptitude in and with languages will play a crucial role in this. Generative AI is now demonstrating to us all that languages are the ultimate human ‘meta skill’, not just a means of communication.”
Leaving machines to their own devices — often part of an effort to save money — made the list of CIOL’s top worries about AI, and dovetails with the higher error rates for low-resource languages and a potentially deepening digital divide among languages and communities.
“This makes it all the more important that AI isn’t used without human oversight in high stakes interpreting or translating — and this needs government, public services and regulatory attention,” the white paper stated, though CIOL stopped short of offering any specific solutions.
CIOL has partnered with University of Bristol researcher Lucas Nunes Vieira to study the use of MT in non-academic settings outside the language industry, and how MT might contribute to issues such as security threats and malpractice.
They expect to produce open datasets, a policy advisory report, and a book on the ethical implications of MT, and to communicate project results to policy-making and other influential bodies, including Parliament, think tanks, government agencies, and trade unions.