Should You Watermark AI-Generated Content?

Smartling CEO Bryan Murphy shared many interesting points (and quite a bit of wisdom about technology) as a guest on Slator on October 27, 2023. With his broad business background, Murphy offered particularly insightful comments about conversations with C-level decision-makers. He mentioned that although there has been a disconnect, things are beginning to change.

One of those insights is that digital footprint and conversion rates really matter to these executives, and the conversation is shifting to how translation and localization can help improve those numbers. Although the impact of AI on the industry and its transformative potential dominate conversations, buyers of translation services are still concerned with whether AI and other technologies can actually make a positive impact on fundamental business factors like cost and quality.

We asked readers if their conversations with clients had changed much in 2023, and a plurality (43.1%) acknowledged that it is all about AI now. For a little over a third (36.9%) of readers, conversations still revolve mostly around the same issues. For the rest (20.0%), the topics of discussion have changed somewhat.

Watermark Legislation

Many governmental institutions and committees of all kinds are discussing how to legislate and regulate what cannot be contained: AI. Laws and rules may become the foundation of generally accepted (though also widely ignored or outright rejected) guidelines for the ethical and safe use of AI.

In the United States, President Joe Biden issued an Executive Order (a mandate that requires neither congressional nor state legislature approval) in October 2023 outlining new standards for AI-generated content. The Executive Order is a broad rule covering many applications of AI, including standards and guidelines for identifying and labeling synthetic content, such as watermarking synthetic content created for the US government.

We asked readers what they thought about watermarking AI-generated content, and the majority (54.4%) said it is a great idea. A little over a third of respondents (35.1%) think that if it's possible, why not use it? To a small group (7.0%), watermarking seems excessive, and the rest (3.5%) do not approve at all ("no way").

BIG or small

The majority of language service providers (LSPs) in the US are small businesses (94%), now that the Small Business Administration includes LSPs with annual revenues under USD 22.5m in that category, up from the previous USD 8m threshold.

That the language industry is indeed a conglomerate of small businesses and a few super agencies does not seem to affect how demand for localization of all content types is met. Multiple Language Vendors (MLVs) work with single language vendors, other LSPs, and technology providers of all sizes to create a "localization neural network" that is ultimately capable of meeting demand. And, like any business wanting to succeed, LSPs focus on growth and/or on making themselves attractive acquisition targets.

We asked readers if they prefer working with many small clients or a few big ones, and most (53.0%) answered that it doesn't matter as long as the business grows. Close to a third (29.4%) prefer a few big clients, and the rest (17.6%) prefer many small ones.

My Voice: Whose Property?

German member of parliament Joana Cotar posted on X about one of her speeches being AI-lip-sync-translated, something she was not opposed to and even described as "wonderful." On the opposite end of the spectrum are artists trying to protect their voices from AI exploitation without compensation, one of the key points negotiated in the contracts that ended the 2023 Hollywood strikes.

Writers, of course, are also up in arms about their intellectual property being used to train the very models that can now generate video scripts so easily. Many concerns are being raised (and lawsuits filed) around AI synthesis of people's voices, likenesses, and works.

We asked readers what they predict for the future of third-party processing of voice and video available on the internet, and a plurality (45.7%) think the genie is already out of the bottle. Less than a quarter (22.9%) believe tough regulation is coming. A small group (17.1%) thinks some light-touch regulation is coming, and the rest of the respondents (14.3%) think the practice will be mostly banned without explicit consent.

NO to Translatoids

The American Translators Association (ATA) and the European Council of Literary Translators' Associations (CEATL) published statements outlining their official positions on AI and the language industry in November 2023. The ATA statement reads more like a declaration that translators and interpreters are ahead of the curve, having already had to adapt to changes in language technology, and that they are essential for guaranteeing better outcomes.

The CEATL statement focused on the views and demands of its member organizations regarding AI in the publishing industry. The federation, whose associations also represent about 10,000 individual literary translators in 26 European countries, urged members to demand opt-in clauses for the use of copyrighted materials in AI training. It also declared that "machines are not translators but 'translatoids'."

We asked readers whether they agree with the ATA or the CEATL statement. A plurality (42.9%) agree with CEATL; less than a quarter (22.9%) agree with the ATA, and 20.0% agree with both. For the rest (14.2%), the statements were "too long, didn't read."