April 10, 2018
Reader Polls: Denmark’s Big Bet, Microsoft’s Bold Claim, Accelerating M&A
‘Bigger or more’ has become the theme in the world of language services: bigger companies, more M&A deals, bigger RFPs, and ever bolder claims about machine translation quality. In March 2018, our polls were all about bigger or more (but not necessarily better).
In March, Slator reported four acquisitions. Notable ones include Semantix buying Amesto Translation, pushing its annual revenue well past USD 120m and further cementing its position as arguably the largest language service provider (LSP) in the Nordics. Across the Atlantic, ULG bought VIA, extending its private-equity-funded ‘roll-up’ strategy of acquiring LSPs operating in regulated industries and pushing its potential 2018 revenue past USD 100m.
At press time in early April, two frequent buyers had already announced deals. France’s Technicis bought TextMaster to acquire cloud technology and diversify its client base, just seven months after its last deal. In its latest annual report, UK-listed Keywords Studios announced three small acquisitions. With a total of 11 deals last year, Keywords held the cup for most active acquirer in our 2017 M&A report. It proudly declared a “healthy acquisition pipeline” for 2018 and has a EUR 105m credit line on standby.
So will 2018 see the language industry consolidate even faster and further? Most of our readers thought so: 67% said M&A activity will accelerate, while only 11% said it will slow down.
One Big vs Many Small
Denmark’s government is also going big in awarding language services contracts. Slator spotted a two-year, USD 86m tender issued by a collective of its public order and safety agencies. Governments aggregating demand like this to achieve efficiency and economies of scale may backfire, however, as the UK government’s experience with a massive interpreting contract tendered in a similar manner has shown.
Most Slator readers seem to agree. An overwhelming 76% thought that aggregating government demand and appointing a single vendor for language services wasn’t a smart idea. Only 3% said it was…
Perhaps governments should learn from the likes of Netflix, the online streaming giant that now has 19 companies doing localization work for it. Not content with relying on big media localizers alone, the company also built its own subtitling and translation test platform for freelancers looking to work for it. The experiment met with “rapid popularity and response” from all over the world. In 2017, Netflix had revenues of over USD 11bn and 110m subscribers worldwide.
Netflix’s demand for quality localization looks set to continue. Only 41% of our readers thought its offerings in their native language were excellent or good; 44% rated them poor or said they were simply not yet available in their language. With the online movie and TV streaming business growing fast, cross-cultural content consumption will ensure healthy demand for media localization. The media localizers Slator covers – BTI Studios, Zoo Digital, SDI Media, etc. – are all doing brisk business. In fact, the industry is growing so fast that talent shortage is one of its biggest problems.
Are you human?
Last but not least, bigger and bolder claims are being made by companies engaged in machine translation (MT) research. The latest salvo came from Microsoft, which published a research paper titled “Achieving Human Parity on Automatic Chinese to English News Translation”. While other media picked it up and repeated the impressive “human parity” tagline, we were a little more critical.
At the heart of the debate are two key points. First, Microsoft devised its own evaluation method to define and test for human parity; the claim would carry more weight if the results came from independent tests administered by non-commercial entities. Second, the Microsoft team used a specific dataset to train the model and arrive at the “human parity” results.
In data science, a big problem with machine learning is ‘overfitting’, where a model is tuned too closely to a particular dataset and use case. In short, an overfit engine performs well on the data it was built around but fails to generalize to new, real-world data or scenario variations. Given that the MT engine evaluated in this research is not yet in production use, the human parity claim has, at best, to be confined to an experimental, laboratory level for now.
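For readers less familiar with the concept, overfitting is easy to demonstrate on toy data. The sketch below (a generic illustration, not related to Microsoft’s actual MT system or evaluation) fits both a simple and an overly flexible model to a handful of noisy points: the flexible model memorizes the training data almost perfectly but does worse on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy linear relationship, y = 2x + noise
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.1, size=10)
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2 * x_test + rng.normal(0, 0.1, size=10)

# A degree-9 polynomial has enough parameters to pass through
# every one of the 10 training points -- it memorizes the noise.
overfit = np.polyfit(x_train, y_train, deg=9)
# A degree-1 fit matches the true underlying relationship.
simple = np.polyfit(x_train, y_train, deg=1)

def mse(coeffs, x, y):
    """Mean squared error of a polynomial fit on the given points."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

print("degree-9 train MSE:", mse(overfit, x_train, y_train))  # near zero
print("degree-9 test MSE: ", mse(overfit, x_test, y_test))    # notably larger
print("degree-1 test MSE: ", mse(simple, x_test, y_test))     # near the noise level
```

The analogy to MT evaluation: a system optimized and tested against one dataset can look far better on that dataset than it would on fresh, real-world input.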
Finally, some would argue that “human parity” is difficult to objectively measure for translation, which, as a professional activity, often requires a delicate balance between creativity and precision.
Slator readers would probably agree. 65% thought the use of the words “human parity” in an MT research paper headline was misleading. 20% thought it was OK if the claim was more narrowly defined subsequently. The remaining 15% were indifferent or invoked ‘freedom of speech’.