Facebook Says Statistical Machine Translation Has Reached End of Life

There is a disconnect between the languages people speak and the content they want to connect to on the Internet. That, at least, is the world according to Facebook, and it is what spurred the social network to develop its own neural network-based machine translation (MT) system.

“Of the 1.6 billion people who actively use Facebook, more than half…don’t speak English at all. Most of them don’t speak each other’s language,” said Alan Packer, Engineering Director and head of the Language Technology team at Facebook.

“Yet a slight majority of the content is in English and…popular, viral content or professionally produced content is skewed toward English,” Packer added, speaking in May at the MIT Technology Review-hosted EmTech Digital conference in San Francisco. Around the same time, we published a story on Google’s neural MT patent application, where we also mentioned Packer’s comments.

In that same story, language industry experts concurred with what Packer had to say next: “We believe, along with most of the research and academic community…that the current approach of statistical, phrase-based MT has kind of reached the end of its natural life.”

Although he conceded that statistical MT is capable of producing technically accurate translations, “they don’t sound like they came from a human. They’re not natural, they don’t flow well,” Packer pointed out.

Packer, whose two-year-old Language Technology team has been working on MT, speech recognition, and natural language understanding, said they are part of the applied machine learning team whose goal is to take AI (artificial intelligence) and apply it, at scale, to Facebook products.

What Facebook has built and deployed, so far, according to Alan Packer, Director of Engineering for Language Tech at Facebook (Source: Alan Packer / TechnologyReview.com)

But why is Facebook building this themselves? Why not just license or use open source? (Packer disclosed to the audience that he asked Facebook this very thing when they first approached him about the job.)

Scale is one reason Facebook has invested in its own MT technology. According to Packer, Facebook holds more than two trillion posts and comments, a figure that grows by over a billion each day. “Pretty clearly, we’re not going to solve this problem with a roomful or even a building-full of human translators,” he quipped, adding that to have even “a hope of solving this problem, we need AI; we need automation.”

The other reason is adaptability. “We tried that,” said Packer about using third-party MT, but it “did not work well enough for our needs.” The reason? The language of Facebook is different from what is on the rest of the Web.

Packer described Facebook language as “extremely informal. It’s full of slang, it’s very regional.” He said it is also laden with metaphors, idiomatic expressions, and is riddled with misspellings (most of them intentional). Additionally, as in the rest of the world, there is a marked difference in the way different age groups communicate on Facebook.

Neural network-based MT can, rather than do a literal translation, find the cultural equivalent in another language―Alan Packer, Engineering Director at Facebook

He explained that existing MT systems are trained using mostly academic data sets and data mined from the Internet “by looking for parallel corpora,” that is, “the same document in multiple languages on the Web.”

But this parallel data tends to come from places like government documents, conference proceedings, and user manuals. He laughed: “It’s great that I can find my dishwasher manual online so I can figure out how to get the lemon seeds out of the ‘spinny’ thing. [But] it turns out the language that’s in that dishwasher manual has very little to do with the language people are using to talk to each other on Facebook.”
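To make the idea concrete: a parallel corpus is nothing more than aligned sentence pairs, the same content in two languages, typically stored one sentence per line in two matched files. The sentence pairs below are invented examples of the formal, manual-style text Packer describes, and the helper function is a minimal sketch of the aligned-file convention, not any particular toolkit's format.

```python
# A parallel corpus is aligned sentence pairs: the same content in two
# languages. These pairs are invented illustrations of the formal,
# manual-style text typically mined from the Web -- note how little it
# resembles the slang-heavy language of Facebook posts.
parallel_corpus = [
    ("Remove the filter and rinse it under running water.",
     "Retirez le filtre et rincez-le à l'eau courante."),
    ("The committee adopted the resolution unanimously.",
     "Le comité a adopté la résolution à l'unanimité."),
]

def to_aligned_lines(pairs):
    """Render pairs as two aligned blocks of text, one sentence per
    line, where line i of the source block translates line i of the
    target block -- the usual layout for MT training data."""
    src = "\n".join(en for en, fr in pairs)
    tgt = "\n".join(fr for en, fr in pairs)
    return src, tgt

src_text, tgt_text = to_aligned_lines(parallel_corpus)
print(src_text)
print(tgt_text)
```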

Packer said they believe neural networks can learn “the underlying semantic meaning of the language,” so what is produced are translations “that sound more like they came from a person.” He said neural network-based MT can also learn idiomatic expressions and metaphors, and “rather than do a literal translation, find the cultural equivalent in another language.”
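The contrast Packer draws between literal and cultural-equivalent translation can be sketched with a toy example. The phrase table and idiom table below are hand-written inventions purely for illustration; real systems, statistical or neural, learn such mappings from large corpora rather than from lookup tables.

```python
# Toy contrast between word-by-word translation and idiom-aware
# translation. All tables are invented for illustration only.

# Word-by-word mapping, standing in for a naive phrase-based system.
PHRASE_TABLE = {
    "il": "it", "pleut": "rains", "des": "some", "cordes": "ropes",
}

# Whole-expression mapping, standing in for what a neural model can
# learn: the cultural equivalent rather than the literal rendering.
IDIOM_TABLE = {
    "il pleut des cordes": "it's raining cats and dogs",
}

def literal_translate(sentence: str) -> str:
    """Translate each word independently, leaving unknown words as-is."""
    return " ".join(PHRASE_TABLE.get(w, w) for w in sentence.split())

def idiomatic_translate(sentence: str) -> str:
    """Prefer a whole-expression idiom match; fall back to literal."""
    return IDIOM_TABLE.get(sentence, literal_translate(sentence))

print(literal_translate("il pleut des cordes"))    # it rains some ropes
print(idiomatic_translate("il pleut des cordes"))  # it's raining cats and dogs
```

The French idiom for heavy rain comes out nonsensical when translated word by word, but natural when the expression is matched as a unit, which is the behavior Packer says neural models can learn.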

Packer said early results have been very promising, and the goal is to start deploying these neural network systems by the end of 2016.

We previously reached out to Alan Packer and Facebook for comment on the presentation but have yet to receive a reply.

Marion Marking

Communications specialist, veteran journalist, and online editor at Slator who dreams of driving a Veyron on the Autobahn