In its US patent application entitled “Neural Machine Translation Systems With Rare Word Processing,” filed in October 2015 and published on April 28, 2016, Google applied for exclusive rights to a neural MT system whose claims are made specific by how that system is implemented.
Google’s patent application defines a neural MT system as “one that includes any neural network that maps a source natural language sentence in one natural language to a target sentence in a different natural language.” The application revolves mainly around the “how” of its implementation, however, which accounts for 16 of its 22 claims (i.e., the claims that begin with the words “The method”).
We previously covered Google’s work in deep learning, from which neural machine translation derives and which, of course, differs from current statistical MT models.
“The application of deep neural networks for machine translation is a leap forward in bridging the quality gap between morphologically rich and complex languages,” Tony O’Dowd, Founder and Chief Architect of cloud-based MT provider KantanMT, told Slator.
“This has the potential to impact quality for such languages as German, Russian, and Finnish that, to date, have proven challenging for more traditional phrase-based approaches,” O’Dowd explained.
“I hope they patented it to defend NMT from lucrative patent trollers, leaving researchers and industry players free to use it”―Marco Trombetti, Translated Founder and CEO
So great is the potential impact that, according to Daniel Marcu, Founder of FairTradeTranslation.com and former SDL Chief Science Officer, most researchers are now moving “from advancing the state of the art in a traditional, statistical MT framework to a neural MT framework.”
Marcu told us that although the best translation results today are the outputs of “statistical engines that incorporate neural components, pure neural MT engines are not far from producing state-of-the-art results. Within a few years, neural MT will become the overwhelmingly dominant paradigm.”
Google is not alone in big tech’s shift to neural MT. Speaking at an MIT Technology Review conference, Facebook Engineering Director Alan Packer noted that neural networks can produce more natural-sounding translations than statistical machine translation.
As for the Google patent application, “Google has been contributing enormously to the evolution of NLP (natural language processing),” said Marco Trombetti, Founder and CEO of Rome-based translation agency Translated, adding, “I hope they patented it to defend NMT (neural MT) from lucrative patent trollers, leaving researchers and industry players free to use it.”
“Getting statistical MT to work required hundreds of innovations. Neural MT is no different”―Daniel Marcu, Language Weaver co-Founder
Trombetti showed us a forum where one comment read, “I think just about any big company has lawyers saying ‘patent everything you can, since you need a big patent portfolio for defensive purposes.’ Point is, just because Google is filing this patent doesn’t mean they intend to stop others from using this approach. A link to a patent application isn’t enough context to know.”
“I am sure the industry will love to hear a public statement from Google that clarifies that,” Trombetti quipped.
Asked about their patent application, a source at Google told us they typically do not comment on patents.
Yet another industry expert to whom Slator reached out concurred with the forum poster, explaining that patents do not generally limit what the rest of the world can do in a field and pointing out that even current statistical MT approaches have been patented.
As Language Weaver co-Founder Marcu noted, “There are hundreds of statistical-MT-related patents. Getting statistical MT to work required hundreds of innovations. Neural MT is no different. It will require hundreds of innovations and patents to get MT to the next level of accuracy.”
Editor’s Note: Hat tip to Translated’s Marco Trombetti for bringing this topic to our attention.