We Went to Google I/O to See What’s New With Google Translate

Google is pushing hard to make money in areas other than placing ads next to search results. At the company’s annual I/O developer conference, held May 17–19 in Mountain View, California, it showed off new machine-learning tools to power what Google wants to be: an “Artificial Intelligence first” company.

Even though Google Translate specifically did not get major shoutouts during the keynotes or dedicated developer sessions, the platform’s importance was woven through many of Google’s major and more modest reveals.

Going Big with Chips

One of the stars, unveiled with great fanfare to the assembled developers, who each forked over USD 900 to attend the sun-soaked event at the Shoreline Amphitheatre, was a custom AI chip built by Google specifically to power its cloud machine-learning architecture.

Called Cloud TPUs, the hardware had already been in testing at Google to accelerate its AI platform’s neural network models. TPU stands for Tensor Processing Unit, and the chips are Google’s answer to GPUs (graphics processing units), the high-powered processors heavily pushed by Nvidia and typically used to run neural machine translation.

Google will soon be renting out TPUs through its Cloud Platform, giving more developers access to machine learning and positioning itself at the forefront of more cognitively aware computing.

But for Google itself, the heavy investment may pay off in how translation can be used in its products. For example, the company unveiled a new feature called Google Lens, which infuses a smartphone camera with artificial intelligence.

Demos included pointing the phone at a sign in another language, which rendered an instant translation. This feature is not new; it has existed in Google Goggles for years. But the additional capabilities in Lens, such as image recognition, point to Google seeking wider use and adoption.

When introducing Google Lens, Google CEO Sundar Pichai said that training its network to achieve better recognition of language and objects is critical to its plans.

“All of Google was built because we started understanding text and web pages. So the fact that computers can understand images and videos has profound implications for our core mission,” he said.

There were several developer sessions devoted to TensorFlow, the programming platform that Google and third-party developers use to build advanced machine learning models. The key takeaway for Google Translate is that the use of more complex neural networks significantly expands what the models can compute and learn.
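At bottom, the models TensorFlow expresses are chains of weighted-sum operations passed through nonlinear functions. As a purely illustrative sketch (plain Python rather than the actual TensorFlow API, with made-up toy numbers), a single fully connected layer of a neural network computes something like:

```python
import math

def dense_layer(inputs, weights, biases):
    """One fully connected layer: tanh(W.x + b) for each output unit.

    inputs:  list of floats (the layer's input vector)
    weights: one list of floats per output unit
    biases:  one float per output unit
    """
    outputs = []
    for row, b in zip(weights, biases):
        # Weighted sum of the inputs plus a bias term,
        # squashed into (-1, 1) by the tanh nonlinearity.
        z = sum(w * x for w, x in zip(row, inputs)) + b
        outputs.append(math.tanh(z))
    return outputs

# A toy layer with 2 inputs and 2 output units.
result = dense_layer([1.0, 0.5], [[0.2, -0.1], [0.4, 0.3]], [0.0, 0.1])
print(result)
```

Stacking many such layers, each with far more units, is what makes the networks “deeper” and more complex; TensorFlow’s job is to run these operations efficiently across CPUs, GPUs, and TPUs.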

Google’s Brennan Saeta and two other presenters at one of the sessions focused mostly on the technical details of TensorFlow. The session took place in one of the giant, temporary domes that Google trucked into the parking lot of the Shoreline Amphitheatre.

“We want to switch the research mindset from scarcity to abundance,” he said.

Asked by Slator, a Google spokesperson offered additional details about the growth of Google Translate but would not share specifics beyond recent research on the transition to neural translation. According to Google, the shift from the previous model to neural translation took nine months instead of the anticipated three years, and now covers 41 language pairs.

While the immediate buzz around the NMT switch has since abated, machine translation remains a core technology for the search giant as it realigns its business toward artificial intelligence-powered services.