Is Google’s Cloud-Based Natural Language Processing a Breakthrough?

Google recently unveiled its Cloud Natural Language API, a cloud-based natural language processing service. The launch was announced on July 20, 2016, on the search giant's blog.

The service comprises entity recognition (e.g., categorizing words as names, locations, expressions), sentiment analysis (e.g., categorizing opinions as positive, negative, or neutral), and syntax analysis. We covered what Google was doing in the area of syntactic parsing back in May.
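For readers curious what calling such a service looks like, below is a minimal sketch of how a request body for the entity-recognition endpoint might be constructed. The endpoint URL, field names, and authentication are assumptions based on Google's REST conventions at the time, not taken from the article; consult Google's current documentation before relying on any of them.

```python
import json

# Assumed endpoint path; the API version and auth method are placeholders.
ENDPOINT = "https://language.googleapis.com/v1/documents:analyzeEntities"

def build_request(text):
    """Build a plausible JSON body for a plain-text entity-analysis request."""
    return {
        "document": {
            "type": "PLAIN_TEXT",   # could also be HTML per the docs
            "content": text,
        },
        "encodingType": "UTF8",     # governs how character offsets are counted
    }

body = build_request("Google unveiled the Cloud Natural Language API in 2016.")
print(json.dumps(body, indent=2))
```

In practice the body would be POSTed to the endpoint with an API key or OAuth credentials; the response lists each detected entity with a type (such as PERSON or LOCATION) and a salience score.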


According to Marco Varone, President and CTO of Italy-based linguistic technology company Expert System, “What Google is offering is not new. While they do a good job of covering Wikipedia entities, entity recognition has been around for many years.”

NLP industry analyst Seth Grimes concurs. “Google is no pioneer in introducing a new natural language processing Web service. Dozens are available, from companies ranging from IBM to innovative specialists such as Aylien, Lexalytics, MeaningCloud, and SpazioDati,” Grimes told Slator.

However, what sets Google’s new offering apart, Grimes said, is its Syntax Analysis API. It is “exposing a dependency parser that unlocks fact extraction and the ability to identify entity relationships and attributes; in essence, the ability to extract not just mentions, but also meaning,” Grimes explained.

Varone, on the other hand, said Google’s syntax analysis is comparable to existing “top open source offerings” with similar tools, adding that “speed will be an issue if it is based on SyntaxNet.”

Meanwhile, Jochen Hummel, ESTeam CEO and LT-Innovate Chairman, said, “As one would expect from Google, the Natural Language API is well engineered, straightforward to use, and pretty powerful.”

But, Hummel said, “Google plays the infamous MT card and states that it can be used in ‘multiple languages by translating text first with Translate API.’ I am not sure this is a good idea.”

The new Google service is presently limited to English, Spanish, and Japanese. As LT-Innovate Chairman, Hummel is a staunch advocate of a European Language Cloud that provides basic NLP through an API for all languages, as reported by Slator in June.

Expert System CTO Varone also noted that Google’s “sentiment analysis tool shares the same inaccuracies of the average tools on the market.”

Varone added the Internet giant “continues to be a follower in many areas,” describing this latest mission of Alphabet/Google as “clearly another step in the same direction of monetizing existing applications rather than creating something new.”

Google set the threshold to begin charging for the service at 5,000 units analyzed per month, a unit being equivalent to a so-called text record of up to 1,000 Unicode characters.

Google will charge users USD 1 per 1,000 text records in excess of the 5,000-unit, free-use threshold for entity recognition and sentiment analysis, and USD 0.50 per 1,000 parsed units. The price drops by 50% at one million units and by another 50% at five million units.
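Taking the article's figures at face value, the monthly cost can be sketched as a tiered calculation. One assumption to flag: the sketch applies the volume discounts marginally (each tier's rate only to the units falling in that tier), which the article does not specify and which may differ from Google's actual published terms.

```python
def monthly_cost(units, rate_per_1000=1.00):
    """Estimate the monthly cost in USD for one analysis type.

    Per the article: the first 5,000 units are free, then rate_per_1000
    applies per 1,000 text records (USD 1 for entity recognition and
    sentiment analysis, USD 0.50 for syntax parsing), halved above one
    million units and halved again above five million. Marginal tier
    application is an assumption, not Google's published terms.
    """
    tiers = [
        (5_000, 0.0),                       # free-use threshold
        (1_000_000, rate_per_1000),         # base rate up to 1M units
        (5_000_000, rate_per_1000 / 2),     # 50% off from 1M to 5M units
        (float("inf"), rate_per_1000 / 4),  # another 50% off beyond 5M
    ]
    cost, prev_cap = 0.0, 0
    for cap, rate in tiers:
        in_tier = max(0, min(units, cap) - prev_cap)
        cost += in_tier / 1000 * rate
        prev_cap = cap
        if units <= cap:
            break
    return cost

# 100,000 units of entity recognition: 95,000 billable at USD 1 per 1,000
print(monthly_cost(100_000))
# The same volume of syntax parsing at USD 0.50 per 1,000
print(monthly_cost(100_000, 0.50))
```

Under these assumptions, 100,000 units of entity recognition would cost USD 95, and the same volume of syntax parsing USD 47.50.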

While the pricing model can apply to many uses, said Hummel, it is “very limited on the language dimension,” as is often the case with NLP.

“Germans, Brazilians, Czechs, and billions of others express sentiments as well. But to support lesser spoken languages is, even for the Internet giant, obviously too expensive,” Hummel said.

Marion Marking

Communications specialist, veteran journalist, and online editor at Slator who dreams of driving a Veyron on the Autobahn