White House Report Puts Translation Under Narrow AI, Calls Progress Remarkable

Among the most recent developments in artificial intelligence (AI) is a US federal government report released October 12, 2016. The paper, prepared by an Obama administration think tank, has the US government weighing in on the current state of AI: While there is “remarkable progress” in Narrow AI—it mentions language translation as a specific application along with playing strategic games and self-driving vehicles—General AI “will not be achieved for at least decades.”

Also called AGI (Artificial General Intelligence), General AI is defined as a system that can exhibit intelligent behavior “at least as advanced as a person across the full range of cognitive tasks.” Or, as Google would put it apropos language translation, “nearly indistinguishable from human translation.”

In the report, government experts, who peg the arrival date for AGI as ranging “from 2030 to centuries from now,” pointed to the “long history of excessive optimism” regarding AI. A footnote mentioned that early predictions on automated language translation were “wildly optimistic,” as the technology is only now becoming usable “and by no means fully fluent.” (It also noted that AI pioneer Herb Simon’s 1957 prediction that computers would beat humans at chess within a decade came true only some 40 years later.)

The report referred to the “broad chasm” that divides Narrow AI from AGI, noting that attempts to reach AGI “by expanding Narrow AI solutions have made little headway over many decades of research.”

The belief that AGI is still decades away is shared not only by the think tank (actually, the National Science and Technology Council’s Subcommittee on Machine Learning and Artificial Intelligence) but, according to the report, by the private-sector expert community as well.

Early predictions about automated language translation proved wildly optimistic, the technology only becoming usable (and by no means fully fluent) in the last several years—US Machine Learning and AI Subcommittee

In its consultations with the government, this expert community cited a number of challenges in AI research, the most commonly mentioned being transfer learning. The aim of transfer learning is to create a machine learning algorithm that “can be broadly applied (or transferred) to a range of new applications.”

As an example, experts said a system could be trained to translate from English into Spanish in such a way that it can transfer its knowledge to translating from Chinese into French, allowing it to learn the new task more quickly.
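To make the idea concrete, here is a minimal sketch of that kind of transfer in PyTorch: a toy encoder-decoder is “pretrained” on stand-in English-Spanish data, and its weights are then copied into a second model as the starting point for a Chinese-French task. The architecture, sizes, and helper names are assumptions made for illustration, not the setup of any real system described in the report.

```python
# A minimal sketch of the transfer-learning idea described above, using PyTorch
# and random stand-in data. Everything here is a toy placeholder.
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 64  # toy shared subword vocabulary and hidden size

class TinyTranslator(nn.Module):
    """An encoder-decoder kept deliberately small; only the wiring matters here."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.encoder = nn.GRU(DIM, DIM, batch_first=True)
        self.decoder = nn.GRU(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, src, tgt_in):
        _, state = self.encoder(self.embed(src))         # encode the source sentence
        dec, _ = self.decoder(self.embed(tgt_in), state)  # decode conditioned on it
        return self.out(dec)                              # logits over the target vocab

def train_step(model, src, tgt, opt, loss_fn):
    opt.zero_grad()
    logits = model(src, tgt[:, :-1])                      # teacher forcing on toy data
    loss = loss_fn(logits.reshape(-1, VOCAB), tgt[:, 1:].reshape(-1))
    loss.backward()
    opt.step()
    return loss.item()

# 1) "Pretrain" on random tensors standing in for English-Spanish sentence pairs.
en_es = TinyTranslator()
opt = torch.optim.Adam(en_es.parameters())
loss_fn = nn.CrossEntropyLoss()
src = torch.randint(0, VOCAB, (8, 12))
tgt = torch.randint(0, VOCAB, (8, 12))
for _ in range(3):
    train_step(en_es, src, tgt, opt, loss_fn)

# 2) Transfer: initialize the Chinese-French model from the learned weights
#    instead of from scratch, so fine-tuning should need fewer steps to converge.
zh_fr = TinyTranslator()
zh_fr.load_state_dict(en_es.state_dict())
opt_zh_fr = torch.optim.Adam(zh_fr.parameters())
```

In a real setting the two language pairs would need a shared (or at least compatible) subword vocabulary; the point here is only that the second model does not start from zero.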

This already appears to be happening with recent progress in neural machine translation, which is purely data-driven and requires no domain or linguistic knowledge, to the point of being fully language-agnostic.
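As a rough illustration of what “purely data-driven” means in practice, the snippet below treats translation data as nothing more than paired token sequences, with the desired output language expressed as one extra token, a common trick in multilingual NMT. The sentences and token scheme are invented for this example.

```python
# A rough sketch of "data-driven" and "language-agnostic": training examples are
# just paired token sequences, and the target language is itself only a token.
# The sentences and the <2xx> token scheme below are invented for illustration.
corpus = [
    ("<2es>", "the report was released in october", "el informe se publicó en octubre"),
    ("<2fr>", "the report was released in october", "le rapport a été publié en octobre"),
    ("<2es>", "progress has been remarkable",       "el progreso ha sido notable"),
]

def to_training_example(lang_token, source, target):
    # No grammar rules or dictionaries anywhere: sequences in, sequences out,
    # with the choice of output language carried by one more input token.
    return [lang_token] + source.split(), target.split()

for example in corpus:
    src_tokens, tgt_tokens = to_training_example(*example)
    print(src_tokens, "->", tgt_tokens)
```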

Judging by the report, what government experts appear most optimistic about is deep learning. It is in deep learning, they said, that “some of the most impressive advancements in machine learning” have occurred in recent years.

“Deep learning uses structures loosely inspired by the human brain,” the paper explained, describing how “deep learning networks typically use many layers—sometimes more than 100—and often use a large number of units at each layer, to enable the recognition of extremely complex, precise patterns in data.”
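That “many layers” description translates almost directly into code. Below is a minimal sketch in PyTorch of a network stacked 100 layers deep; the depth, width, and task are arbitrary placeholders, and a practical network this deep would typically also need techniques such as residual connections to train well.

```python
# A minimal sketch of the report's "many layers, sometimes more than 100"
# description, in PyTorch. Depth, width, and input/output sizes are arbitrary
# placeholders chosen only to show the structure.
import torch
import torch.nn as nn

def deep_network(depth=100, width=256, n_inputs=32, n_classes=10):
    layers = [nn.Linear(n_inputs, width), nn.ReLU()]
    for _ in range(depth - 2):
        layers += [nn.Linear(width, width), nn.ReLU()]   # one more hidden layer
    layers.append(nn.Linear(width, n_classes))
    return nn.Sequential(*layers)

model = deep_network()
x = torch.randn(4, 32)                                   # a small batch of fake inputs
print(model(x).shape)                                    # torch.Size([4, 10])
print(sum(p.numel() for p in model.parameters()))        # millions of trainable weights
```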

The think tank experts further said that, in recent years, “new theories of how to construct and train deep networks have emerged, as have larger, faster computer systems, enabling the use of much larger deep learning networks.”

They said “the dramatic success of these very large networks at many machine learning tasks” has surprised experts, and credited it as the main reason behind “the current wave of enthusiasm for machine learning” in the AI community.