Natural Language Processing and Neural Machine Translation Figure Prominently in Amazon Research Awards

In mid-2017, Amazon opened the call for proposals for its third Amazon Research Awards (ARA). The ARA is one of Amazon's ways of supporting external research that could potentially be integrated into the tech giant's platform in the future. Offered in 11 categories, from machine learning to computer vision to multilingual language understanding, the 2017 ARA received over 800 submissions from research groups across North America and Europe.

On May 17, 2018, Amazon announced the 49 winning groups, whose projects will each be granted up to USD 80,000. According to the press release, the 49 scientific groups from 28 institutions will collectively receive USD 3.7m in research funding.

NLP and NMT Awardees

Among the 49 projects awarded, five were Natural Language Processing (NLP)-related topics and one was about Neural Machine Translation (NMT). The fact that language-related technologies are well represented among the winners is an indicator of the importance Amazon, and other large technology companies, place on such technologies.

Details of the submissions chosen by the ARA do not seem to be publicly available, but their authors’ previous work as well as prior research in the same subject matter can shed some light on the nature of their research.

For NLP, research included “Natural Language Processing for Literary Texts” by David Bamman, Assistant Professor at the University of California, Berkeley. Literary texts are a challenge for both NLP and NMT due to their creative use of language.

There were also a couple of submissions on branches of NLP such as question answering and recommenders, i.e. the systems that suggest the next movies you might want to watch based on your viewing history on Netflix. The former was titled “Question Answering and Reasoning about Product Reviews” by Hannaneh Hajishirzi, Research Assistant Professor and Adjunct Assistant Professor at the University of Washington. The latter was “Building Blocks for Natural Language Recommenders” by Max Harper, Research Scientist at the University of Minnesota’s Department of Computer Science and Engineering.

There was also NLP research on human-machine interaction using naturally worded commands, namely “Learning to Understand Natural Language Commands on Changing Websites” by Assistant Professor Percy Liang from Stanford University.

And finally, there was research into multilingual language understanding through a specific method of converting naturally worded language into a representation a machine can process: “Enabling Multilingual Language Understanding: Universal Typed Semantic Parsing” by Stanford University’s Christopher Manning, Thomas M. Siebel Professor in Machine Learning and Professor of Linguistics and Computer Science.

As for NMT, Senior Research Scientist and Assistant Research Professor Kevin Duh from Johns Hopkins University submitted “Multi-objective Hyperparameter Search for Fast and Accurate Neural Machine Translation.” The goal of this project is fast, accurate NMT, but Duh’s other work also tackles domain adaptation training and NMT for low-resource settings.

Amazon has been active in the NMT scene recently. Their latest big move was the official unveiling of Amazon Translate, during which they brought Lionbridge on stage with them. Even during its preview launch towards the end of 2017, Amazon proactively pitched Translate to language service providers.

Download the Slator 2019 Neural Machine Translation Report for the latest insights on the state of the art in neural machine translation and its deployment.