While many major conferences in Hong Kong were being cancelled due to the ongoing protests, academia’s natural language processing (NLP) community gathered in the city for the latest instalment of the world’s largest NLP conference, EMNLP.
EMNLP, short for Empirical Methods in Natural Language Processing, is an annual conference that hosts researchers from across the globe. Attendees and presenters gather to learn about the latest developments spanning the breadth of NLP, including machine translation.
Along with the regular cohort of researchers from top academic institutions, the conference also attracts significant attention from big tech — Google, Facebook, Baidu, Apple, Salesforce, eBay, Cisco and Amazon were all among the conference’s sponsors. For them, EMNLP is not only an opportunity to share their own latest NLP research findings, but also to actively scout talent from among the conference attendees.
Each year, in the run-up to the conference, researchers are asked to submit papers for the conference reviewers’ consideration. Typically, around a quarter of all papers submitted are accepted into the conference in some form. EMNLP is a high-stakes, highbrow gathering that gives researchers the opportunity to showcase their latest findings to a roomful of their equally NLP-minded peers.
The Hong Kong edition of EMNLP, officially EMNLP-IJCNLP 2019 (IJCNLP standing for International Joint Conference on Natural Language Processing), was held on November 5–7, 2019 and featured 465 long papers, 218 short papers, and 44 demo papers. There were just shy of 3,000 submissions, 37% more than in 2018.
At the closing ceremony held on November 9, 2019, EMNLP announced the winners of the four awards up for grabs: Best Paper, Best Paper Runner-Up, Best Demo Paper, and Best Resource Paper.
The Best Paper Award went to Xiang Lisa Li and Jason Eisner from Johns Hopkins University for their paper, “Specializing Word Embeddings (for Parsing) by Information Bottleneck.” The Best Paper Runner-Up Award went to John Hewitt and Percy Liang from Stanford University for “Designing and Interpreting Probes with Control Tasks.”
The Best Demo Paper Award was won by a group of researchers from the Allen Institute for Artificial Intelligence and the University of California, Irvine for their paper, “AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models.”
The Best Resource Paper Award went to a group of machine translation researchers for their work exploring low-resource languages. The paper, entitled “The FLORES Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English,” was co-authored by eight researchers from Facebook Applied Machine Learning, Facebook AI Research (FAIR), Sorbonne Universités, and Johns Hopkins University: Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc’Aurelio Ranzato.
Low-resource languages are a hot topic in machine translation research, and a particular preoccupation for big tech companies such as Facebook, Microsoft, and Alibaba. These companies have their own motivations for seeking to understand (and generate) content in languages tied to markets that are typically difficult for US- or China-centric corporates to penetrate in meaningful ways.
Congrats to the #EMNLP2019 best paper award winners! 3/4
= Best Resource Paper Award =
The FLORES Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English
Guzmán, Chen, Ott, Pino, Lample, Koehn, Chaudhary, Ranzato
https://t.co/gyjvdy1UT7
— emnlp2019 (@emnlp2019) November 9, 2019
The awards were decided by three separate committees: one for Best Paper (and Runner-Up), another for Best Resource Paper, and one for Best Demo Paper. Initial nominations for the awards came from the 1,700 reviewers, 152 area chairs, and 18 senior area chairs, who put forward a shortlist of five candidates for Best Paper and another five for Best Resource Paper.