After one researcher recently asked (and answered) the question of whether post-editing can influence target language content, two other researchers have tackled the matter of predicting post-editing time by understanding post-editors’ working styles.
António Góis, Research Scientist at Unbabel, and André Martins, Unbabel’s Head of Research, authored a paper entitled “Translator2Vec: Understanding and Representing Human Post-Editors,” which was published by the European Association for Machine Translation and posted to arXiv on July 24, 2019.
The paper points to prior research on the effectiveness of post-editors, looking at a number of topics, among them: the relationship between pauses and cognitive effort, the use of novice versus professional post-editors for research purposes, and the impact of post-editor behavior such as planning ahead and mouse vis-à-vis keyboard use on overall performance.
The purpose of the Unbabel paper was to build on such work and find out whether it is possible to identify a specific post-editor based on their actions, whether meaningful representations of post-editors could be built that would allow researchers to draw useful conclusions, and, ultimately, whether these representations could prove useful in predicting the time needed to post-edit a document.
The researchers started from the premise that “the combination of machines and humans for translation is effective.” They then referenced previous studies showing that humans are more productive when post-editing machine translation than when translating from scratch.
“Understanding how human post-editors work could open the door to the design of better interfaces, smarter allocation of human translators to content, and automatic post-editing”
Moreover, they hypothesized that understanding how humans perform post-editing, and which methods are most effective, could make the human-machine interaction in post-editing even more successful.
In practical terms, “understanding how human post-editors work could open the door to the design of better interfaces, smarter allocation of human translators to content, and automatic post-editing,” they posited.
Identifying “Good” Post-Editors
The study relied on a dataset of more than 66,000 source documents and involved more than 300 post-editors working from English into French and German. The source documents for translation were customer service email messages sent to Unbabel’s translation service. According to the researchers, the dataset was “the largest of the kind released to date” and “the only one we are aware of with document-level information.”
The researchers looked at common post-editing operations such as inserting, deleting, and replacing a word or block of words and also took into account keystrokes, mouse actions, and waiting times. From the way these operations, or “action sequences,” were carried out by individuals, they hoped to identify specific post-editors — and do so more reliably than they would by simply comparing machine translated text with post-edited text.
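As a rough illustration of what such “action sequences” look like as data, the minimal Python sketch below reduces a session log to simple count-and-timing features; the event schema and field names are hypothetical stand-ins, not Unbabel’s actual logging format.

```python
from collections import Counter

# Hypothetical action log for one post-editing session: each event is an
# (action_type, value) pair covering edits, mouse actions, and waiting times.
# The schema is illustrative only, not Unbabel's real format.
session = [
    ("wait", 4.2),        # seconds of inactivity before the first edit
    ("insert", "word"),
    ("delete", "block"),
    ("mouse", "click"),
    ("replace", "word"),
    ("wait", 1.1),
]

def featurize(events):
    """Reduce a raw action sequence to a simple feature vector:
    counts per action type plus total waiting time."""
    counts = Counter(action for action, _ in events)
    total_wait = sum(value for action, value in events if action == "wait")
    return {
        "n_insert": counts["insert"],
        "n_delete": counts["delete"],
        "n_replace": counts["replace"],
        "n_mouse": counts["mouse"],
        "total_wait_s": total_wait,
    }

print(featurize(session))  # e.g. {'n_insert': 1, ..., 'total_wait_s': 5.3}
```

Even coarse summaries like these carry an individual signature; the paper works with the full sequences rather than such summary counts.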
The researchers aimed to “understand which activity patterns characterize ‘good’ editors” in terms of translation quality and speed
Although identification of post-editors was an important part of the study, the researchers were “not interested in the problem of editor identification per se, but only as a means to obtain good representations.”
For them, “good representations” were those that managed to group similar post-editors together in clusters. By interpreting these clusters, researchers wanted to “understand which activity patterns characterize ‘good’ editors” in terms of translation quality and speed.
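To picture how such a grouping might work, the sketch below clusters stand-in editor vectors with k-means; the 32-dimensional embeddings, the cluster count, and the use of scikit-learn are assumptions for illustration, not the paper’s actual setup.

```python
import numpy as np
from sklearn.cluster import KMeans

# Suppose each post-editor has been mapped to a fixed-size vector (e.g. by a
# model trained on the editor-identification task). Random stand-ins here.
rng = np.random.default_rng(0)
editor_vectors = rng.normal(size=(300, 32))  # 300 editors, 32-dim embeddings

# Group similar editors; clusters can then be inspected against quality and
# speed metrics to see which activity patterns characterize "good" editors.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = kmeans.fit_predict(editor_vectors)
print(np.bincount(labels))  # number of editors in each cluster
```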
Untapped Source of Information
The key findings of the study were threefold: first, “that action sequences can be used to perform accurate editor identification”; second, “that they can be used to learn human post-editor vector representations that cluster together similar editors”; third and crucially, “editor representations can be very effective for predicting human post-editing time.”
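As a hedged sketch of how the third finding might be operationalized, the toy regression below predicts editing time from an editor embedding concatenated with a basic document feature; all data is synthetic, and the choice of ridge regression is an assumption, not the authors’ model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Toy setup: predict post-editing time from an editor embedding plus source
# length. Everything here is synthetic; the paper learns its representations
# from real post-editing logs.
rng = np.random.default_rng(1)
editor_emb = rng.normal(size=(1000, 32))          # one row per (editor, document) pair
doc_len = rng.integers(20, 400, size=(1000, 1))   # source length in tokens
X = np.hstack([editor_emb, doc_len])
y = 30 + 0.5 * doc_len.ravel() + rng.normal(scale=10.0, size=1000)  # seconds

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.2f}")
```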
Slator contacted André Martins, co-author of the paper, for additional comment on the research. Martins explained that being able to predict the time someone will take to post-edit content can be useful for matching linguists to a particular text type. Moreover, according to Martins, “it may also be used to inform customers about how long we expect a document to be translated.”
“Human post-editors who spend longer times reading before starting to type, tend to type fast and to always edit left to right. By contrast, those who type immediately tend to spend some time jumping back and forth.” — André Martins, Head of Research, Unbabel
Related to predicting editing time is the quality aspect. Martins said, “We are currently looking at ways to use this information for human translation quality estimation (i.e., predicting how good a translation is before sending it to the customer). This will allow us to detect eventual translation mistakes and re-assign the task to another human translator.”
Understanding post-editing strategies also makes it possible to “design our interfaces to better promote those behaviors,” Martins added. He said one behavioral insight that surfaced during the study was that “human post-editors who spend longer times reading before starting to type, tend to type fast and to always edit left to right. By contrast, those who type immediately tend to spend some time jumping back and forth.”
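Both behaviors can be read directly off a session log. The small sketch below, using the same kind of hypothetical event format as earlier, computes an editor’s initial reading pause and how monotonically (left to right) they moved through a document.

```python
def session_stats(events):
    """Two behavioral signals from one session log: the pause before the
    first edit, and the fraction of edits that move left to right.
    Events are hypothetical (action, token_position, seconds) triples."""
    first_edit = next(i for i, e in enumerate(events) if e[0] != "wait")
    initial_pause = sum(e[2] for e in events[:first_edit])
    positions = [e[1] for e in events if e[0] != "wait"]
    forward = sum(b >= a for a, b in zip(positions, positions[1:]))
    monotonicity = forward / max(len(positions) - 1, 1)
    return initial_pause, monotonicity

events = [
    ("wait", None, 5.0),   # 5 seconds of reading before typing
    ("insert", 3, 0.4),    # edit at token position 3
    ("replace", 7, 0.6),
    ("delete", 5, 0.3),    # a backward jump
]
print(session_stats(events))  # -> (5.0, 0.5)
```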
Overall, the results demonstrate that the process of post-editing contains “precious information unavailable in the initial plus final translated document,” the authors wrote. They concluded that the post-editing process “is a rich and untapped source of information,” and it is the researchers’ hope that “the dataset we release can foster further research in this area.”
Post-editing productivity goes to the heart of Unbabel’s business and operational model, of course. The company is one of the language industry’s best-funded startups and in the spring of 2019 hired key researchers from Amazon Translate.