For Users, Fluency Trumps Adequacy in Machine Translation, Study Finds

New research submitted to arXiv.org examined how much users trust machine translation based on the fluency and adequacy of its output. The paper was written by the University of Maryland’s Marianna J. Martindale, a PhD student, and Marine Carpuat, Assistant Professor of Computer Science.

Judging by the number of papers published on arXiv.org, Cornell University’s automated online distribution system for research papers, neural machine translation (NMT) research continues at a busy pace, though the bustle has eased somewhat since an all-time high in November 2017.

This new study by the two University of Maryland researchers explored uncharted territory: user trust in MT. The research examined how much trust people place in MT engines based on the fluency and adequacy of their output, testing three hypotheses:

  • Good translations maintain or improve user trust
  • Bad translations (either not fluent or not adequate) erode user trust
  • Misleading translations (not adequate) erode user trust more than output that is simply not fluent

The researchers confirmed the first two hypotheses but found that, in their test set, fluency actually trumped adequacy.

The researchers observed a much larger dip in user trust when translations were not fluent than when they were not adequate. Furthermore, once participants were shown fluent and adequate translations again, their trust returned to roughly its previous level despite the drop caused by the mistranslations immediately prior.

“Users responded strongly to disfluent translations, but were, surprisingly, much less concerned with adequacy”

Of course, the participants were largely laypeople when it comes to MT and the language industry in general. “This unfamiliarity with both human translation and machine translation fits our expectations for typical users of MT for assimilation,” the paper reads. Among the 89 participants, 74.2% said they were “unfamiliar with human translation” (a concept not explained in more detail), while 52.8% said they were unfamiliar with MT. Their degree of trust in MT output therefore does not necessarily reflect the trust that linguists would place in their MT providers.

However, the general concern over how much people trust MT naturally holds wider implications for language service providers (LSPs) and the overall development of MT. While translation quality can be a matter of life or death in areas like the life sciences, a recent study found that a significant portion of online consumers already rely on machine translation when making buying decisions.

The research team likewise tempered their findings by citing the inherent limitations of their pilot study: a relatively small sample size, tests limited in scale and scope, and possible differences in comprehension among participants, among other factors.