New research attempting to quantify the potential impact of large language models (LLMs) on various jobs finds that translation and interpreting are among those most likely to be affected.
In the March 20, 2023 paper, GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models, University of Pennsylvania’s Daniel Rock and OpenAI’s Tyna Eloundou, Pamela Mishkin, and Sam Manning (also affiliated with OpenResearch) created a new rubric to measure the overlap between the tasks that make up different professions and the capabilities of Generative Pre-trained Transformer (GPT) models.
“Our research serves to measure what is technically feasible now, but necessarily will miss the evolving impact potential of the LLMs over time,” the authors acknowledged.
Human annotators and GPT-4 itself applied the researchers’ rubric to data from the US Bureau of Labor Statistics on 1,016 occupations. Their task: to determine whether GPT or GPT-powered systems could reduce by at least 50% the time required for a human to complete a work-related task.
A profession’s “exposure percentage” is the share of its tasks that could be impacted by GPTs in this manner. The higher the exposure percentage, the more vulnerable, in theory, the occupation and its practitioners are to GPTs in their current state.
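For the mechanics-minded reader, the calculation behind the exposure percentage is straightforward. The sketch below assumes the paper's definition (a task counts as exposed if a GPT could cut its completion time by at least 50%); the example occupation and its task labels are hypothetical, chosen to roughly mirror the 76.5% figure reported later for translators and interpreters.

```python
def exposure_percentage(task_exposures: list[bool]) -> float:
    """Share of an occupation's tasks flagged as GPT-exposed,
    i.e., tasks where a GPT could cut completion time by >= 50%."""
    if not task_exposures:
        return 0.0
    return 100 * sum(task_exposures) / len(task_exposures)

# Hypothetical occupation with 17 tasks, 13 of them flagged as exposed.
tasks = [True] * 13 + [False] * 4
print(round(exposure_percentage(tasks), 1))  # 76.5
```

In the study itself, these labels came from human annotators (and from GPT-4 applying the same rubric) rather than from a fixed list, but the aggregation into a single percentage per occupation works as shown.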
The report found that about 19% of all US workers might see at least 50% of their tasks impacted by GPT and other LLMs.
According to the paper, occupations requiring science and critical thinking skills were less likely to be impacted by the LLMs available today, while those that rely on programming and language skills were considered more susceptible to LLMs. Jobs with higher barriers to entry also tended to track with greater exposure to LLMs.
Athletes, bus mechanics, and short order cooks were a few of the jobs on the (comparatively) short list of occupations without any GPT-exposed tasks.
How did translators and interpreters (T&Is) stack up? According to human annotators, T&Is are in one of the most vulnerable professions, with 76.5% exposure to GPTs and 82.4% exposure to GPT-powered software.
Practically speaking, the authors explained, the results do not necessarily mean that T&Is’ tasks can be fully automated; rather, the profession is simply among “those where we estimate that GPTs and GPT-powered software are able to save workers a significant amount of time completing a large share of their tasks.”
Of course, this finding is nothing new to the language industry, which has been trailblazing the adoption of the expert-in-the-loop model for years. Translated CEO Marco Trombetti, for example, suggested the metric of “time-to-edit” to track productivity gained by leveraging AI.
Other factors determining how widely GPT and GPT-powered software will be used in specific professions include the cost and flexibility of the technology; worker and company preferences; incentives; and the level of confidence humans place in these tools.
“One possibility is that time savings and seamless application will hold greater importance than quality improvement for the majority of tasks,” the researchers wrote.
T&Is did not crack the top five vulnerable professions as ranked by GPT-4, though. The model instead named Mathematicians and Tax Preparers as the most susceptible to GPT’s influence, with 100% of their tasks exposed to both GPT and GPT-powered software.