Behind the scenes at hospitals, pharmacies, and doctors’ surgeries, there is a whole industry dedicated to developing new and improved drugs for people. The trillion-dollar pharma industry plays a crucial role in researching, creating, and distributing drugs, with the aim of bringing breakthrough pharmaceuticals to market.
Peel back yet another layer and you will find a thriving ecosystem of companies that support the likes of Pfizer, Roche, and GSK in the drug development process. These contract research organizations (CROs), as they are called, provide services such as clinical research, project management, regulatory review, and oftentimes language services to major pharmaceutical companies around the world.
ICON plc is a leading CRO with its own internal LSP. ICON’s Adelina Lear spoke to the SlatorCon London audience in May 2019 about the role of language services in the drug development lifecycle. Lear is Manager of Language Services, which is part of Patient Centered Services, the division of ICON that focuses on commercialization and outcomes.
Lear told the SlatorCon audience that there are many points during the drug development process when the need for language services surfaces. Aside from document translation, services such as transcription and voice-overs are commonly required for focus groups and hearings, while linguistic validation is a key element of clinical outcome assessments (COAs) and quality-of-life materials, Lear said.
Documents for translation can relate to anything from patient recruitment materials to COAs. During the research phase, translation may be required for searches of existing literature and articles in other languages. Lear explained, “In the clinical research service area, you have serious adverse event reports (SAEs), which are going to need to be translated.”
A large part of the translation volume that Lear’s team handles comes from patient-centered services such as clinical trials. “In a clinical study, you are trying to get patients enrolled across different sites and different countries. That’s where we see the most translation,” Lear said.
Echoing the words of Roche’s Claudine Nick, who said during SlatorCon Zurich 2018 that language services are very often the “last link in the chain,” Lear admitted that translation “is quite often forgotten in clinical studies until the last minute,” despite there being such a frequent need for it.
Accuracy and Cultural Relevance Through Linguistic Validation
COAs are compiled in the form of a written questionnaire, which is sent out to patients (PROs), clinicians (ClinROs), observers (ObsROs), and performers (PerfOs), who are usually spread across many different countries. Questionnaires sent out to patients may be designed to assess the person’s ability to perform a certain task or sentiment toward a particular scenario, for example.
This is where linguistic validation comes in, Lear said: “What we want to do with linguistic validation is to make sure that the translations are conceptually but also culturally equivalent. It’s important because these are going to different sites and different countries, and when they all come back and the data is collected, we want the data to be equivalent.”
“Verbatim translations can be really dangerous and particularly in COAs” — Adelina Lear, Manager, Language Services, ICON plc.
According to Lear, some questions can be problematic. Consider, for example, asking a patient whether they are able to cut up meat with a knife and fork. “Maybe they don’t eat meat in other countries or maybe they don’t use a knife and fork,” Lear said. As a rule of thumb, she cautioned, it is important to bear in mind that “verbatim translations can be really dangerous and particularly in COAs.” For metaphors, she added, it’s a case of ensuring that “the metaphor is conveyed properly, accurately, that it’s cultural and makes sense for that target audience.”
To generate translations that are as culturally relevant and accurate as possible, Lear said there are “a lot more steps” for COAs than for other less sensitive documents.
Lear outlined the added steps of the linguistic validation workflow, which include: a conceptual analysis of the source questionnaire; one or two forward translations, plus an additional step where you find the “best of both [translations] during a reconciliation stage”; one or two back translations; cognitive debriefing (a small pilot test during which you ask a specific subset of patients whether they understand and can rephrase the text); quality control; and a final proofreading.
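The sequence of stages Lear describes can be sketched as a simple pipeline. This is a hypothetical illustration only, not ICON’s actual tooling; real linguistic validation involves human linguists, patient interviews, and project management at every step, and all names below are invented for the sketch.

```python
# Hypothetical sketch of the linguistic validation stages described above.
# Stage names paraphrase the workflow; nothing here reflects real software.
LINGUISTIC_VALIDATION_STAGES = [
    "conceptual analysis of the source questionnaire",
    "forward translation (one or two independent translators)",
    "reconciliation (merging the best of both forward translations)",
    "back translation (one or two, checked against the source)",
    "cognitive debriefing (pilot test with a subset of patients)",
    "quality control",
    "final proofreading",
]

def run_linguistic_validation(document):
    """Return an audit trail recording each stage applied to a document."""
    audit_trail = []
    for stage in LINGUISTIC_VALIDATION_STAGES:
        # In practice each stage is a human-driven task with sign-off;
        # here we only record that the stage was performed.
        audit_trail.append(f"{document}: {stage}")
    return audit_trail

trail = run_linguistic_validation("COA questionnaire (EN -> DE)")
```

The point of the sketch is simply that the stages are strictly ordered and each produces an auditable artifact, which is what ultimately supports the linguistic validation certificate mentioned below.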
The final translated materials are then sent to the client along with a linguistic validation certificate. Linguistic validation has what must be one of the language industry’s most complex and thorough translation production workflows.
Asked how long a linguistic validation workflow typically takes, Lear said the timeline can vary depending on the languages, author requirements, and the particular area involved. “If the therapeutic area is very rare, the cognitive debriefing part could take longer,” she said, adding that “your standard [linguistic validation workflow] would be around about eight weeks to include the cognitive debriefing, and that would be pretty good going. That could go on up to 12 weeks, maybe longer.”
Deadlines are generally defined by outside factors such as regulatory requirements, Lear said, which means “deadlines are really pressured and we’ve got submission timelines to meet.”
Lear believes that it is the role of the LSP to understand the impact of regulations governing the drug development process and to ensure the appropriate steps are taken. “If it’s going for a label claim, for example, you can’t drop the cognitive debriefing; you need to have that done,” she said.
Is there a place for using AI and machine translation (MT) in the materials Lear oversees? Lear said she does not think that “we can just step away completely and not have any human input at all. But maybe it can aid us and maybe certain file types are more suited to that.” In other words, linguistic validation workflows would rank lower on the MT-suitability scale.
SlatorCon London 2019 Presentation (Adelina Lear, ICON plc)