Why Everyone Is Using Subtitles Now with David Orrego-Carmona

SlatorPod #155 - University of Warwick’s David Orrego-Carmona on Subtitling

In this week’s SlatorPod, we are joined by David Orrego-Carmona, Assistant Professor at the University of Warwick, to discuss his research on language technologies, audiovisual translation, and users of translation.

David shares his background studying translation in Colombia and how it led to the formation of a research group focusing on audiovisual translation and subtitling. He outlines how he is developing Translation Studies at Warwick, not only to teach students about translation and culture, but so they can have a direct link to the industry. 

David reveals the key findings from his PhD on the reception of interlingual subtitles, where he used eye-tracking to compare the reading behaviors of viewers watching non-professional and professional subtitles. He talks about how multilingual content like Netflix’s Sense8 and 1899 is changing the perception of subtitling.

David challenges the idea of the invisibility of subtitles, as users between the ages of 18 and 24 in the UK are more likely to use intralingual subtitles in English.

David gives his thoughts on the age-old debate of subtitling versus dubbing, where there is no right answer as both modes of translation are efficient and can convey meaning. He talks about how non-professional translators are implementing machine translation in a more informed and educated way through pre-editing.

Subscribe on YouTube, Apple Podcasts, Spotify, Google Podcasts, and elsewhere

David discusses why it’s important for students to learn about the requirements of different media in subtitling, including short-form content like YouTube and TikTok. He touches on the impact of ChatGPT on academia, from plagiarism to integrating large language models into the curriculum.

The pod rounds off with David’s current research projects, the first on understanding how people watch subtitles and the second on how machine translation is used by local authorities, NGOs, and charities in the UK.


Florian: First tell us a little bit more about your background, like, how did you find your way into translation studies, and in particular, like, audiovisual translation, subtitling, and language tech that you focus on?

David: I studied translation in Colombia, and basically I wanted to travel. That was my main reason to study languages, and I had two options: either language teaching, and I wasn’t too keen on that, or translation, so I decided to go for translation. But I didn’t have any courses or modules on audiovisual translation or subtitling at all, because at the time the industry in Colombia was very underdeveloped in a way. So we started working with some friends as part of our research group, and we kind of trained ourselves to become subtitlers and to go deeper into audiovisual translation in general, and that’s where it all started. I did have training on translation technologies, and it was really useful; it really helped me, especially in my first in-house translator job. I was very happy to implement a lot of what I learned at the time. So I have been maintaining both areas, in a way, throughout my career, and lately, because of all these changes, they have been converging quite a lot, and I’m very happy to place myself at that intersection.

Florian: You said you self-formed that group and you self-taught in that group, or just tell me a bit more about that. That sounds super intriguing.

David: There was this research group on translation studies, and then we created like a study group on audiovisual translation. There’s actually a paper that came out from that just exploring the situation of audiovisual translation in Colombia at the time. So we did some interviews and we talked to people who were starting to do things on audiovisual translation. Now it’s very different. Now people actually study technologies and subtitling in the program. But for us, it was really us working and teaching ourselves with some books that we found on subtitling and audiovisual translation and having discussions, running like small experiments comparing subtitling for a film club as well. So, yeah, we taught ourselves.

Florian: Interesting, so now you’re at the University of Warwick. So tell me a bit more about the program and then also I want to know what’s the typical profile of a student who joins your program, your current program, but maybe first a bit more about the program and then what’s the profile of the student who joins?

David: I joined Warwick recently, in September, and it was a conscious decision of the university to bring in someone to integrate audiovisual translation and translation technologies. The program as such is a very traditional UK program at the moment, integrating translation and cultural studies within a language department, so it’s different from other institutions in that sense. At the graduate level, we are right now developing a new pathway in translation studies to offer our students the possibility to strengthen those direct links with the language industries, so I’m actually developing some of these programs. At the undergraduate level, we mostly have UK students, so that’s British students with European languages; we teach French, German, Spanish and Italian at undergraduate level. Then the Masters is different. In the Masters, traditionally the focus has been on translation, cultures and research, at that intersection, so really training people in those areas. But now we’re actually trying to diversify our portfolio. So we are developing a suite of modules on translation technologies and audiovisual translation to cater for the industry and for those students who want to study translation and cultures but also have this direct link with the industry.

Florian: In terms of the profile, yeah, I’m just always curious like in 2023, what type of student joins a translation program? In your case, maybe, I don’t know, is it related to the whole streaming boom or when you go into subtitling? Just curious about the profile?

David: The students for the Masters are mostly… We have mostly Chinese students and then some European students, and they are interested, in our case, in literary translation. But the module I just launched on subtitling is one of the most popular ones, actually. Most of our students are taking that module. So we are seeing that students are becoming more and more aware of the relevance of audiovisual translation and the possibilities they have to work in this area, so we want to offer those areas and those possibilities. Our students right now are mostly students with a background in languages, and I think that’s also, in a sense, a reflection of the type of traditional program that we have right now.

Florian: Interesting, so if I get this right, your PhD was on the reception of non-professional subtitles using eye-tracking. Fascinating, if I get this right. So tell us a bit more about that and some of the key findings, and also how has your research evolved since then? What other kinds of topics are you researching?

David: It was on the reception of non-professional subtitling because I wanted to… At the time there was still little research on non-professional subtitling, or fansubbing, as some people might be more familiar with the term, and my idea was to look at the ecosystem of non-professional subtitling. To understand how the communities producing the subtitles come to be, what mechanisms they use for production, how the users of the subtitles react to them, how much information they gather, and how happy they are with the subtitles. So on the one hand, I think I learnt a lot from this study about how technology influences the emergence of these communities and how these communities actually make a very proficient use of technologies and innovate a lot to develop their own production processes. And then on the reception side, I was talking to viewers to see how they use subtitles, and I was using eye-tracking to see whether there is any difference in reading behaviors when they are reading professional subtitles or non-professional subtitles. At the time, this was 2013 to 2015, so it was just the beginning of the real growth of the streaming services and Netflix and all that. So it was very interesting to see at the time how people were more conscious about the use of dubbing and subtitles. They were basically developing a more conscious approach to how to engage with these different modalities. So people would say, oh well, if I really want to relax… This was in Spain, by the way, so a traditionally dubbing country. So these people would say, if I really want to relax, if I don’t want to make any additional effort, then I just use the dubbed version. Or if I want to learn the language or to go more into detail with the product or be more focused on it, then I use the subtitles, so that was one of the findings. It was also interesting to see that people were at the time aware of differences between these subtitles.
So they knew that the subtitles that they were accessing through these platforms were not the same as the professional subtitles they would normally get on a DVD or at the cinema. And they adjusted their demands to these subtitles accordingly, and that more conscious approach to consumption was very interesting. I think we’re seeing more and more of that, and we’ll get to that in the conversation. And from the eye-tracking data, I actually found that the differences depending on the type of subtitles were not that relevant really. People were reading them in the same way, but there were other things that were really triggering more attention, such as misspellings or low-frequency words. And these are some of the things we’re studying now: how different variables affect the engagement of viewers and reading behavior when it comes to subtitling.

Florian: Is fansubbing, or fansubs, still a big thing? Or maybe now, with technology being so prevalent, it’s harder to say. Generally, is it still a big thing, I guess?

David: It’s changing a lot again, because it went from being this very niche thing in the 90s and the beginning of this century, to becoming a really huge thing ten years ago, and now it’s kind of dying out, at least the market for American TV series or films, because now we have all these subtitles straight away from the producer. So what many groups are doing is just ripping the subtitles and redistributing them. There are still hardcore communities that create different subtitles and different types of subtitles, and these are mostly communities that are really committed, really the hardcore fans who want to engage with the content differently: to learn more about the source culture or really help with the distribution of underground products. But I think their activities are definitely shrinking due to all these changes in the market.

Florian: For the past five years personally, of course, I’m watching Netflix, got Disney+, Apple, etc. Recently, I guess in the past 12, 18 months, we’ve had some hit shows like Squid Game coming in from Korea that the whole world watched, or 1899, where people were speaking a lot of different languages, which probably made it a little harder to dub, or at least dubbing it would have kind of lost the point. So have these multilingual hit shows changed the perception of subtitling in academia, in the industry? Or are you seeing any changes emanating from those types of titles?

David: Yes, absolutely, and the thing is, people are talking more about subtitling and translation. I think that’s one of the main takeaways for us from all these changes. Netflix has been experimenting with this for a while. 1899 is a clear example, but there was Sense8 before. There was an attempt to bring characters with different languages and different backgrounds into the same product and then use translation to communicate, to integrate translation as an essential part of that product, which is the same case for 1899. And people are responding well to that, I think. Partly because of these products in particular, but also, as you mentioned, Squid Game or Dark, which are products that are understood as foreign products and force people to access them through subtitling. And I think that’s the barrier that we’re now breaking: people are consciously approaching subtitling as an essential part of their entertainment strategies. And I think this has also had an impact on how they see subtitling, because in the case of Squid Game or Dark, the product by default is understood as a foreign product. So people understand that what they’re watching is further removed from them in comparison to whatever they were watching initially. And that challenges some of the ideas that we have in translation studies about the invisibility of subtitles. I think because of all these changes, people are more willing to understand that subtitles need to be there and that they are part of the product, and that definitely changes the way we engage with and produce subtitles. Something interesting here that we should take into consideration is that people are using more intralingual subtitles, so subtitles in the same language. There was a recent poll in the UK, for instance, and I think the stats say that something like two-thirds of users between 18 and 24 years old in the UK use subtitles in English for English products. Yeah, it’s amazing.
I mean, the reach of the subtitling industry in that sense is incredible, and then the next group, I think it was between 25 and 50, is about one-third of the population, so we can see that this is definitely being driven by younger audiences. What’s interesting here is that the subtitles that they are using are the subtitles that cater for deaf and hard-of-hearing audiences, so they are designed in a different way. And then we have the clash that we found with the Squid Game debate, that the subtitles are not saying the same as the original. Well, if the subtitles are prepared for a different audience, they cater for the needs of that audience in particular. So if a hearing viewer uses closed captions, they will have access to both the source and the target translation in the same space and they will be able to contrast them. But then they are not the ideal viewer for that product, so that gives them a different way to assess the product. And then customer satisfaction here could suffer because, well, they’re using a product that is not intended for them. And for us, I think the question is how do we try to balance those needs and how do we try to create subtitles that are open subtitles for the whole population and that cater for everyone? And I think that’s going to be one of the interesting challenges that we have in the future.

Florian: Really tough though, right, because for me, I love subtitles. I watch almost all of my content… I mean, I watch it in English, right, so I don’t watch the dubbed version, but I always have the subtitles on. But I don’t need the “dog barks in the distance” type of caption… I’m not sure what the technical term is. So if I had a choice between closed captions for the hard of hearing and just a clean subtitle, because sometimes I just like to read it and be sure that I understand all the dialogue, I would totally do that. I’m sure I’m not alone. And as you just pointed out, with younger folks, a lot of people will consciously turn on the subtitles. So do you think it’s going to… You say you want to have a subtitle that works for both, but maybe just have both options?

David: Then you can start offering options. But then do we offer the same option to the 25-year-old that we offer to the 50-year-old? And that becomes part of the problem, because how do we classify these people? And, in particular with intralingual subtitling, we have the problem that people have the capacity to access both. We are conducting a research project right now where we’re interviewing people about their use of subtitles, and they say, well, if I can access the source language, then I will compare it. But if I’m watching something in Korean or in Russian, then I’m happy to just follow the subtitles and assume that they are fine. So we are in a situation where language knowledge is actually becoming an issue for people to enjoy the content, because they will automatically compare the subtitles or automatically try to use whatever knowledge they have to assess the quality. Whereas if they don’t know anything, then they are happy to go along and fully trust the subtitles. So this classification of viewers, I think, is going to be one of the challenges that we have. What do we want to offer people and how will people react to whatever we have to offer?

Florian: It’s very hard for me if I don’t have subtitles. Just yesterday, I watched Chris Rock’s new special, and I got about halfway in, and because I think it was taped live and just came out a couple of days ago, there were no subtitles. Nobody had actually written the subtitles yet, so I think it was AI captions, and they were always very slow, so you couldn’t really follow it, so I turned it off. And I’m like, yeah, it’s comedy, he speaks very slowly, very deliberately, but sometimes I’m like, I’m missing my subtitles. Anyway, you see I’m a fan. Let’s go to dubs versus subs, the big debate. And you mentioned Spain before, and you said in Spain, if people want to relax, they turn on the dubs, and if they want the original, they have the subs on. I grew up on dubs as well. In Switzerland we get a lot of German content, of course, from Germany, and in Germany they dub even in the cinema, which they don’t do here in Switzerland. So I’m kind of used to dubs, but I transitioned into subs. So what’s your stance, I guess, if there’s a stance to be had, or what’s the debate there currently?

David: I think we’re never going to get rid of this one, because people will always be trying to assess which one is better, trying to find an answer. The straightforward answer is that there’s no answer, because there are many factors that affect the way you engage with content, and familiarity and habituation are huge ones. Traditionally we used to divide countries into dubbing countries and subtitling countries, but that doesn’t apply anymore either, because with Netflix and all these platforms, you can access many different versions. I mean, you have the choice. We were talking about choice before. You do have more choices. So I think we are going to get to a point where people just make an informed decision among the offers that they have. But in terms of translation, both modes, or all modes of translation, are efficient and can convey meaning. It’s a matter of how willing the users are to engage with these forms of translation, I think.

Florian: Just another brief example. I know people don’t join us for my takes, but last week, with the kids, I watched a show about the healthcare system, where they followed some nurses around, and it was dubbed into Swiss German because it was recorded in the French-speaking part of Switzerland. It was lip-synced, dubbed into Swiss German, which I’ve never seen before. That was the first time in my life I saw anything dubbed into Swiss German, which is kind of a weird Southern Germanic dialect, and it was really easy to follow. I liked it a lot, actually more than standard German dubbed content, maybe because it was even closer to home, and it was really well done. So I guess my point here is that it is an emotional thing for me. It was very close, right, and it was very well done, so I kind of lost… It’s like the Spanish example you mentioned: there was zero effort for me. I could just consume it. So, yeah, there’s a place for dubs even for, I guess, a dubbing critic like me.

David: We are seeing changes like this. If I may add something, in Latin America, for instance, until 2010 most paid TV channels would have subtitled versions, because not many people had access to these channels, so their community of users was very small. As they started to expand in Latin America to reach more homes, what they did was dub all the content. So suddenly, series that used to be available only with subtitles were dubbed, and right now it’s harder to find subtitled versions on these TV channels and at the cinema in Latin America than it is to find dubbed versions. So for those people, dubbing is actually the better alternative, or they are more interested in accessing that, and companies understood that and decided to invest more. So I think we will continue to have both, or different forms. In Eastern Europe we have voiceover, and some communities enjoy the voiceover for films, exactly.

Florian: Let’s talk a bit about machine translation. You recently published a long paper in Revista Tradumàtica, I think it’s called, about the use of machine translation among non-professional translators. Can you just share the top two, three, four takeaways from that paper for us?

David: I think it was interesting to see again how language knowledge affects the way you engage with these resources. Many of the respondents to the survey… The article tries to bring together the views of different respondents about how they use machine translation in different professions, apart from translation itself. And what I found was that people make a conscious use of machine translation and try to overcome their own language barriers or their basic level of proficiency by implementing machine translation in a more informed and educated way. So it’s not just randomly approaching machine translation and assuming that the outcome is going to be perfect, but understanding that they will need to engage with the text, so that was interesting. Something else I found was that people were pre-editing themselves. They understood that some language constructions in Spanish or Portuguese would be more cumbersome, like relative clauses and all this, so they were writing originals with simpler sentences so that it would be easier for the machine to produce a better translation. So that idea that people are reassessing their own production based on what the system will do later on, I think that was quite interesting in terms of how human and machine cooperate. But I think the main takeaway for me is that with these new tools, and now with ChatGPT and Bing Chat, what we’re seeing is that the systems require people to have a high understanding of how they work in order to access and assess the information that they get, and I think that’s going to be the challenge. How are the systems implemented? Because the systems are more and more designed to provide a frictionless experience. So instead of going through the whole list of possible answers that you get on a Google search, you go to ChatGPT and you get one answer.
So once you have one answer, you don’t have that opportunity, that invitation, to engage and challenge and compare different outputs. Bing Chat is a little bit different in that sense, but then we end up in a situation where the way the systems work kind of predefines the outcome that you’re going to get as a user, and you might not be aware of that. And I think that’s happening with machine translation, and it’s easy to see how that can be transferred to this broader discussion that we’re having about AI in society in general.

Florian: When you talk to former graduates or people that are just entering the industry after having graduated, what do you see as the impact of all this automation, expert-in-the-loop translation, on the day-to-day working conditions of professional translators? What are you hearing from some former graduates?

David: I’ve been teaching at Masters level for five or six years, so my graduates are relatively recent arrivals in the industry, but most of the people I’ve trained have gone on to become project managers and then freelance translators after that, and many of them implement technologies to their own benefit. They really understand that by implementing this technology they will have more opportunities to get jobs and to choose the type of jobs that they want to do. So I think at least amongst this younger generation of professionals, there’s a more positive attitude towards what’s happening. I’m also talking about the UK, and I think that’s a very different market in terms of how it’s configured; it’s really understood as an industry and people look at it in a very proactive way. I think that we as a society, or as part of the sector, are very proactive in that sense, and that’s good for the changes that I’m seeing now. It’s very different from the general attitude towards MT we see elsewhere.

Florian: It’s been around for a while, so I think there is a whole cohort and generation that should have a positive attitude. I’m glad to hear that. Let’s talk about the different platforms. We spoke before more about the feature content, Netflix, the big hit shows. How relevant are YouTube subtitles and TikTok captions to what you do? Is that part of the conversation, or is it mostly around big-budget productions?

David: It is part of the conversation, and the module I designed recently on subtitling takes into account the situation in the industry, because in my view, the big growth in terms of content is going to be in these types of channels, so YouTube, TikTok, Instagram. So it’s important to take into account, I think, the basic principles that subtitling requires and then discuss with students how these are going to be affected by the requirements of different media. What I try to embed in my training is translation in general as problem solving. So you have the situation: you have less space to integrate text, but people are more engaged. You have your phone in your hand, all your attention is there, so the timing of the subtitles, the synchronization, can be different, and you can have a little bit more text. I try to integrate that into my teaching to show students that the principle remains the same and you just need to ask the right questions, and I try to equip them with the tools to think about that situation and ask the right questions to come up with solutions for those situations.

Florian: What does a professional think about this? I think it’s mostly on TikTok, or maybe some YouTube Shorts or Reels or whatever it’s called, these one- or two-word captions or subtitles where it’s literally just one word at a time, but super fast. I don’t know what the appropriate term there would be, but I find it sometimes a little too much that it’s just one word. Give me at least two or three.

David: I think you have very little time to actually consolidate whatever you are reading, and I think that’s actually detrimental for comprehension. It would be better to have more words on the screen; that’s my view, and that’s, I think, what I’ve seen based on my eye-tracking research. I think there was actually a study testing this one-word-at-a-time type of presentation on smartwatches, and people cannot really consolidate the information. So it’s harder to actually understand whatever you are reading because the context is just too minimal.

Florian: Since this is kind of happening, I guess, not because somebody likes to do it, but because it gets more views or likes or whatever, retweets, why do you think this has happened at all? I mean, is it more viral? Is it more kind of attention grabbing? Or is it literally the 10 seconds thing and nobody’s consuming this more than like 20 or 30 seconds? Or why do you think this kind of evolved at all? These, like, one word things.

David: I think it’s a strategy to use the subtitles as attention grabbers for the product rather than for the content as such. I think it’s a way of engaging with the viewer, because if you think about everything you watch on Instagram, how much you actually remember from it is very little. So the idea is to make sure that you have the consumer in front of the screen for those 10 seconds, and then they will move on. I think that’s the main point. Some of those will go viral; I think that’s related to the wider context of media consumption rather than the subtitles as such. Also, they look flashy and they look more interesting. I think they make products more interesting, and that’s something that we are discussing now: how these integrated subtitles or creative subtitles actually also help with the engagement of viewers.

Florian: Let’s talk about ChatGPT. In academia, I guess you have a few more challenges than us on the industry side because, I don’t know, students coming in, hey, here’s my essay, it was written by ChatGPT. Tell me about the university first, nothing to do with the language industry at all: how is that being perceived by people at universities who have to grade papers and read essays and content from students? And then on the industry side, are there some early applications you’re seeing out there in subtitling, or not yet?

David: University first. Yes, it’s been very problematic, because having the possibility, or giving students the possibility, to just write an essay in a couple of minutes without really engaging with the content makes it really difficult for us to do our jobs. The universities have invested a lot in academic integrity programs and plagiarism policies and all this, and suddenly from one day to the next that was all gone and we needed to focus on this new problem. So there are issues at different levels here, and I think the assessment issue is the one that we are most interested in right now because it’s something new and we need to assess how to integrate it. And I think one of the main problems here is the social responsibility of this type of technology. Think about all the university committees that have been created since December to address the issues of ChatGPT and how we are going to try to identify this. And if we do identify it, bringing forward a case for plagiarism in a situation like this takes a huge amount of manpower, so it’s going to be a couple of academics involved. So I think the main problem with that part is that suddenly we have this tool and then it becomes a societal problem. Who’s going to tackle the issues that are generated by this? In the short term, I think we need to develop new modes of assessment. I’m a believer that we need to integrate this into our forms of assessment. So we try to develop types of assessment in which students can use, and critically use, these tools and then reflect on their practices and develop different projects over time, though that might require more oversight and additional work for us. But at the same time I think the integration of ChatGPT can help us to rethink teaching in general, because we are in a situation where we rely on assessment as the only method to evaluate our students, and that happens at the end of the module.
So we don’t really get the opportunity to see how the feedback we provide helps students improve. We could also use ChatGPT or these types of resources to give students additional feedback, or additional sources for them to access feedback themselves and improve their work. So I think there are opportunities there as well to rethink teaching more broadly, and then in the long term to consider how our programs in general can integrate ChatGPT or these types of resources, or embed them more thoroughly in training, because that’s going to be the reality. These systems are going to be integrated into word processors and email and all this. I think it’s important, though, to consider how we’re going to develop the basic skills, and that’s not only a problem for academia but for post-editing in general. If we are all expected to post-edit, yes, I can train post-editors now, but how do we ensure that they are good translators before they become post-editors? How do we create the conditions under which students still have the time to become good translators before they become post-editors? Because the skills needed are related but are not the same, especially considering newer machine translation systems. You need to be a really good translator to identify the bigger problems in machine translation output.

Florian: That’s what I always say. When we talk to investors and others looking at the industry, I always say that at this point you need very good translators to add that additional expert layer on top of a good, well-trained machine translation system. The low-hanging fruit is long gone, and everybody who’s still doing this is a highly trained professional. Let’s talk about another OpenAI product, Whisper, released in various shades of open sourcing, I guess. They just released an API. Are you seeing this at all in subtitling or live captioning at the moment? Is it relevant? Are you looking at this? Because it’s quite good.

David: It is, and I think it’s very relevant. It’s very important for us to keep an eye on it, both for the impact on training and for the professional landscape in general. The ability, or rather the need, to respeak the audio in order to create live subtitles developed over the last ten years or so and became quite the standard way to create live subtitles. But if systems like Whisper become more widely available, and now that we have the API as well, it’s going to be integrated into all types of platforms and apps, so we need to reassess whether there is still a need to train respeakers, for instance. Is there a market for this, or does it make more sense to use a system and train post-editors for the output of systems like Whisper? So we definitely need to keep an eye on that. I think these new systems, ChatGPT or Whisper, will also radically change the way we assess those outputs, because one of the issues we’ve been having is with rephrasing. In subtitling, you need to condense information, and my assumption is that these systems, if you combine them, could help quite a lot in these processes. So even the skills that we thought were essentially human can actually be automated to some extent thanks to these tools, and that will change the professional landscape and the training landscape for sure.
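The workflow David describes, taking timestamped output from a Whisper-style speech recognition system and turning it into subtitles for a post-editor to refine, can be sketched in a few lines. This is a minimal illustration, not any tool’s actual pipeline: the segment dictionaries below mirror the general shape of what ASR tools typically return, but the field names are assumptions for the example.

```python
# Minimal sketch: turn timestamped ASR segments (the general shape that
# Whisper-style tools return) into SRT subtitle cues ready for post-editing.
# The segment dicts here are illustrative, not a specific tool's output.

def fmt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments: list[dict]) -> str:
    """Render [{'start': s, 'end': s, 'text': str}, ...] as SRT cues."""
    cues = []
    for i, seg in enumerate(segments, start=1):
        cues.append(
            f"{i}\n"
            f"{fmt_timestamp(seg['start'])} --> {fmt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(cues)

segments = [
    {"start": 0.0, "end": 2.5, "text": "Welcome to the show."},
    {"start": 2.5, "end": 5.0, "text": "Today we talk about subtitles."},
]
print(segments_to_srt(segments))
```

The point of the sketch is how little glue code sits between raw ASR output and a draft subtitle file; the genuinely hard parts, condensation, segmentation and timing decisions, are exactly the steps David notes still need a skilled human or a carefully combined system.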

Florian: Do you think translators and subtitlers need some basic coding skills in 2023? At Warwick, are you offering a Python module, a kind of Python 101? Is it a mandatory part of the curriculum, or what are your thoughts around that?

David: We don’t offer one, and I’m still in two minds about it. I think Python programming skills would be an asset to some translators, or some language professionals. Let’s talk about language professionals in general, because we train multifaceted professionals who will occupy different spaces at different companies or in the market. I think there are actually skills that are more relevant to them than coding, because we have teams developing these tools, but we do need to provide our students with some understanding of how it all works. The main point is not so much being able to do the task yourself, but being able to communicate with the professionals who build the resources translators are going to use. At the same time, when you think about the translation curriculum, you need to consider so many different aspects that could play a role in the professional life of a translator. Think about multilingual copywriting, SEO, project management, or something as basic as negotiation skills. I would actually put those higher than coding right now. So I don’t think everyone should know Python. It could be an asset to some, but there are many opportunities for language professionals to evolve in different ways. My main recommendation would be for students to try different things and then specialize in an area they really feel passionate about, because I don’t think everyone will like to be a programmer.
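To make concrete the kind of small, optional scripting task a subtitler might actually meet, here is a hedged sketch of a reading-speed check: characters per second for a cue, compared against a guideline limit. The 17 cps threshold is a commonly cited figure, used here purely as an illustrative default; real broadcaster and platform style guides set their own limits.

```python
# Illustrative sketch: check whether a subtitle cue respects a
# reading-speed limit. 17 characters per second is a commonly cited
# guideline; actual style guides define their own thresholds.

MAX_CPS = 17.0  # characters per second, illustrative default

def reading_speed(text: str, start: float, end: float) -> float:
    """Characters per second for a cue shown from `start` to `end` (seconds)."""
    duration = end - start
    if duration <= 0:
        raise ValueError("cue must have positive duration")
    return len(text.replace("\n", " ")) / duration

def fits_guideline(text: str, start: float, end: float,
                   max_cps: float = MAX_CPS) -> bool:
    """True if the cue can comfortably be read in the time it is on screen."""
    return reading_speed(text, start, end) <= max_cps

cue = "Today we talk about subtitles."   # 30 characters
print(reading_speed(cue, 0.0, 2.0))      # 15.0 cps
print(fits_guideline(cue, 0.0, 2.0))     # True
```

A ten-line check like this is roughly the ceiling of the coding a subtitler would ever need, which supports David’s point: an understanding of how the tooling works matters more than the ability to build it.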

Florian: I completely agree. I think it might even be overloading things a little if it were compulsory, because it’s hard enough to become really good at two or three or four languages, right? And as these systems get better and better, thanks to people who know and love Python or other programming languages, the language professional needs to get better and better to still add that layer on top. And there’s only so much time during an MA or a BA. All right, let’s talk a bit about your roadmap. What’s on the roadmap for this year? Research projects, collaborations in the works, what’s planned for 2023 and beyond?

David: I’m working on an eye-tracking project right now with teams in Norwich, Warsaw and Sydney, assessing subtitle reading. We are trying to isolate the different variables that affect the subtitle reading process, to understand how people watch subtitles and how we need to coordinate those variables. What we found is that there is a fair amount of eye-tracking research in subtitling, but we know very little about the foundational research on subtitling itself; a lot of it was done in the 80s. So we’re revisiting this to assess how we can offer better subtitles for the audiences we have right now. That’s one of the things I’m doing, and I’m also running a project on machine translation as used by local authorities, NGOs and charities in the region, in the Midlands, in the UK. We want to see how these organizations, which serve wide portions of the population, use and implement machine translation, and whether there is a way to develop policies or white papers that can help them better assess and integrate these resources to benefit the community.