SlatorPod #195 - Unbabel CEO Vasco Pedro on AI Impact and Scaling LangOps

Vasco Pedro, Co-founder and CEO of language operations platform Unbabel, joins SlatorPod to talk about the transformative year of 2023 and its impact on the language technology landscape.

Vasco discusses the AI boom in 2023, emphasizing the importance of recognizing AI as what he calls a cognitive prosthetic rather than a human replacement. He highlights the need for a new pricing model in translation and localization that accommodates AI solutions and allows for transparency, predictability, and agency for LangOps professionals.

Touching on multilingual content generation, Vasco believes that while AI is making content creation easier, there is still a gap in quality between content creation and translation.

Vasco emphasizes the importance of owning the entire value chain and reducing the complexity of translation processes to provide efficient solutions. He touches on some of Unbabel’s features, including UnbabelMT for comparing machine translation options.

Subscribe on YouTube, Apple Podcasts, Spotify, Google Podcasts, and elsewhere

The CEO outlines Unbabel’s strategic acquisitions of EVS and Bablic, with the goal of expanding into new markets, acquiring clients, and integrating Unbabel’s technology effectively.

Vasco talks about taking part in the AI consortium in Portugal’s Recovery and Resilience Plan, and how Unbabel is involved in projects such as Project Halo, which aims to develop a brain-to-computer interface for text-based communication with practical implications for conditions like ALS.

When looking towards the future, Vasco emphasizes the importance of focusing on growth and continually improving the product’s quality, speed, and cost-effectiveness.


Florian: Today we welcome back Vasco Pedro. Vasco is the Co-founder and CEO of language operations platform Unbabel. So much has been going on now in 2023. I just told you before the podcast, I’m kind of glad I’m an observer and not a builder in the space. So tell me a bit more about how this crazy year of like 2023 was for you.

Vasco: I mean, it’s been quite a ride, right? We’ve had ups and downs. We bought a couple of companies. I think we are much more focused on the right strategy, on a path to profitability, which for a tech startup is both a result of the times and a good place to be. We just announced we raised some more capital. We’ve done a couple of restructures, realigned the company. It’s been a mess, but it feels like par for the course.

Florian: Yeah, so about a couple of years ago, I think two years ago was the first time I heard about the LangOps platform, kind of a term you guys coined. Just tell me more about the concept and kind of what’s been happening there, your progress.

Vasco: There are two things that have been leading me to believe that we needed a new way of thinking about this problem. One is a fundamental new technology that is creating a lot of disruption, which is AI, and we’re seeing that. I mean, it hasn’t started today, but I think it’s picking up steam in the way that people are going to tackle different parts of this problem. And so, as with a few other areas, whenever you have a fundamental new technology, you kind of need to start thinking about solutions in a different way, and I think that’s the case with AI. So for me, the equivalent was DevOps, where before you had in-house servers, and then you had cloud computing coming along, and you had the evolution of sysadmins into DevOps. And I think it’s the same in localization, where you have localization managers evolving into LangOps. And part of that is the need and ability to think from an AI-first perspective on how to solve the problems. It doesn’t mean AI-only, but it means AI-first. And at the core of this are two changes that make sense to explain on this podcast, but are maybe a bit too specific for other places. The industry is relying on two different pieces. One is project-based: TMSes, until now, have always been thought of from the ground up as project-based. You go in, you create a project, you do your project, the project’s done. I think the world has really shifted in that sense, and is moving into a stream-based component where content is constantly being created and updated. And so you need to think more in terms of translation pipelines, and how do you have streams of content being translated with a certain SLA for time, quality, et cetera. And the other is that localization is really built on the idea of a human-translated word. For example, the pricing models for a TMS or for an LSP are typically: here’s the cost of a human word, and then technology enables you to discount from that, right?
So it’s a discount policy based on the base unit, which is the human word. But I think the world is shifting to an AI-first way of thinking, which means that the vast majority of words in the future will be translated by AI in terms of percentage, with humans being part of the process and in some use cases being more involved. But it means that you need to have, for example, a pricing mechanism that enables you to charge for AI solutions and then keep adding stuff, like, okay, well, if you’re doing this by translating with MT only, it should be much more cost effective, and then if you need to have human components on top, you can add as many components to do different things. And so the pricing model of Unbabel is much more like that, right? It’s consumption-based, but with the idea that we expect a large and growing number of pipelines to be AI-first, and then what we want to do is essentially provide transparency, predictability, and agency to the LangOps. Meaning the LangOps are the people at the center of the platform. They’re the ones that can orchestrate: what is the best way to solve a particular problem? What is the pipeline? What are the modules? How do we go about that? I think the LangOps platform is kind of the third component of solving this problem, so I think anyone that’s going to solve this problem will need three components. One is the actual translation engine: how do I get a word translated? What are the internal components? How does that happen? The second is the input/output problem: how do I get the content where it is and put it back, right? So integrations, plugins, file types, et cetera. And the third is the orchestration layer: how do you create a platform that enables the person that is actually tasked with solving this problem at an execution level to have the operational capability to deploy different strategies, whether AI or human, and verify and scale up, et cetera.
And I think that maybe in the end, we’ll be called something different, but I think it does need a new name because I think localization is kind of stuck in the 20th century and we need a 21st solution to this problem that needs to be thought from the ground up with AI at its core.
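The pricing shift Vasco describes, from discounting off a human-word base rate to consumption-based pricing where AI translation is the base unit and human work is an optional add-on, can be sketched roughly as follows. This is a minimal illustration only; the module names and per-word rates are invented, not Unbabel's actual model:

```python
# Sketch of consumption-based pricing for an AI-first pipeline:
# an MT base rate, plus optional add-on modules (QE, human review)
# that each carry their own per-word rate. All rates are hypothetical.

BASE_MT_RATE = 0.001  # per-word cost of an MT-only pipeline
MODULE_RATES = {
    "quality_estimation": 0.0005,
    "human_review": 0.04,  # humans are an add-on, not the base unit
}

def pipeline_cost(word_count: int, modules: list[str]) -> float:
    """Total cost of pushing word_count words through the pipeline."""
    per_word = BASE_MT_RATE + sum(MODULE_RATES[m] for m in modules)
    return word_count * per_word

# An MT-only stream is far cheaper than one with a human on top,
# which is the cost structure the consumption model makes explicit.
mt_only = pipeline_cost(10_000, [])
with_human = pipeline_cost(10_000, ["quality_estimation", "human_review"])
```

The point of the sketch is the inversion: the human word is no longer the base unit being discounted; AI output is the base, and each human or QE module is priced on top of it.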


Florian: 100%. I also agree that the terminology is shifting now more than I’ve ever seen in the past 15 to 20 years. Especially this year, there are so many more people coming in that don’t know these old or historical terms, right, and it’s probably a good time to relaunch that. So that brings me back to 2023, and this AI boom, with large language models really capturing the broadest possible audience’s imagination. What were some of the key positives and perhaps negatives that you saw in your business from this?

Vasco: I think the main positive is awareness, right? So we started Unbabel with this idea that AI was going to have a huge impact, and it’s going to continue to evolve and have a bigger piece of the pie, and that’s definitely happening. But I think until people started experimenting with LLMs and really seeing them, there was more hesitation. There was more like, oh, this AI thing? And I think now everyone’s like, yep, what’s the AI strategy, right? So, for example, the head of localization at a big company whose name I’m not going to name was telling me that she was trying to get a budget approved, and went to the leadership team and said, hey, look, I’m deploying a LangOps strategy, and because of that, I’m going to rely more on AI. And that was a game changer in how the budget was approved, like, yeah, we’re moving to the future. So that awareness, and part of that is the awareness of AI, has definitely been positive. I think the negative is that it’s easy to then think, okay, a bit like when Google Translate came out in 2017, problem solved, right, we figured out translation. It’s easy to make the same mistake, and people are like, oh, wait, we solved this problem now. Now it’s solved, right? And that’s really underestimating the complexity of actually solving this problem at scale in an enterprise setting, which has a lot of different nuances and is not as straightforward. So I think that could be the potential negative. When ChatGPT came out, for us and everybody else, there was a moment of like, damn, does this mean the end of localization as an industry, right? Everyone was like, oh, wow. And I think that was an important moment for us to reevaluate. Okay, does this still make sense? Where do we go from here? And I think what most people are seeing is that AI is essentially a cognitive prosthetic, right?
So it’s really enhancing a lot of human ability to do different cognitive tasks, but it’s so far not really a human replacement, and so that is creating opportunity more than detracting. In any market where a fundamentally new technology comes along, you either have a large unmet need, in which case you just see an explosion. For example, with online accounting tools, there was such pent-up demand for accountants, there was so much friction, that the number of accountants actually exploded, right? Like, it was great. But with horses, the demand was met with cars, and suddenly there are way fewer horses, right? And so where are we now, right? How much pent-up demand is there for translation? Typically, what we were saying before is, look, if you put together all of the human translators in the world, they’re not able to translate even 1% of the text that needs to be translated. Certainly when I look at, in practice, how many companies are at the level they should be to maximize their access to different markets through multilingual approaches, we’re very much at the beginning, right? Only large organizations really have sophisticated localization departments. There’s a lot of room to grow, and so if that’s the case, then AI is going to enable that explosion of translation needs, because you reduce cost, you reduce friction, and you increase usage. Typically, that’s the case. If we were actually at a point where we have all we need, then that would be a problem, but I don’t see that, quite the opposite. One of our core use cases is customer service, which has been essentially the most exposed to AI, and we’re seeing consumption of human translation there grow 40% a year. So I’m seeing much more of the, wow, companies were starving, really. Companies really only translated things that were so important that it was worth dealing with all the hassle, right?
So if you reduce the friction, if you reduce the hassle, we should be translating more, and in fact, I think that’s what’s going to be happening.

Florian: I agree. I mean, I have been saying now for eight years, if I could click a button and translate Slator.com into 20 languages without any quality concerns, I’d probably do it. But it’s a hassle, and even in 2024, it’s still going to be a hassle. I’d be in a world of pain if I talked to the translation community about something in their language and it didn’t work. So, yeah, it’s not a solved problem, despite being considered one by the general public, I guess.

Vasco: I think that’s a great point, right? Websites are probably a great example because they’re complicated, right? It’s like an amalgamation of things. Part of your website is in the code, another part is in a database, then it’s in a content platform somewhere, and then you’re bringing in stuff from social networks and so on and so forth. So it’s really hard to keep that constantly updated at the quality that you want. But it’s also the thing that companies usually start with, right? And so it’s interesting that even the first thing that companies do is not solved by any means, right? It is a pain. I mean, even for us, our website is translated and it’s a pain, and we are a translation company, right? So I totally feel that.

Florian: That’s the ironic part. Now, you guys own the full supply chain and the technology, and you’re an AI-first platform. How do you see yourself positioned going into the consolidation phase of this AI boom in the next couple of years? Because you own the whole thing, you don’t have to rely on too many third parties when it comes to technology. So, yeah, how do you see yourself positioned there?

Vasco: I think there are going to be four or five companies in a position to really take advantage of the consolidation, and I think we’re one of them. There are the three components that I mentioned. Any company that solves this needs to solve those three components, and that represents a large surface area, with AI certainly at the core. Actually, I was thinking one of the downsides of the current stage of AI is that we’ve invested, I want to say, about 60 million in developing the AI that we have, something like that. And I think now challenger companies can do it much more efficiently, because you can almost plug and play ChatGPT and get a big bang for the buck, and so that creates more competition. But in the end you still need the integrations, the plugins, the platform, everything else, the ability to have humans translate and annotate and evaluate and so on. So our goal is to create a platform that really eliminates friction. Our goal is for you to be able to go and say, yeah, I want slator.com in 20 languages, click the button, right? And you can only do that if you actually own the entire value chain. Our acquisition of Bablic is exactly to start tackling websites, right? I think Bablic is a great piece of software for website translation, but it lacked the back-end, right? It lacked the ability to integrate tightly with a translation platform and offer an all-in-one solution. I think that’s where we’re going, and for that we need to significantly reduce the complexity of dealing with the problem while enabling the outcome to be at the level that you need it to be, from a quality perspective, cost, speed, et cetera.

Florian: Now, you mentioned that friction-removal layer, as it were, which is very industry-specific, localization-specific or LangOps-specific. What are your thoughts about the more foundational components, where there’s this big battle going on between open source and closed source, between Meta on the one hand, with Llama and all these other things they’re launching like Seamless, and the OpenAIs and Coheres on the other? Where do you stand on this, and do you feel betting on open source is the future? Or do you think maybe connecting to an API for certain features or functionalities is better?


Vasco: If other industries are an indication, I think there will be space for both. We’ve launched our own first LLM, called Tower LLM. From a language perspective, our bet is on smaller models that are more efficient and scalable for specific tasks. What we’re seeing is that ChatGPT is great in a lot of ways, but it’s very hard to scale, right? It’s unreliable, the requests fail a lot, it’s expensive, and part of that is because really large models are expensive to run. But once you start moving to the Llama 7Bs and others that I’m sure are going to come that are just more efficient to run, you can probably fine-tune them for specific tasks and get very efficient outcomes. I don’t have a specific point of view of, oh, OpenAI will win or Meta will win. A few years ago, if you had asked me, it would have been like, oh, we’re so far from having any sort of path towards AGI that who knows, right? But then ChatGPT came along, and I certainly didn’t expect it, and to be honest, no one on my team, no researchers that I talked to, was ever like, yeah, we totally saw that coming. And we were already using language models at the time, so we were already using the technology, and we were still surprised by the scale and the impact, right? Now we’re seeing the Q* rumors at OpenAI and GPT-4.5, and it’s very hard to predict, right? Like, is there a threshold where suddenly agent-driven AI really starts taking off to the point where it changes everything again? The prediction of Ray Kurzweil was really about the exponential iteration of technology, right? Kind of Moore’s law applied to the evolution of AI, meaning the doubling of capacity should take half the time it did before, and if that’s true, then we’re going to see evolution so fast that it’s really hard to predict. The difference between ChatGPT and GPT-4 was less than a year, right? And if 5 comes out and it’s an equally large delta, it’s very hard to predict.

Florian: What makes it even scarier is all these robot tweets I’m seeing. They’re plugging the language models into actual physical robots, and they’re doing things. And I think Elon tweeted out something with this new, I forgot the name, but an actual robot walking around.

Vasco: So, the robots walking around, it’s been interesting, because I was reading an article today that made a great point, which is that we’ve figured out robots walking around a little bit, but it’s still bipedal. Like, biological mimicry of humans might not be the best solution for this. Meaning, for example, if you have a robot carrying a box of steady stuff, okay, we can handle that. But if you have a robot carrying a bowling ball inside of a box, it will fall. Because in robotics right now, we need to reduce everything to sub-10 dimensions to be able to manage the processing power, which means that we’re not nearly leveraging the number of different forces that our body does. I mean, the number of different little muscles that you have to balance, and the way your head positions in a certain way to counteract, and we can do that really well; robots, less so, right? And it’s really hard, the hardware needed, the AI needed. I think Amazon has, what, the most robots deployed in factories? But I think we’re probably going to see more robots with wheels, or other ways of tackling the problem, than bipedal robots that are very humanoid in nature. We’re very excited about them because it’s like, oh my God, sci-fi, right? But when you actually look at the actuators and the ability to deal with a lot of varied tasks, autonomous vehicles should be much easier, and they have way fewer variables, and we still don’t really have them to the extent that we thought we would, right? Will we get there at some point? Sure. If you project long enough, 100 years from now, will we have humanoid robots? I’d be surprised if we don’t, right? But I don’t think it’s going to be in the next five years.

Florian: Now, one of the things these robots will have to do is speak and listen and all of that, and there’s been this huge push for multimodal in translation AI, for lack of a better term. How do you guys think about this? Because to me, apart from LLMs, this is one of the biggest breakthroughs this year, that we’re seeing voice, and emotions in a voice, and all of that. Is that something you guys are playing around with, or are you, for now, focusing mostly on text?

Vasco: We’ve played around a little bit with it. I don’t know if you saw this new project that we launched, Project Halo, which started out very connected to translation and is now evolving in a different direction. That’s been a foray into something completely different, and as part of that, we’re also doing voice morphing and a few other things, and playing around with the interaction there. But in the core business of Unbabel there’s so much to do in actually solving the problem, and so I tend to be a bit stubborn on, like, it’s not solved yet, let’s just keep going until it is. And I think we’re getting so much further now with everything that’s going on that I want to make sure we don’t defocus, right? So I think that’s an important bit.

Florian: What do you think about, I’ll phrase it this way, multilingual content generation from scratch? And if somebody has a better term, please let me know. But having some type of prompt or whatever, and then clicking and having the model go off and write content in 20, 30, 50 languages. How do you see this as an opportunity for LangOps, and maybe as a partial replacement for, or addition to, certain translation volumes?

Vasco: Everything I’m seeing right now is that there is a rapid evolution of content creation leveraging LLMs and gen AI, but there’s still a clear separation between content creation and content translation in terms of who’s responsible for doing it in a company. And the main reason for that is, if you think about it, when you create content using ChatGPT, it’s very rare that it’s a zero-shot affair, right? You put in a prompt, you look at the result, you tweak it, you go back, you say something, and then you have a couple of iterations, and then you get something that you like. It’s much easier than before, but it still isn’t prompt, done, right? And so whenever that happens, if you ask that person, okay, now do this in another language, one they don’t speak, they don’t want to do it, because they say, well, I had to tweak the English version to get it where I wanted, so I’m sure it’s going to be the same in the other language, but now I won’t know what’s wrong, I’ll just know that probably something is wrong, because it was kind of wrong in English, right? And so until we get to a point where we’re very comfortable with zero-shot content creation, I think we’re going to see a separation between creation and translation. Somebody else needs to be responsible for the translation. Once we get to a point where it’s like, hey, I’m actually super comfortable, I just press a button and it creates in English, then, once there’s no human involved in content creation, I think we’re also going to see a much bigger push for having no human involved in content translation. But for now, I think we’re not quite there yet.

Florian: But conceptually, and also practically, you don’t see this eating into the translation pie? Because initially, to me, that was the obvious thing that was going to happen. Like, a year ago I was thinking, okay, you click a button, forget the translation. But the more I thought about it, the less likely it seemed, because you actually do need the translation for this specific thing, otherwise you wouldn’t order it. So you haven’t seen that either? The separation persists.

Vasco: No, I actually think it could go both ways, right? It could actually create more need for translation, because if creating content is easier and people do more of it, there’s going to be a content explosion. And if there’s a separation between content creation and content translation, then you’re actually going to have more need for translation. So even if the percentage of the translation effort handled by AI is higher, you’re still going to see more translation overall, right? And that is what we’re seeing so far.


Florian: Let’s geek out a little bit on quality estimation because you guys have been one of the leaders in this space, which used to be an absolute niche, and now it’s gotten a lot more, I guess, prominence over the past 12 to 18 months. How do you see quality estimation? Where does it land? How does it route things around? Is it a feature, an enabler, a core thing?

Vasco: Yeah, and by the way, we’re still an absolute leader in this space. No one has been able to beat the quality of our quality estimation engine, by far. What we’re seeing, and there’s a debate internally, is that we’ve chosen not to open up our QE models as an API and as a business model. And we’re seeing other companies coming in and stepping into the void, like ModelFront and others. We’re always happy to do a head-to-head comparison. I think that would be interesting, but the quality estimation task is a good proxy, and I think we’ve won that six out of the seven years, something like that. Now, I think quality estimation, one way or another, will be an essential part of any translation pipeline, because that’s really the thing that enables you to determine the level of human involvement you need for a particular task. And it’s the ability to do that in real time, for every job, to be able to say, yeah, I’m pretty confident this is great to the level that we’re trying to get to, or, not so sure, let’s get a human to come in. That is really what enables you to capture a lot of the value of AI, because without quality estimation, you’re in a binary situation. You either always trust AI or you always put in a human, and that negates the benefits of the human-in-the-loop approach, well, at least most of them. I think there’s still a long way to go in quality estimation. The best QE system that we have internally right now still uses neural models, kind of the equivalent of neural MT, for doing the discrimination. But when you pair it with a gen AI model, you get not only the best accuracy, but also high explainability of what the errors are, right, which is something that we didn’t get with neural MT.
We were like, okay, we know this is an error, but now gen AI will actually tell you, this is what the error is and this is why we think it’s an error, which is super valuable for humans because you can then immediately see it. You’re like, oh yeah, I agree with that, that is an error, and it gives you a way of understanding how to correct it. And I’m seeing that all TMSes right now are trying to integrate QE modules, because people are going to use AI more. I think what we’re seeing is that QE is very highly dependent on the quality of data, and part of the advantage that we’ve had so far is that we have 10 years of data deeply annotated by linguists, specifically designed for quality estimation, right? And that data is making a big difference. We’ll see what happens in the future. But in terms of the size of the market, if you’re just trying to sell QE, in the big scheme of localization it’s not a big chunk of the market. So I don’t expect QE alone to be a billion-dollar business in 2024. But I think it’s going to be a required feature for any platform that wants to solve the entire problem. Without it, you’re not enabling a big chunk of the gains that you can get by leveraging gen AI.
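The routing role Vasco describes for quality estimation — score each job in real time and only pull in a human when confidence is low — reduces to a simple threshold decision. A minimal sketch, where the cutoff value and the stub scores are assumptions for illustration, not a real QE model:

```python
# Sketch of QE-based routing in a human-in-the-loop pipeline.
# A real QE model would score each (source, translation) pair;
# here the scores are given directly so the routing logic is visible.

QE_THRESHOLD = 0.85  # hypothetical confidence cutoff

def route(qe_score: float) -> str:
    """Decide whether an MT output ships as-is or goes to a human."""
    return "deliver" if qe_score >= QE_THRESHOLD else "human_review"

# High-confidence jobs ship directly; low-confidence ones get a human.
# This avoids the binary "always trust AI or always pay a human" choice.
decisions = [route(score) for score in (0.95, 0.60, 0.90)]
```

The threshold itself is the lever a LangOps team would tune per use case: raise it for high-stakes content, lower it where speed and cost matter more than polish.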

Florian: And you want to kind of keep it close to your chest and powering your system as opposed to kind of giving it away as an API.

Vasco: Yeah, we’re evaluating it. In a way, a language operations platform should behave a little bit like AWS, right? It should be like, hey, look, you can come to Unbabel, create an account, create translation pipelines, but also, if you just want to create a QE pipeline and use only that, or a pipeline that uses just one of the services, you should be able to do it, right? And then, like AWS, you don’t have to use the entire ecosystem. You can use just S3 or EC2, and then if you want to use the rest of the components, great. Or if you want to use your own components, great as well. What I’m hesitant about is creating this entire other go-to-market motion to try to push QE, which would have a different pricing model and require a lot of energy, right? I would rather keep it elegant: hey, come to Unbabel, create an account, and then use any of the services within the consumption model that we have, and go for it.

Florian: Where do Unbabel MT and Unbabel Qi, I guess it’s Qi, fit into this? Because I went to the website and you have these options. For example, Unbabel MT lets you compare a number of off-the-shelf MT options for free, and then you have Unbabel Qi as well. Tell us a bit more about those two.

Vasco: Yeah, so Unbabel Qi is quality intelligence, and it’s basically a way for you to demo QE, right? So QE is something you use internally within a translation pipeline, and Qi is kind of a demo around it, where you can go in and try it and see the output and see how it works. As for the MT comparator: before, we used to just use Unbabel MT, right? And what we realized is that, in many ways, MT is a bit more commoditized, and it’s sometimes not worth going out of our way to build a giant model for a language that maybe isn’t that big within the platform. And so that led us to think, okay, look, we need to open up more and let people use whatever MT they want, right? It’s a bit like, if you want to use DeepL inside of Unbabel, you can. You go in, you add your DeepL, that’s it, or any other model. If you think Unbabel is better for that particular use case, great, right? We have a lot of data in customer service, so it’s hard to beat us in customer service. But what we realized is that the value is in how we create this platform that enables you to do whatever you need to do efficiently, scalably, with low friction. And so the MT demo enables you to compare different MTs, or even to say, look, we’ll pick, on a sentence-by-sentence basis, the best output of every MT, since our QE is able to discriminate between them. What we’re trying to do is create a platform with very low friction, so if you’re just starting out and you have your website and one or two languages, you should be able to come in and, without a lot of hassle, just get things going. But it then has kind of progressive disclosure to take you all the way to a very complex deployment in an enterprise setting, where maybe you have a team of LangOps and you have different roles and attributions and customer reviews and everything else that you need to scale it.
And that enables you to do this across all use cases, all language pairs, and so you should really have a platform that gives you agency and visibility and control over your deployment.
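The sentence-by-sentence selection Vasco mentions — keeping the best output of every MT engine because QE can discriminate between them — amounts to an argmax over QE scores. A rough sketch, with made-up engine names, translations, and scores:

```python
# Sketch of sentence-level best-engine selection: for each sentence,
# keep the candidate translation with the highest QE score.
# Engine names, outputs, and scores are illustrative only.

def pick_best(candidates: dict[str, tuple[str, float]]) -> str:
    """candidates maps engine name -> (translation, qe_score)."""
    best_engine = max(candidates, key=lambda name: candidates[name][1])
    return candidates[best_engine][0]

# Two hypothetical engines translating the same sentence:
sentence_candidates = {
    "engine_a": ("Hallo Welt", 0.91),
    "engine_b": ("Hallo, Welt!", 0.97),
}
best = pick_best(sentence_candidates)  # engine_b wins on QE score
```

Run per sentence across a document, this turns a pool of commodity MT engines into a single composite output, with QE acting as the referee.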

Florian: It’s so hard if you start at the small, initial end, okay, I’m interested, I onboard that client, it’s one person, kind of a marketing manager, and then you go all the way to these giant translation and localization programs.

Vasco: It is, but we started from the top, right? And so for us it’s more about simplifying down. I see companies in roughly three stages. The first stage is essentially what you’d call a deputized LangOps, right? It’s the marketing manager. It’s like, hey, you’re now responsible for translating the website, and they go in, and we see this over and over: people always underestimate the complexity. You go, oh, it should be easy, right? Like, next, next, finish. And then they go through the process and they’re like, oh, damn, this was way harder than I thought, right? But they have one language, so they stick with it, and you have this ad hoc type of thing, and then the company grows, and now they go, well, we now need, I don’t know, our sales collateral, and maybe the marketing material as well, and we start going into more languages. And clearly the marketing manager says, hey, this is no longer my job, actually, I’m supposed to be doing marketing, not translation. And at that point, kind of phase two, companies either hire their first localization person or they outsource it, right? They get an LSP and say, look, you take care of everything for us. And then phase three is when they say, no, look, we’ve grown big enough that this is a complex problem, we need a solution that is able to deal with it, and they have a localization team, hopefully in the future a LangOps team, that is able to. And if you think about it, both the first phase and the third phase are self-service, right? If you have your own localization team and it’s complex, your people are going to want to do things themselves. They want to be able to create the pipelines and figure out what’s wrong and how to fix it, and if you’re just starting out, you also want to do it yourself.
It’s kind of the meaty middle where you might opt for a third-party company to manage it for you or to do it in-house, but you kind of need to have this progressive disclosure if you’re going to solve the whole problem. In our case, we started with kind of the big guys and we’re progressively bringing it down, simplifying and iterating. But in a way, Mark Twain used to say, right, “apologies for this long letter, I didn’t have time to make it shorter”, and simplifying is an act of iteration, of getting it better and better. So everything you do that makes it easier for a solo deputized LangOps to use it is also going to make it easier for the people in more complex operations, and vice versa.

Florian: Now, one thing that doesn’t make life easier for a CEO is M&A, and this year you acquired EVS, a financial translation company, and Bablic, which you mentioned before, for web localization. So tell us a bit about the rationale and also the integration challenges, ups, downs, et cetera.

Vasco: So two very different acquisitions. EVS is very much in line with Lingo24 in the rationale, and Bablic was very much a product acquisition. I mean, I thought the Bablic team had just amazing website translation software. We felt it was such a no-brainer to integrate it with Unbabel and have a full solution to get to this holy grail of hitting a button and having your website translated. I mean, I think that’s what we’re aiming for post-integration, and that’s going really well. I think for Lingo24 and EVS, it’s similar strategies, which is that acquisitions give us the ability to access, one, new markets much more easily, two, new logos, and three, to really bring what we’re building more efficiently into those logos, right? And so for us, we’re looking at LSPs that typically have limited ability to really bring in technology and deploy it into their customers, in a moment where customers very much want to be exposed to that technology, right? Everyone’s like, hey, how am I going to make this more efficient or scale it more? How am I even going to use my budget to do more translation than before, or reduce my budget? Those are a lot of the conversations that those companies are having. And we’re like, we can actually have a win-win situation here where we can really superpower what you guys are trying to do with your customers, and for them, it’s going to be a better experience.

Florian: Now let’s switch topic to something called the AI consortium in Portugal’s, and I’ve got to read this, Recovery and Resilience Plan, the PRR. I was intrigued when I saw this. I think this came out earlier this year or maybe even late last year. So tell us about that consortium and your role in it.

Vasco: Yeah, so the PRR was basically, post-Covid, the European Union plan to help each of the member states, right? And so there was, for a few countries, an allocation of European funds to a bunch of different initiatives to kind of restart the economies and get things going. In the case of Portugal, it was quite a big number for the size of Portugal, and a part of that ended up being for technology initiatives. And what we realized was Portugal had a number of AI initiatives going on, a lot of different AI companies, really good AI talent. We were definitely punching above our weight, and there was this opportunity to think of Portugal potentially as a place where we could create a world-class AI center focused on responsible AI, which is something that we care deeply about. Between us and Feedzai and a bunch of other startups, I think it has 23 startups and six large companies and a bunch of research institutes, we all came together and said, hey, what would it look like if we actually tried to get resources to work on the common underlying infrastructure pieces that would enable us to create AI that is more transparent, that is more fair, that is less biased, what we call the pillars of responsible AI? I think we started working on this maybe a couple of years ago. This was the first year of the consortium working, and the way it works is basically there are pods for different areas, and on top of the common technology that has been created, there are, I think, 20-something products that are expected to come out of this consortium, all within the framework of responsible AI. One of them is Project Halo. Unbabel has been leading the consortium. Paulo Dimas, who is my VP of Innovation, has done an amazing job of bringing it together, but we think it’s an opportunity for Portugal to really do something meaningful in this space. 
And it’s interesting because we started this before ChatGPT came out, right, and then ChatGPT came out, and it just exploded the whole responsible AI conversation, so it was very timely.

Florian: Let’s talk a bit more about Halo. As far as I could understand, it’s a brain-to-computer interface for text-based communication. Sounds very science fiction-y, but what are the practical implications?

Vasco: So we started with this idea of cognitive amplification, which was kind of what I was thinking, and, a bit like Elon Musk and Neuralink, this idea that our interaction with AI is very limited in terms of bandwidth. You write and speak at, let’s say, 120 words per minute, but your brain is processing the equivalent of, like, four 4K movies per second of information, right, so it’s a big delta. And if we could actually create bridges to AI that were closer, that were higher bandwidth, humans would benefit a lot. But I’m not quite ready to put an implant in my brain, and so we wanted to do something that was non-invasive, right, so we started looking at non-invasive neural interfaces. Initially, we looked at EEG, but EEG is still very noisy, the skull creates a lot of noise, so we moved to EMG. And then with LLMs, we started saying, well, how do you actually leverage the EMG interface to interact with an LLM? The first goal was to enable you to speak silently. When people hear this, they think we’re reading thoughts, but it’s not quite the case. What’s going on is, the same way you can think a lot of stuff but choose what to speak, right, your verbal communication is a communication channel, this is just another communication channel that you choose how to engage with. And in the process of doing this, what we realized is there was an immediate, very obvious problem that we could solve on the way to the long-term vision, which was the case of patients, like ALS patients, who have lost the ability to control their bodies and, with that, their voice. Stephen Hawking, probably one of the most famous examples of an ALS patient, was able to communicate at two words per minute. 
With the current state-of-the-art technology for ALS patients, which is kind of eye-tracking stuff, you get maybe eight to 10 words per minute, and what we realized is, with Halo, we’re already at 15 words per minute. And so we said, look, there are a lot of people in pain. We started collaborating with the Champalimaud Foundation, which is here in Lisbon, and with the Portuguese association of ALS patients, and we said, why don’t we try this and see if it actually has an impact? And it was amazing, because basically the way it works is you ask me a question, and I hear your question. If you’re in the same room, I hear it right away, of course. But the LLM, which is trained on you, is then trying to predict what answers you may want to give. And then you use the neural interface to navigate what the LLM is going to respond, and you choose, okay, yeah, that’s what I want to say. And because a lot of the time we still have samples of the patients’ voices from before they lost them, we can actually synthesize the answer in their voice, so it’s kind of like bringing back communication with their families. We demoed the first version at Web Summit in November on the main stage, and it was crazy. We now have, I think, three patients using it right now. The results are incredibly encouraging. For the families it’s a game changer, because it requires much less effort with a much bigger output. For example, if someone asks, what would you like to eat? If you say meat, it will actually construct an entire sentence around it, right, so “I would like to eat meat”. You don’t have to say the whole thing. There are a lot of advantages. The first version was mostly about interaction with the LLM and how you navigate that. 
The second version is starting to enable you to initiate conversations, kind of like mental typing, enough that the LLM can expand what you’re trying to say. But the results, it does look a bit like magic. When you see someone using it, it’s like, wait, they’re kind of responding telepathically, but that’s really what’s going on inside.

Florian: Tell me just briefly how you capture the signals. You said, what, EMG? I’m not an expert in that. What does that mean, and do you have to wear something?

Vasco: Yeah, so EMG captures the signals that you’re sending to the muscles. The demo that I did has the sensors in the forearms, and you don’t need to actually move your arms, but you need to send signals as if you want to move your arms, right, and so there are actually biosignals going to the muscles to initiate movement, and that’s what it’s capturing. For ALS patients, because those muscles tend to be some of the ones that go away quickly, we’re using a headband with a sensor here, and that actually works quite well.

Florian: I understand, so they’re basically signaling in a binary way, but they’re getting options presented.

Vasco: Yeah, so it’s a bit like navigating a menu. You can navigate the options that the LLM is presenting of things that you might want to eat or ways you might want to respond. And then the new version also enables you, if the option isn’t there, to start typing something, right? And the LLM is constantly learning about your environment and who you are and the people you’re talking to, so it’s getting better at predicting the responses you might want to give. My co-founder’s mother, unfortunately, died of ALS, so for him, it’s very personal. It was interesting, heartbreaking, hearing him describe the last year, where they would be with his mother and the only way they had to communicate, because with the eye-tracking stuff the eyes get really tired and it’s really hard to keep focus, was to resort to showing letters one at a time, and she would blink to say, this is the right letter. But if they then made a mistake, there was no way to tell them no, and it’s extremely frustrating because it takes half an hour to put a word together, and it’s terrible.

Florian: Wow, but that’s an incredibly important thing to solve, or to kind of advance, I guess.

Vasco: It is. We’re very excited about it, and the team is too. I think we’re going to see what we’re going to do in 2024. We’re thinking about spinning it off on its own. It needs its own resources. We’ve done amazingly with it inside of Unbabel. It started with this idea of, oh, can we kind of tap into language centers and translate things directly and so on. But it has very clearly evolved into a product that has its own space in the market and needs to come to market. And so part of the challenge in 2024 is figuring out, how do we do that, right? How do we actually bring it to market in a way that benefits the people whose pain it can solve?

Florian: And in the core business, what are some of the things on your roadmap, initiatives for 2024?

Vasco: I’ve been telling the team, I want 2024 to be a boring year, maybe a little bit less of an emotional roller coaster. We know what we need to do, right? Like, look, we’re growing the platform, we’re growing the use cases, we’re deploying it with customers, we’re acquiring. It’s just getting better at this, right? It’s like, how do we make the product always easier, more capable, constantly producing higher quality, faster, cheaper? It’s just rinse and repeat. There’s something about the point in a scale-up when you hit product-market fit and start to realize, not just product-market fit, but go-to-market fit, where you understand, this is how we’re going to grow. It becomes just about repetition, right? Just about getting really good at nailing the core components, and I think that’s where we are right now. So ideally, just doing more of this without a lot of emotional roller coasters.