Next Generation Localization Software with Phrase CEO Georg Ell

SlatorPod #180 - Phrase CEO Georg Ell

Georg Ell, CEO of Phrase, joins SlatorPod to talk about key moments since he joined the localization platform in May 2022, merging Memsource into the Phrase brand, and the opportunities presented by the AI-driven growth of content requiring localization.

Georg discusses Phrase’s product offering, which includes machine translation (MT) aggregator Phrase Translate and proprietary MT engine Phrase NextMT, where they provide customers with flexibility, neutrality, and solutions that align with their localization needs and varying maturity levels.

Georg unveils Phrase’s upcoming pricing structure revamp, set to launch in September, which aims to provide customers with streamlined access to the entire suite of capabilities across Phrase TMS, Phrase Strings, Phrase Orchestrator, and other features.

Georg shares a perspective on the future of a hyper-personalized internet enabled by generative AI. He discusses the importance of AI-driven workflows, complex custom configurations, and dynamically evaluated machine learning models.

Subscribe on YouTube, Apple Podcasts, Spotify, Google Podcasts, and elsewhere

Georg advises on how LSPs should embrace AI and MT. He also touches on funding in the language technology sector, and how they intend to invest in areas that align with the company’s growth strategy.

Transcript

Florian: Georg is the CEO of Phrase, one of the world’s leading all-in-one localization language tech platforms. Hi Georg, and thanks for joining. So you have a very interesting background, having been both on the operator side in technology and as an investor on the VC side. Just give us a bit of a view of your background and how you came to lead Phrase as CEO.

Georg: Thank you, so I’ve been very lucky, I think. I’ve had amazing jobs for wonderful companies. I started my career early on in technology at Microsoft and spent a number of years there, being early in the Enterprise Cloud sales team. Actually, one of the very earliest deals we did was with Novartis in Switzerland, so I spent a lot of time in Basel. In 2009, imagine doing enterprise cloud in 2009 with Swiss pharmaceuticals. At least a few of these wrinkles are from that time. And then I went to Yammer, which was a Series B startup at the time, and ran the European team there with an amazing group of people, like, really strong culture at Yammer. Went from there to another small startup in the video technology space, only spent a few months there before going to Tesla, where I initially ran the UK and then Western Europe from 2014 to 2018. So that was a very high growth, very innovative, very intense cultural experience. And then from Tesla to an education technology business in the UK, which was private equity-backed, so my first experience working with private equity as opposed to venture-backed or public companies. We sold that to an Australian company in 2021 and I’m still on the board there, and that’s actually a public business on the Australian Stock Exchange, so I now have a board seat at a public company. I’m an advisor with two venture capital firms, one in the UK with LocalGlobe, one in the US with Craft Ventures, and I’ve now sat on my second private equity-backed board as CEO with Phrase. So somehow it wasn’t planned this way, but, yeah, I’ve had experience with the different investor classes. But I’m 98% operator and 2% on the investment side, and mostly those are advisory roles rather than direct investment, so very much an operator by experience and mindset.

Florian: To take on the CEO role at Phrase, did that come in through the investors or just tell us a bit of a backstory there?

Georg: After selling the education company Smoothwall to this Australian business Family Zone, the whole thing’s rebranded now, Qoria, just to make it more confusing. But after that I was looking for a new role and so I did what everybody does and I started talking to recruiters and saw a number of opportunities, and what was then called Memsource was an opportunity that I came across. And that was really the first time I started to dig into the language industry and I became fascinated. It was one of those discovery experiences where the deeper I got into it, the more interesting and more interested I got in the whole subject. Because, as you and your listeners know, it is so multilayered, there’s so much wonderful complexity. And I got into understanding the culture of this industry and I got to know the people at Phrase, both at the board level and then at the company, and I fell in love, basically. And I’ve had the most amazing last year and a half. Now, when I’m recruiting people, I say, I’ve been at Phrase a year and a half. I’ve been in the language industry for a year and a half and I love it, and I tell them why and I talk about the culture and the people. And I think as an industry, because so many people in it speak English as a second language and do a lot of their work in a language that’s not their first, the nature of language means that it attracts people with a love of learning, with a high degree of curiosity and a love of teaching. And if you love to learn and to teach, then you have an immediate kind of respectful, curious rapport with people in the company, across other companies, with partners, with competitors, with our customers, and that’s unusual, actually. And maybe people who’ve spent 20 years in the language industry don’t appreciate it, or maybe they do.

Florian: I think you’re right. I don’t think they do because I’ve spent almost my whole professional life in this industry and I’m like, yeah, this is just normal.

Georg: It is special. Like a lot of other industries are pretty cutthroat and aggressive and that sort of thing. Look, of course there’s competition in this space, but it’s healthy and it’s respectful. I like it. It’s really nice.

Florian: Do you have maybe a couple of anecdotes from your learning curve over the past 18 months, and how different it is? I mean, you described that kind of environment now. But maybe just a couple of anecdotes or episodes that were particularly memorable, especially for somebody with your background coming from outside, looking at this mix of tech and services. And of course, now with AI, what were some of the moments you particularly remember from the past 18 months?

Georg: A really early memory was going to my first LocWorld in June and I had started in April, I think, so I was a couple of months in last year in 2022. So it was LocWorld Berlin and I was wandering around the hallway looking at how everyone seemed to know everyone else and I didn’t know anybody at that point. People would say hello in the lifts and like I said, it’s a super friendly place. Or one of my colleagues would introduce me to somebody and they would smile and shake their hand and then they’d kind of do that thing where they look at the badge and who are you? And they’d read my name, okay, I don’t know this guy, and then they would see Memsource CEO and they’d say, ah, the Memsource CEO, fantastic. And I realized what a brand Memsource had and how much credibility it carried and how much love there was for the company and its work. So I immediately felt like I’ve joined a great team because this is a company that people love, so that was a really affirmatory, confidence inspiring moment for me. And then I think there was another moment subsequently when we launched the Phrase Brand. And I stood up on stage in San Jose and I was talking about that, and we started to tell the story of the rebranding and why we did it and trying to make Phrase the name that people would remember and sort of I was joking a bit with the audience and repeating it many times and they laughed a bit and I thought, we’re making some progress here. We’re carrying people with us on this journey, so that was quite important. But look, I think I spent the first six months on a really steep learning curve, really super steep. And it was in San Jose that it started to fall into place for me as I was having customer meetings and I felt like I understood 60% to 70% of what I was talking about and what they were talking about and for the rest I was relying on my amazing team to educate me. And then in 2023, I just feel like I’m putting it all together and now I’m really confident. 
Like the vision that we’ve built here and the team that we’ve built I feel is super strong, and AI is absolutely the core of it, and I’m sure we’re going to talk about that because 2023 has been this invigorating maelstrom of a year around that subject. I think with GPT-3.5 in November, December, it was a party trick, it was all over Christmas parties and it was sort of interesting and cool. When GPT-4 landed, I think everyone went into a little bit of a tailspin. The industry in sort of April, May was really asking itself a lot of searching questions, and I think by June people had started to calm down and figure it out. And I’m very happy with how we sailed through that in a relatively calm way, and then we’ve built a great strategy around it, so those are some key moments.

Florian: I want to talk about the unified branding. You mentioned that the name Memsource, like the beloved name of that product that had been doing very well in terms of also the marketing of the name Memsource over a decade and now you decided to unify it under Phrase and this is something that we rarely see in the language industry. Typically people are very cautious in M&A and rebranding and kind of unifying and some companies kind of keep the original brand and then buy the brand that owns it, et cetera. Now, what kind of drove you to unify this and what were some of the challenges during that exercise? Because you did it well and fast and like with conviction, I guess, from the outside.

SlatorPod – News, Analysis, Guests

The weekly language industry podcast. On Youtube, Apple Podcasts, Spotify, Google Podcasts, and all other major platforms.


Georg: Firstly, thank you, that’s very kind. I would underline that word conviction. That’s actually a really great word, so I’ll use it and return to it. So actually, interestingly, the story starts with David Čaněk, the Memsource founder, because it was his decision to move the brand of Memsource to Phrase. And I was there the day he announced it because I was actually still in my interview process, and so I was in Prague, kind of in a back room, and having dinner with David that evening. And that afternoon he announced to the whole company that although Memsource had acquired Phrase, the combined entity was going to adopt Phrase’s name and retire the Memsource name. So I thought that was extremely brave and a great credit to David, and it shows the sort of low ego that he has and that we try to continue in our management team and our executive team and how we operate as a business. We’re a low ego culture, so that was the first thing. That’s the genesis. Now why Phrase? I think it was important to bring the two companies under one brand. We wanted to make sure that we were operating as one business, like seamlessly, no join inside the company. We didn’t want people inside the business to have two tribes, two identities. We wanted to be extremely respectful of the success of the people who’ve been involved in both companies historically, and we referenced that frequently, and we still do, but to move forward as one. I think he and the team explored a number of different options, but in the end, the answer was sort of staring us in the face, which was if you had a blank sheet of paper, Phrase is a great name. It’s a short URL, it’s a single syllable, it’s .com, it’s about language, it’s just recognizable, and we were able to play with the branding, and we used an outside agency for that. So I came in, I would say, halfway through that project, and we spent six months working on a lot of the details.
Key steps: so in the first couple of months that I was around, I noticed that the integration of the two businesses was about 80% complete, so the P&L was one, the teams were one. We still had two email domains, two websites. We had a lot of these sort of internal and external branding things that still created some divisions, so we said, we’re going to use the new branding internally from June. We’re going to use the name internally from June. We only launched it in September, and we were going to be very clear, with conviction, that we’re not going to refer to colleagues or products as legacy this, legacy that. We are Phrase. And so we do a weekly all-company meeting here every Friday, and we changed the branding of that to Phrase, and we changed all the internal software where we could, where it wouldn’t kind of give the game away in terms of working with suppliers and partners and customers. So we rebranded as much as we could internally in all the new colors and everything like that. So we started to live it before we announced it publicly, and we were just very clear: we’re going to do this with super conviction. And then we sponsored LocWorld, I stood up on stage there and I talked about “remember Phrase,” and I said it lots of times and people laughed. And that, I think, helped cement it in people’s memories. With any change like this, you have to repeat it, repeat it, repeat it.

Florian: With the conviction and the marketing push and everybody kind of behind it, I think it typically happens quicker than people might expect, right? I mean, you’d think, oh, this brand has been around forever and how could people ever forget? But over a couple of years, especially if it’s something like Phrase, phrase.com, I mean, memorable, an actual word, it makes it a little easier, right, than some complex compound. Also on the product side, you seem to have streamlined. I haven’t followed it super closely, but now we have Phrase TMS, Strings, Translate, Orchestrator, which I want to talk about later, Analytics, and NextMT. So you’ve got six kind of distinct, I don’t know, products, I guess, for lack of a better term for now. Now, can we just talk a bit about Translate? Like Phrase Translate and Phrase NextMT. So if I understand this correctly, one is more of a hub for plugging in third-party engines and the other one, NextMT, is the actual proprietary Phrase solution. So, A, did I get this right? And B, if yes, what’s the thinking behind launching and developing your own MT solution?

Georg: You did more or less get it right. Let me give you the… I’ll try and build out to the bigger picture, and it then links into roadmap and vision and announcements we’re making in September and some fun stuff. Phrase has been doing machine learning things for about five years now, and Phrase Translate is a combination of services, actually, which use AI and machine learning in a variety of ways to bring the best of machine translation engines to customers. And actually, I think we’re not as well known for this as we should be, because we’re actually very good at it and doing some amazing work with some really big customers. So Phrase Translate is a machine translation aggregator at the moment that has an AI auto-select capability. So based on the content that a customer puts through it, we’ll analyze the content type and language pair, and we will then use the auto-selection capability to apply the best engine of a range of engines to that piece of content. We work with, I think, about 30 third-party engines, we work with all of the hyperscaler engines, and we can also plug in customers’ own engines, and in some cases LSPs have their own engines and we can plug those in too. Customers can select the range of engines that they want the system to work with, so they could say, we want you to work with these six engines and then pick the best of those, or these 12 engines, or these 30 engines, or these two engines, right? So the customers can make that selection according to their own criteria, and then Phrase NextMT is our engine and we make that available as one of the options. We never prefer our own engine, we always provide the best solution to the customer. The reason we built our own engine was that we can leverage translation memory and term bases and therefore actually improve the quality of our engine in combination with translation memory.
So the quality, when you combine those two things, is often at or above the level of a generic engine, even from one of the hyperscalers, so there’s a quality opportunity there. It also then goes to roadmap, and we will make a number of announcements in September. We do a regular quarterly cadence of announcements, and the next one is in September, and the theme this time around is defining the next generation of localization software, and AI is absolutely the heart of that, so there are a number of cool things that we’ll do there. I can’t give you the detail, but I will paint a picture for you if you’d like. So before I launch into that, hopefully that made sense in terms of Phrase Translate being the aggregator and NextMT being our engine as one of the options available.

Florian: It’s much more open than either being only the aggregator or kind of forcing everybody onto your own solution, right? So it’s probably also a lot easier when you speak to lead clients.

Georg: That’s it, and it highlights, I think, two important strategic things that people should know about Phrase. One is we’re software, we’re not services, and we have no intention to be services. We’re also not a marketplace, so we’re a SaaS company and we’re very clear about that. And I talk about that with customers because there are some customers who want a kind of all-in-one turnkey solution, and there are companies out there that will provide that, and that is how their CEO would describe them: we are the turnkey solution. We’re specifically not that, we’re a software company. And then it goes to the important point too, which is around neutrality. So we’re extremely neutral about providing the best solution to the customer and working with professional services organizations, and the customer can select. So we are the right option for those customers who want to control their translation memory, own their own language IP, own the workflow, define that, do custom workflow with Orchestrator, and we can talk about that. But then they want to work with a bench of multiple professional services organizations and freelancers so that they can manage a translation and localization pipeline, manage it for quality and cost. And then they can work with professional services organizations that are strong at this type of thing and that type of thing and that type of thing, and they can manage those relationships accordingly and change them where they need to, and we are completely neutral to all of that. So you see that in our work around AI and some of the roadmap that we’ll talk about there, and then you see that in the role that we play.

SlatorCon London 2024 | £ 980


A rich one-day conference which brings together the views of 140+ industry leaders and thriving language technologies.


Florian: It also implies that an ideal customer would have to have a certain localization maturity, right, and some in-house capacity and headcount to manage this?

Georg: I think that’s, broadly speaking, true. We do work with customers at a range of different maturity levels. I think we do our best work with customers who have a relative degree of complexity, which isn’t necessarily the same as maturity, because you see some very large global companies with very limited localization maturity, but actually very complex requirements. And so when they start to get into the game, they start to realize, oh, this is harder than we thought, and we might be absolutely the right solution for them, but they need to… And we can help them get up the maturity curve to the appropriate level for them to drive value. I think what we sometimes see is that those large enterprises start, then think this can’t be that hard, and they choose a simpler solution with a nice kind of shiny user interface. They spend a year with that solution, and then they realize our complexity is far ahead of what this can offer us and then they come to us kind of the second step. And that’s actually a more painful experience for them, but they don’t often realize that until they go through that pain. So I think we do a lot of our best work with customers who have complex requirements, and that can be small companies or big companies with some complex requirements and then, yeah, there’s a degree of maturity that comes with that.

Florian: You have, as one of the few, and it’s relatively rare, SaaS pricing on your page. What we just discussed seems more complex; you probably very quickly get into the enterprise tier of the SaaS pricing. What’s the reason for having these lower tiers, the 99, 299 type of tiers, on the website? Is it more of a funnel to the enterprise, or?

Georg: We have about four and a half thousand customers, and I’d say 4,000 of those are using those lower tiers, so we have a lot of those types of customers. And so that is a good example of where there are lots of small organizations that have some reasonable degree of complexity in their requirements, and then we’re a great solution for that. We have about, let’s say, 500 customers who are operating at the higher tiers and get into enterprise pricing. Now, we have been spending months doing a complete revamp of our pricing, and we will start to launch that at the end of September. So I will give you the big picture, if you’d like, and I’m pretty excited by it. It then ties also into product vision and roadmap. So, look, our pricing follows a pretty classic and standard approach at the moment, where we have multiple products and you can buy any individual one of those products or you can buy a combination of those products. What we started to hear over the last nine months, and really in the lead-in to the summer, and which I then started to validate with customers at LocWorld in Malmö, was that customers said, look, we’ve grown up with one of your products, TMS or Strings, for example, and historically we would have used you for one workload and someone else for another workload, and we’re used to the fact that we then pay a license fee to you for this and a license fee to them for that. But we see that you have these capabilities in your suite and we want to be able to solve our problems with all of the capabilities in your suite. But there’s this problem, which is we bought a license for product A, and we want to use a capability out of product B, and you’re asking us to buy a whole license in order to do that, and that’s a problem because we don’t necessarily need all of it, and we’ve got to go back through procurement and blah, blah, blah.
So there’s this barrier that is human-constructed, because we ultimately control pricing and licensing. It’s not easy, it’s complex, but we control it. So we started to ask ourselves, what could we do to remove some of the barriers between our products so that customers could more easily move across the whole suite? And when I say the whole suite, I mean the whole suite: TMS, Strings, Orchestrator, Phrase Translate, new capabilities that we’re going to release in September in the realm of machine learning, the whole suite. So that’s what we’ve been working really, really hard on. We’ll do it gradually, so from the end of September new customers will have a new way of pricing, and then next year we’ll start to offer existing customers the opportunity to roll into that, and I think we’ll try and design that in such a way that it’s attractive for them. So there’s plenty of work still to do and it’s going to happen in phases. But the idea is that if you were a new customer, you would buy Phrase; you would no longer buy TMS or Strings or another license for Orchestrator or this, that and the other. You would just buy Phrase and you would get access to everything. And the idea is we want customers to help themselves, and we want to help customers use all of our massive range of capabilities to solve their problems on a jobs-to-be-done basis to drive value. And this is the thing that I often say to customers: if you look at our range of TMS, Strings, Orchestrator, Phrase Translate, machine translation aggregation and so on, I have for months now been saying to customers, we are a superset of at least six or seven other companies that you could buy. And I won’t name them because I don’t want to embarrass anybody, but if you’re a customer, I will name them.
But you could look around at the stands at LocWorld and so on, and you could see this company, that company, that company, that company, that company, well-known companies, and we are the superset of 80% to 90% of the capabilities of these other companies: TMS providers, MT aggregators, workflow companies, and machine translation quality estimation, right. We’re the superset of all of those capabilities today, not in tomorrow’s roadmap, today. And so what we’ll offer customers is a very well priced way to get access to all of that capability set at a fraction of what it would cost them to buy the services from six other companies. And then they can expand in whatever direction they need to from there once they go beyond the initial kind of volume, if you like, that we’ll provide. So if their immediate need is for more volume and more of the capability set of option A, they’ll be able to do that, and vice versa into options B and C and so on. I think that’s pretty exciting because I think what we’ll find is that customers suddenly start asking themselves pretty deep and searching questions about why not explore the full range of capabilities of something like Phrase. It doesn’t mean they can’t use other technologies to do some of those workloads. We’ll still play well in a heterogeneous environment, we absolutely want to do that, and we believe in sort of composability and that kind of thing. And that’s inherent, so that goes into the product vision piece here, because if you think about what I’m saying: take the bucket of capabilities that we currently call TMS and the bucket of capabilities we call Strings, and the barrier that is pricing and licensing between those things, and we take that barrier away. Then we build Orchestrator across the top, which is the custom workflow engine that allows you, already today, to dip into all the APIs and webhooks that we enable for all the capabilities of those buckets of things.
So now you’ve got the technology and we’ve enabled the licensing, so the circle now isn’t around those two or three things individually, the circle is around all of them collectively. You buy all of that, and you can now build custom workflows with all the capabilities across both of those things, and Translate, and other things. And that is really a unique offering, both in terms of the workflow and the breadth.

Florian: It’s interesting how it’s becoming clear even from an outsider’s point of view like mine, right? I mean, I’m looking at it, and even though I don’t look at it for days or weeks, even just looking cursorily at the website, okay, now they’re starting to integrate a lot of capabilities that used to be separate or offered by somebody else, and it’s becoming this much more comprehensive offering that’s also very much up to speed with the latest trends, right? For example, machine translation quality estimation is a big thing, Orchestrator in a no-code, drag-and-drop kind of building environment around that core, and then you’ve got, with NextMT, your proprietary MT, which you kind of need, in my view, as a TMS to be competitive or remain competitive, right, because it’s a foundational piece of the value in my view. But you still give people the option of plugging in their own MT so they don’t wander off or think something else is greater. Yeah, interesting product vision, and I’m sensing that a lot is coming down the pike in September, so I appreciate you giving us a heads-up there.

Georg: I think the right way to approach it is actually from the five-year vision and then back to the next 12, 18 months and I’ll sort of do a short version of it. I think you’ve seen the video I put out before, but others may not have done so the short summary would be as I’ve been thinking about generative AI and its implications, I ask myself why we visit websites and have a static experience. So with the exception of some degree of localization, if anyone in this audience goes to nike.com or pick a website, then we’ll all have a broadly similar experience or certainly two people who are having the same localized experience will have a broadly similar experience. Again, maybe small variation depending on the device type that you’re using, that kind of thing, but it’s sort of minimal, it’s not dynamic. Now the promise I think of generative AI is that a website like Nike and by extension the whole internet could be streamed in real time. So there could be no canonical version of a website, so like hyper-personalization. So I could go to nike.com, I love rugby and I love sailing and I could have water sports and rugby-based content. I may have recently visited a stadium and so it would know that and stream me like relevant information. It may be that when you go to Nike you’re interested in the technology and the tech specs of the latest AirFly or Vaporfly, sorry, I think they’re called, and you may be particularly focused on an athlete’s story and how he or she has captured your imagination. The possibilities are endless and therefore the content streamed in real time is endless. And by the way, shouldn’t be the same when you make your next visit, which could be 1 hour later or the next day or the next week, but the next visit should be different again. And so I was imagining a world in which the entire internet is real time, the real-time internet streamed content. 
And the implications of that for volume are literally astronomical because it’s not ten, hundred thousand X increase in volume of sort of content produced, it’s a billion X or a billion billion x. When you have billions of internet users and billions of websites the numbers get kind of astronomical. So then I was trying to bring back… Well then in that future a couple of important things happen. The first one is that the classic pyramid of content value and volume gets inverted because classically today our storefronts, our websites are our high value content, very heavily curated, a relatively low volume and high degree of human involvement in localization, that kind of thing. But user-generated content, the review websites and that sort of thing are high volume, relatively low value, maybe machine translated at the bottom. Now, I think this pyramid then has a massive inversion. You can’t quite see my hands on the thing, kind of like an inversion because suddenly the high value content is machine-generated and there’s no marketing department in the world that can keep up, so the humans simply don’t generate as much. Even an audience of billions cannot create as much content as the machines do at the top of the pyramid. So your high value content is the bulk of the volume, so that is a massive shift, so then playing it back to what we’re building today. And the amazing thing is that, sort of by luck or judgment, the team here before I joined and since I’ve joined, have been building the right things for this exact moment in time. So for 10 plus years we’ve been building API first, five plus years we’ve been doing machine learning and two years plus, we’ve been building the Orchestrator and advanced workflow. Because in this world of massive amounts of volume, you need to build for scale, you need really complex workflows and you need big investments in artificial intelligence. 
So back to what we’re building today: all those buckets of capabilities that we already have that I’ve talked about, you start to see a little bit of a cloud around those capabilities, so one can dip in and out of them increasingly easily and fungibly. Really complex custom workflows that can be designed across the top to enable all of that, and then what we need, critically, is dynamically evaluated machine learning models, such that as content comes in from all the third-party integrations, it is evaluated based on the customer’s constraints, which might be time, budget, volume and other things. So based on those constraints, we’ll dynamically evaluate the universe of machine learning opportunities. Multiple LLMs, multiple MT engines, including our own, but one of probably hundreds that are available out there, the customer’s own, including custom trained models that the customer has, probably multiple of those, and we will pick the right tool at the right time for the right customer, so we have fit-for-purpose machine learning applied. We’ll then use AI to evaluate the output of that, use AI to fix it, and then we’ll use AI and custom workflows to push out to the human beings that which is still required to be reviewed. And in a world of astronomical volumes, even a tiny percentage sampled by humans is high volume. So we absolutely still see an important role for the human beings in this, because a small percentage of a very large number is still a very large number, but it’s now incredibly complex. So the big buckets of investment for us are AI, workflow and scale. We are strategically not making a bet on any individual LLM or MT provider. Our bet is to be the best in the world at quality, visibility, dynamic assessment at every stage, and then using all of the tools that are out there.
Helping customers build their own where they want to with software, not with services and using complex custom configurable workflows to bring all of that together and then push it back out to all the third-party distribution network that customers need. So that’s ultimately how we see our role in customers helping get to the real-time internet. And the thing about the vision which works, I think, is that I can afford to be very wrong. It may not be a billion billion X content, it may merely be a million X content. I could be orders upon orders of magnitude wrong and still directionally correct in building this thing. And what we’re trying to build is ultimately one that doesn’t require us to build the LLM that wins, right? Let others fight that fight. And customers are asking, how do we make sense of this? How do we adopt it? And we can be the answer to that. We can say, we can help you make sense and adopt it with quality according to your business constraints. That’s our vision. That’s the roadmap.
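Georg’s idea of picking “the right tool at the right time for the right customer” under constraints like time and budget can be sketched in code. The following is a purely illustrative toy, not Phrase’s actual implementation; the engine names, costs, latencies, and quality numbers are all invented:

```python
# Hypothetical sketch of constraint-based engine selection. All values
# below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Engine:
    name: str
    cost_per_word: float      # budget constraint input
    latency_ms: int           # turnaround constraint input
    predicted_quality: float  # e.g. a quality-estimation score in [0, 1]

def pick_engine(engines, max_cost, max_latency):
    """Return the highest-predicted-quality engine that fits the constraints."""
    eligible = [e for e in engines
                if e.cost_per_word <= max_cost and e.latency_ms <= max_latency]
    if not eligible:
        raise ValueError("No engine satisfies the constraints")
    return max(eligible, key=lambda e: e.predicted_quality)

catalog = [
    Engine("generic-nmt", 0.0001, 50, 0.78),
    Engine("custom-trained", 0.0004, 120, 0.91),
    Engine("llm-translate", 0.0010, 900, 0.88),
]

best = pick_engine(catalog, max_cost=0.0005, max_latency=500)
print(best.name)  # custom-trained
```

In a real pipeline the predicted quality would come from a model evaluated per piece of content rather than a static number, which is what makes the selection dynamic.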

LocJobs.com | Recruit Talent. Find Jobs

LocJobs is the new language industry talent hub, where candidates connect to new opportunities and employers find the most qualified professionals in the translation and localization industry.

Florian: Can we just dwell on the hyper-personalization for a second? Because I’m struggling to understand it from an abstract point of view. However, I did see, about three months ago, this example that kind of went viral on X, I guess we have to say now, where somebody launched a marketing campaign in the US for, I think it was, a used car dealer. Maybe just for the listeners: it was like, you bought this car in 2013, and remember the feeling when you got your, I don’t know, Volkswagen Beetle or your Chevy whatever. It was just very hyper-personalized to that individual, right, so it’s a million different versions for each customer.

Georg: I actually saw that like a couple of weeks after I put my original video out, and I was like, oh, my god, I’m not crazy. This feels like the prototype version and so you’re right. So what they did is they got an actor to voice like loads of permutations in a big spreadsheet. And they also said, so when the customer bought a car, as they were waiting for that car to be delivered to them, it would know where the dealership is, it would know what type of customer, the customer’s address, the route between the two things, the weather at the time, any events along the way and then it would send all of that key data to like a cartoon, animation, generative AI thing. So it created this visualization with a voiceover that was automated from many permutations based on what it knew about the customer and some inputs and I thought there is a really early example of exactly this idea in practice.

Florian: Plus now, well, it works in the US and in American English, and that’s the most data-rich language in the world. So what if you want to do this in Europe across 30, 40 languages? It becomes a little more complex, and I guess that’s where companies like Phrase will come in.

Georg: Absolutely. It becomes much more complex, and when it’s being generated in real time as well, then we’ve got to start thinking about model drift. We’ve got to understand that what worked yesterday might not work today or tomorrow, so how do you continuously maintain and manage the quality, and then what do you do when you find errors? You’ve got to start to write algorithmic approaches to error checking. If you’ve pushed out… Because the problem, of course, when you’re generating the high value stuff at high volume is that mistakes get amplified very quickly. So when you find them, how quickly can you route them for review, and then how much checking is enough before you say, I don’t need to check everything, right? So there’s a lot of complexity, and it’s exactly into that complexity, that workflow, and then using AI to solve those problems that we are building. And everything I’ve described in terms of that roadmap is like 12 to 18 months for us. It’s not the five-year vision. The five-year vision is like a billion X content and everyone’s building the real-time internet. Helping companies take the first steps towards that real-time internet is our work today. The next 12 to 18 months is really just maturation, because everything I’ve described is today’s products moved forward. We don’t have to invent much that is new. And on the quality estimation, quality visibility side, we’ve just hired, of course, Alon Lavie and his team. He invented the COMET score. He’s essentially famous as the godfather of quality estimation in our industry. And what he doesn’t know about isn’t worth knowing, and he’s super excited. The vision is why he joined.
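The routing logic Georg sketches, and the arithmetic behind “a small percentage of a very large number is still a very large number,” can be illustrated with a toy quality gate. The threshold and sample rate below are invented for illustration; a real pipeline would use a learned quality-estimation score per segment:

```python
# Illustrative toy only: a quality gate with random spot-check sampling.
import random

QE_THRESHOLD = 0.85   # below this, route straight to human review
SAMPLE_RATE = 0.01    # audit 1% of "passing" segments anyway

def route_segment(quality_score, rng=random):
    if quality_score < QE_THRESHOLD:
        return "human_review"      # flagged by the quality model
    if rng.random() < SAMPLE_RATE:
        return "human_spot_check"  # random audit of good-looking output
    return "publish"

# A small percentage of a very large number is still a very large number:
daily_segments = 100_000_000
print(f"{int(daily_segments * SAMPLE_RATE):,} spot checks per day")
```

Even this crude gate shows why expert humans stay in the loop: at the hypothetical volumes above, a 1% audit alone is a million reviews per day.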

Florian: How does the Orchestrator come in? Because I can’t code, so I’m using a lot of no-code. I probably subscribe to like 30 SaaS tools for Slator and a few of them myself. I like the kind of drag and drop and just kind of building things. I’m using Zapier for certain things with Slator. So how does the Orchestrator come in? Is this kind of an early step for you to enable clients to put some of this together themselves? Just tell us a bit more about that.

Georg: The Orchestrator is a no-code visual editor that allows you to drag and drop multistage and branched workflows that can be very complex, using any API and any webhook into, at this point, I think all of the capabilities of Phrase Strings and Phrase TMS. At the current stage of its evolution, it still requires a reasonably strong technical working knowledge of the products, because you need to be able to input certain fields. You can drag and drop the workflow, but for it to be able to identify from the previous step which record it’s trying to pull into the next step, you need to know the terminology of the products. So it is still for a reasonably technical user that’s proficient in Phrase, but it is not coding, it’s not development. And in its next iterations, we’re going to drive the simplicity so that it gets closer and closer to a business user being able to define those steps and those workflows without being too technical, so it’s on a path towards that. The value proposition is really the following: historically, companies like us have the capabilities that are available out of the product with some basic configurability, and then we have an API. And if you, as a customer, want something it doesn’t do out of the box and you can’t configure it, we say, and companies like us say, here’s the API. You must have some developers sitting around that can work on this, and then the customer says, yeah, sure, I have like an army of developers sitting around doing nothing, right? No.
The value proposition is, instead of waiting six months to write a business case, get it approved, wait for four developers, a PM and a QA person to become available, who can then write something to the API in two or three months and then test it, and then when you get it, it’s almost not quite what you originally wanted nine months previously, but that’s now the one thing that you got in the one version and then they get dragged onto another project, right? Instead of that cost and time delay, a customer, with or without a little bit of our help, we’re very happy to provide it, can write the same workflow and the same complexity in an afternoon. And if it isn’t quite what they wanted, they can change it the next afternoon, and then they can write ten more like it in the next month. And so it allows for customers to experiment, to run complex custom configurable workflows, to then adapt those to the changing needs of the business with vastly more flexibility and vastly less cost and less reliance on their own development teams that get dragged away from them than the alternative, so it is very powerful. In September, we’ll continue to evolve that product as well, to make it easy to use and to provide customers with examples and starter packs and things like that. So we launched it in February, it was capable but had some sort of volume limitations. In May, we removed almost all of those. In September, really, that is all gone. It’s like fully capable. It starts to get easy to use, we start to make it easy for customers to get started and we’re running workshops with some of our biggest customers now to really deep dive into how they can make the most of it. It is once a customer… It’s been an interesting journey for us with it because it’s something that customers conceptually nod along to, but what we’ve discovered is they need to try it and then when they try it, their eyes like light up and so we’ve got to get customers to try it. 
So I’d encourage every listener to have a go, talk to us, get your hands dirty with it and then you’ll find it’s phenomenal.
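Orchestrator itself is visual and no-code, but the kind of branched, multistage workflow Georg describes can be sketched declaratively. The tiny runner and step names below are invented for illustration and bear no relation to how Orchestrator works internally:

```python
# A toy, declarative sketch of a branched localization workflow, loosely
# analogous to what a visual workflow tool assembles without code.

def machine_translate(job):
    job["translated"] = True   # pretend we called an MT engine
    return job

def human_review(job):
    job["reviewed"] = True     # pretend an expert fixed the output
    return job

def publish(job):
    job["published"] = True    # pretend we pushed to the CMS
    return job

# step name -> (action, next step); "next" may be a function that branches
STEPS = {
    "mt": (machine_translate,
           lambda job: "review" if job["qe_score"] < 0.8 else "publish"),
    "review": (human_review, "publish"),
    "publish": (publish, None),
}

def run_workflow(job, steps, start):
    step = start
    while step is not None:
        action, nxt = steps[step]
        action(job)
        step = nxt(job) if callable(nxt) else nxt
    return job

low_quality = run_workflow({"qe_score": 0.6}, STEPS, "mt")
# low_quality now has translated, reviewed, and published all set
```

The point of the sketch is the editability: changing the branch condition or adding a step is a one-line edit, which mirrors the “change it the next afternoon” flexibility described above.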

Florian: It needs to be super easy, because I did watch the launch video, I think it’s available on YouTube, and there’s a couple of things where I’m like, oh, that would take me a little bit of processing power mentally. And if I don’t have somebody to back me up, I might just kind of churn because I’m like, ah. So the easier it is, the more drag-and-droppy, the more zero-code, the better, I guess, from a kind of adoption point of view.

Georg: This is exactly true, so that’s why it still requires today, like I said, a sort of reasonably technical understanding of the products and we want to simplify that. But if any of our customers want a bit of help working with solution architects, then of course we can help build workflows and then once they get into the rhythm, it gets a lot easier. So we’ll make it easier and we can help customers and then we’re going to start to provide some starter packs as well.

Florian: I guess it’s the same with Zapier. I can do 70% of what I want, but then sometimes I need to call somebody who knows it better than me.

Georg: That’s it. Each of us has our threshold, so I’ve played with Zapier and got stuck very quickly because I’m not technical enough. You can probably get five steps further and then the next person can get another five steps further and that’s exactly it.

Florian: When I talk to outside investors looking at the language industry and they’re asking about the impact of these LLMs and gen AI, et cetera, I typically… One of my key arguments would be that TMSs, in my view at this point, are kind of ideally positioned to leverage these LLMs for end-user deployment. Now let’s define the end user as kind of the linguist, right? We talked a lot about enterprise clients, but you also mentioned that even a fraction of content still needs to be looked at by somebody. We at Slator call it experts-in-the-loop. So what kind of features or capabilities do you see could be possible now with LLMs from your current expert-in-the-loop, translator, multilingual editor point of view, right? Maybe, I don’t know, shorten, lengthen, adjust the vocab, is there anything you can share on that side?

Georg: Yeah, lengthen always makes me laugh. Someone told me that they had a capability to take a 2000 word document and turn it into a 10,000 word document, and I said, what’s the use case for that? It’s students, because they want to write the 500 word summary and then have it churn out the 10,000 word report and turn it in, so apparently that’s the use case for lengthen. But anyway, I think there’s a lot that could be done. I think our focus there is always going to be on efficiency. So how do we help the linguists make the most impact in the shortest space of time? And when I say most impact, we’ve got to remember that with what we’re aiming to build, and the amount of AI quality estimation and quality visibility and being really dynamic about it that we’re trying to build in, what we want to do is reduce as much as possible the percentage of the volume that humans need to review, that the experts need to review. But our bet is that the volume is going to go bananas, and I’m very convinced that this is true. And interestingly, I talked to some of our content partners, the very, very large organizations that do content, and they have the same perspective, actually. We have a similar vision of the real-time internet. So I do think the volumes are going to go through the roof, so small percentage, large number. But then remembering again that where a lot of this is generated, if the machines get it wrong, they will propagate that mistake very quickly at scale, and then there’s a significant brand risk associated with that. So the ability to catch it and act on it quickly is key. So I think the investment we’ll make for linguists is we’ll identify the things that really matter. So there’s a lot of quality work that goes into that, and then we’ll try and help you act on that as fast as possible, so efficiency will be key too. So those would probably be the big themes.
I’ve seen some companies build some reasonably visually attractive and fancy, but I would say gimmicky, things in that space, and yes, of course, you can provide suggestions and shorten and all that stuff, like that’s fine, but that’s not actually the value. Everyone will do that. That’ll be table stakes. The value will be how that gets tied back into a workflow and a propagation system that addresses these issues of high value, fast-propagating content, to reduce risk and ensure good quality where it’s needed. And actually making those judgments is where the real technology value is.

Florian: That’s so tough, to make these calls. I can only imagine; for me as an analyst looking from the outside, I can just look at it and think about it, but you have to make these calls. What is gimmicky, what is just kind of a feature, and what is the core capability? That must not be easy. From the linguists to the LSPs: I mean, you obviously used to have some clients on the LSP side as well. But what’s your strategy working with LSPs, having LSPs as clients?

Georg: Yeah, so we still do. A very substantial percentage of our revenue is with LSPs, and we work with six of the top 10 global LSPs, then many hundreds of LSPs and the sort of next thousand LSPs after that, and plenty of freelancers too. So LSPs are important to us and, I think, have an important role in the industry. I’ve talked a lot about the importance of humans-in-the-loop and the complexity of that. And clearly a large part of the work that LSPs do is manage and provide expert humans to solve specific problems. I think also that in the world of model training, there is a role for LSPs. I’m not sure all LSPs are embracing that, some more than others. And I think that my advice, if they’re interested in what I have to say, which they may not be, to LSP leadership would be: you need to invest in understanding AI and MT and embrace it in a meaningful way, see it as an opportunity, not a threat. And the opportunity there is, I think, to bring value-added skills around model training to enterprise customers, because not all enterprise customers are going to have that skill set in-house, just like they don’t have many skill sets in-house. And we talked earlier about how some customers with complex requirements may have low maturity in terms of in-house skill sets around localization, and they choose to outsource that, and that has historically been the work of LSPs. They’ve had this value-added skill set around localization, and they provided that as an outsourced function. So I think when you get into these very sophisticated technical areas, where actually there are whole new jobs being invented to do with model training and prompt engineering, not all companies are going to feel like that is their core skill set, that they need to employ expensive people to do that for them in house, and that can be a service that an LSP provides. So I think LSPs have a really important role to play inside of all of that complexity.
Our software is and will be used both sides of the enterprise LSP relationship, so that there’s really good transparency and visibility for both partners in the enterprise and LSP relationship on quality and cost and so on. So I think that’s kind of the message I’d have for LSPs is to embrace it, work with us, partner with us. We’re working very closely with some LSPs. We’re trying to work closely with some others who candidly are still having a sort of, I think, identity crisis about some of this and then that’s an area where we’d love to work with them more closely and we’re here for them when they’re ready.

Florian: I think a big win was the Lionbridge deal. I think we can speak about it, right? It was a very public, multi-year kind of agreement with Lionbridge. It also shows that Lionbridge feels this is something they need to get from somebody else, and that their USP is somewhere else, rather than building this core tech functionality.

Georg: I think Lionbridge would put it a bit differently and so I don’t want to speak on their behalf because I know they’re very proud of their own technology. So as a technology partner to Lionbridge, I should emphasize they are very proud of their own technology and rightly so, but we’ve become a core component in their technology stack and a very close partner with them. And the partnership has multiple levels, technical and commercial and we do a lot of training for their teams so that they’re adept and expert in their use of Phrase with their customers as well. So ultimately we want Lionbridge and we want other LSP partners to be highly certified so they’re very expert at using Phrase in their accounts and we’re trying to provide them with a tremendous amount of support there. But absolutely, Lionbridge was a phenomenal win for us. They’re a strong partner, we’re grateful to work with them and they’re global and they’re complex and they’re challenging and those are exactly the kinds of customers we like to work with. They push our boundaries. In fact, one of my engineering directors said to me that the best projects are the ones that come in from the outside because the ones we surface ourselves sometimes we stay in our comfort zone. The ones from the outside take us outside of our comfort zone and I said I couldn’t agree more. When the customer says I want to scale to volume ten X what you’ve done before, I rub my hands together.

Florian: Now a lot of those initiatives will take capital, right? Whether that capital comes from revenue or from funding or investment, depending on where you are in your stage as a company. So can you just tell us about the current funding landscape? Originally, I think, and I might not even remember this right, Memsource might have been bootstrapped very originally, but then came private equity with Carlyle, and since then you did a venture debt round with a Canadian bank, you tell me what it was, right, so I guess a couple of questions. Maybe just lay out the specifics of Phrase’s recent funding activity and then, generally, how do you see the current funding environment for something like a business SaaS app versus the whole AI buzz that’s currently kind of… There’s obviously two sides to the story: one is a little slower and the other one is a major hype.

Georg: It’s certainly a hype. AI is definitely a hype, and actually, if you listen to some of the tech podcasts, and you and I shared some anecdotes around that earlier on, they talk about how AI funding at seed stage still makes sense, because at the very early, even pre-seed stage, you can still write a check at a valuation that makes sense. But anytime you’re into big seed or A rounds on AI, the valuation is through the roof, and it’s one of the only areas at the moment where that is still true. So AI is definitely in a hype cycle. So, firstly, to describe our funding: yes, you’re absolutely right. The companies were bootstrapped. Carlyle invested in Memsource and helped Memsource to acquire Phrase, and then actually injected some further equity in June 2022, and then we did a debt round. I only raised my eyebrows at venture debt because venture debt has some specific connotations in terms of the terms, and we didn’t have those terms. So without going into the details, we had terms that we were very comfortable with. We have a great partner with CIBC, which is a Canadian bank, so they’ve been a good partner, we have great terms with them, but it’s straightforward debt rather than venture debt, and that’s given us some additional capital. We kind of pre-allocated a lot of that money into AI, actually, so a lot of it’s gone into the investments there. So our three big buckets again, AI, workflow and scale, we’re investing really heavily in those three areas. I think the funding environment in language tech is interesting.
I was talking to an analyst, someone you’ll know, and actually she said to me, and I won’t name her because the opinion was privately shared, but the sense with private equity is that the good deals have been done. So I think there’s been private equity investment in a number of companies in language tech, and I think the good companies have been funded, or the good companies of the type of scale that private equity would be interested in. Let me characterize it like that. I think there’s still some venture money there, but the valuations now would not be what they were December last year. I think there were some very heady valuations at some extreme revenue multiples, and they would not be today what they were then. And I think people are having to grow into those valuations, and that can be very challenging. So I think it’s sober. I think the important thing for all of us is to make sure that we’re investing heavily in the right areas with the right vision to be able to get the revenue growth, because revenue is the best source of funding. Fortunately, we’re continuing to do okay.