Less than a year after its August 2021 founding, accent-translation technology startup Sanas raised USD 32m in a series A funding round led by Insight Partners. TechCrunch reported the post-money valuation to be USD 150m.
GV, Assurant Ventures, and angel investor Gokul Rajaram also participated, as did existing investors Human Capital, General Catalyst, Quiet Capital, and DN Capital.
BPO heavyweight Alorica is also entering a strategic partnership with Sanas to bring the technology to Alorica’s 100,000 employees and 250 enterprise clients (including Assurant).
The Palo Alto-headquartered company is the brainchild of three students from the Stanford Artificial Intelligence Lab (SAIL).
Maxim Serebryakov, Shawn Zhang, and Andrés Pérez Soderi, all first-generation immigrants to the US themselves, were inspired by the struggles of a friend who worked at a call center in his home country of Nicaragua during a leave of absence from Stanford. Despite his fluency in English, the friend endured abuse from US customers who did not like his accent.
The platform, which is already being used in China, Japan, and South Korea, allows users to select the accent they want to use in real time. Serebryakov, Sanas’ CEO, has been quoted as comparing the technology to social media avatars for user appearances, but applied to a user’s voice instead. Sanas is reportedly compatible with more than 800 communication-related apps.
To prevent potential abuse, the technology is deployed on-premise rather than in the cloud. Customers control their own data, which passes through and is generated by Sanas. The software can be used only by individual speakers, who in turn can control only their own voices.
Sanas is currently working to patent its technology and process, which involves feeding thousands of hours of differently-accented speech into an algorithm and matching phonemes with other sounds.
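The phoneme-matching idea can be illustrated with a toy sketch. This is not Sanas' actual pipeline (a trained system would operate on acoustic features learned from those thousands of hours of speech, not on symbols), and the phoneme labels and mapping table below are hypothetical, but it shows the core notion of substituting source-accent phonemes with target-accent equivalents:

```python
# Toy illustration of phoneme-level "accent translation" (NOT Sanas' method):
# substitute phonemes characteristic of one accent with their counterparts
# in a target accent, leaving everything else untouched.

# Hypothetical substitution table: source-accent phoneme -> target-accent phoneme
PHONEME_MAP = {
    "t_flap": "t",    # e.g. a flapped /t/ mapped to a crisp /t/
    "th_stop": "th",  # a /d/-like "th" mapped to a dental fricative
}

def translate_accent(phonemes):
    """Replace source-accent phonemes with target-accent equivalents;
    phonemes with no mapping pass through unchanged."""
    return [PHONEME_MAP.get(p, p) for p in phonemes]

print(translate_accent(["w", "a", "t_flap", "er"]))  # ['w', 'a', 't', 'er']
```

In a real system the mapping would be learned from accented speech data and applied to audio in real time, which is where the patented process would come in.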
Not Speech Translation
The Sanas platform works for any language. Although the company bills itself as offering “real-time translation without the latency of text-to-speech,” Sanas’ “accent translation” does not enable a user to speak another language. Rather, it modifies a user’s pronunciation of a given word in the language spoken.
Thus, “accent translation” is vastly different from speech translation, into which the likes of Apple and Facebook have heavily invested. What Sanas does appears to be closer to voice cloning, one of the steps that can be used in AI dubbing.
The Sanas team, 80% of which is made up of immigrants, sees its initial target user base as customer service agents at offshore call centers, whose challenges mirror those of the founders' friend.
Their work also makes for a simplified use case, as agents need to be able to communicate clearly with customers, but not necessarily convey a wide range of emotions during these interactions.
In addition to opening an office in India, an ideal location given the region's many call centers, the team plans to use funds from the series A to recruit more experts in R&D, AI, and tech.
Sharath Keshava Narayana, formerly of Observe.AI, joined Sanas as cofounder and COO in May 2022. Insight Partners Managing Director Ganesh Bell and Aryaka Networks Board Chairman Ram Gupta will join the Sanas board.