What Groq’s Language Processing Unit Could Mean for AI Translation

Groq’s language processing unit (a.k.a. “LPU”) is pretty much the antithesis of a “chip off the old block.”

While the company behind the technology, Groq (not to be confused with X’s AI bot Grok), has been around since 2016, it was only at the beginning of 2024 that the LPU burst onto the scene as the next big thing, specifically for large language models (LLMs).

Speaking on the popular All-In Podcast, Chamath Palihapitiya said of Groq’s newfound popularity, “It could mean nothing, but it has the potential to be something very disruptive.” 

Palihapitiya is the founder and CEO of VC firm Social Capital, which first invested in Groq in 2016. He said Groq’s latest valuation was over USD 1bn, calling it “a really important moment in the company, and very exciting.”

“Essentially, we had no customers two months ago, I’ll just be honest,” Palihapitiya told his podcast co-hosts. Since then, Groq has gained about 3,000 unique customers trying its resources, “from every important Fortune 500 all the way down to developers.”

Silicon Valley-headquartered Groq has differentiated itself from its top competitor (and now USD 2trn behemoth), NVIDIA, through speed.

Palihapitiya described the standard central processing unit, or CPU, as the “workhorse of all computing.” Finding that CPUs struggled with certain workloads, NVIDIA developed the graphics processing unit (GPU), which is better suited to handling many tasks in parallel, such as rendering the images used in gaming.

Groq founder Jonathan Ross wanted to innovate beyond the GPU by making chips both smaller and cheaper, attributes that would make them optimal for powering LLMs.

In particular, Groq’s LPUs excel at inference, the stage at which a trained model generates (hopefully useful) answers for users, and they do so very quickly and at very low cost.

Corporate communications expert Lulu Cheng Meservey observed on X, “Some people will judge Groq on the quality of the output, so the team should keep reminding people that the point isn’t to compare [Mistral] to Llama or whatever, the point is that each model runs faster on Groq than on other chips.”

For AI-powered translation, the speed of LPUs could be a game-changer, enabling the thus-far elusive “instantaneous” translation that markets would like to see in consumer-facing products.

Mihai Vlad, former General Manager at Language Weaver, told Slator that Groq has taken GPUs from “specialized hardware” to “generic.” 

“Groq’s ultra-fast inference is removing the slow response times that were plaguing LLMs’ applications,” he added. “Real-time translation with just LLMs? We’re almost there.”
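To make that concrete, here is a minimal sketch of what a translation request to an LLM served on Groq’s LPUs might look like, using Groq’s Python SDK, which follows the familiar chat-completions pattern. The model identifier, prompt, and timing logic are illustrative assumptions rather than anything Groq or Language Weaver prescribes; current model names are listed in Groq’s documentation.

```python
# Minimal sketch: requesting a translation from an LLM hosted on Groq's LPUs
# and timing the round trip. Assumes the official `groq` Python SDK is
# installed (pip install groq) and GROQ_API_KEY is set in the environment.
# The model id below is an assumption and may change; check Groq's docs.
import os
import time

from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

start = time.perf_counter()
completion = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # assumed model id; swap for a current one
    messages=[
        {
            "role": "user",
            "content": "Translate into French: 'The chips are down.'",
        }
    ],
)
elapsed = time.perf_counter() - start

print(f"Translation: {completion.choices[0].message.content}")
print(f"Round-trip latency: {elapsed:.2f}s")
```

The takeaway, echoing Vlad’s point, is not the request itself but the turnaround: the same API pattern used with other providers returns quickly enough on LPUs to start approaching real-time translation.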