Inside the Large Language Model Revolution with Nikola Nikolov

SlatorPod #160 - NLP expert Nikola Nikolov

In this week’s SlatorPod, we are joined by Nikola Nikolov, an experienced researcher, engineer, YouTuber, and consultant in natural language processing (NLP) and machine learning.

Nikola talks about the evolution of large language models (LLMs): while the core technology remains the same, the number of parameters has grown exponentially, and the ability to fine-tune models on human preference data via reinforcement learning from human feedback (RLHF) has turbocharged the models’ capabilities.

Nikola unpacks the rapid increase in front-end use cases, with companies like Google and Microsoft already integrating LLMs into their products. At the same time, he speculates about what will happen to the hundreds of startups that are using APIs to build similar tools, such as writing assistants or summarization services.

Nikola shares the limitations of an API-only approach: the underlying model is restricted to the data it was trained on from the internet and is not fine-tuned to a specific domain or use case.

Subscribe on YouTube, Apple Podcasts, Spotify, Google Podcasts, and elsewhere

He discusses how LLMs perform when it comes to machine translation (MT). Although GPT is trained on large amounts of multilingual data, it’s not specialized in translation, so machine translation providers will retain their edge over ChatGPT for now.

Nikola predicts two scenarios for the future of LLMs. In the first, large corporations quickly integrate LLMs into their products, competing with startups and putting many of them out of business. In the second, startups create novel use cases and integrate multimodal technology to build something completely new that the big companies do not offer.