On October 11, 2022, the 113-slide, open-access State of AI Report 2022 was released to an enthusiastic AI Twitter crowd. For the fifth consecutive year, the report aimed to trigger an informed conversation about the state of artificial intelligence (AI) in research and industry, as well as its implications for the future.
Detailing the exponential progress in the field of AI and focusing on developments since last year’s edition, the report was authored by Nathan Benaich, General Partner, Air Street Capital; Ian Hogarth, Plural Platform cofounder; Othmane Sebbouh, machine learning PhD student, ENS Paris, CREST-ENSAE, CNRS; and Nitarshan Rajkumar, PhD student in AI at the University of Cambridge.
“We believe that AI will be a force multiplier on technological progress in our world, and that wider understanding of the field is critical if we are to navigate such a huge transition,” the authors wrote.
🪩The @stateofaireport 2022 is live!🪩
In its 5th year, the #stateofai report condenses what you *need* to know in AI research, industry, safety, and politics. This open-access report is our contribution to the AI ecosystem.
Here's my director's cut 🧵: https://t.co/QtkXZcQpJj
— Nathan Benaich (@nathanbenaich) October 11, 2022
After defining the most important terms so readers could follow the discussion, the authors dove into the technology breakthroughs and areas of commercial application for AI.
Zooming In on Language
Large language models (LLMs) are being applied to domains beyond pure natural language processing (NLP), with capabilities that surpass expectations in some cases (e.g., in mathematics). Moreover, the authors predict a range of tasks that could soon be successfully tackled but that remain out of reach of current LLMs.
I'm most excited about applying large models to domains beyond pure NLP tasks as we think of them.
For eg., LLMs can learn the language of proteins and thus be used for their generation and structure prediction (@airstreet stealthco!).
Here too, model and data scale matters: pic.twitter.com/wb68oI3QGu
— Nathan Benaich (@nathanbenaich) October 11, 2022
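To make the protein example concrete, the snippet below treats a protein sequence the way a text model treats a sentence: a small protein language model fills in a masked amino acid just as an LLM fills in a masked word. This is a minimal sketch assuming the Hugging Face transformers library and the public ESM-2 checkpoint facebook/esm2_t6_8M_UR50D; neither is prescribed by the report.

```python
# Minimal sketch: masked amino-acid prediction with a small protein language model.
# Assumes the Hugging Face `transformers` library and the public ESM-2 checkpoint
# facebook/esm2_t6_8M_UR50D; the State of AI Report does not prescribe this stack.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "facebook/esm2_t6_8M_UR50D"  # smallest public ESM-2 model (~8M parameters)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# A short protein fragment with one residue masked, handled exactly like a
# masked word in a sentence.
sequence = "MKTAYIAKQR" + tokenizer.mask_token + "ISFVKSHFSRQLEERLGLIEVQ"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Rank the model's guesses for the masked position.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
top_ids = logits[0, mask_index].topk(5).indices.tolist()
print("Top predicted residues:", tokenizer.convert_ids_to_tokens(top_ids))
```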
However, according to a DeepMind study, current LLMs are significantly undertrained (i.e., they are not trained on enough data given their large size). Training LLMs requires big tech partnerships, such as Microsoft’s $1bn investment in OpenAI, and the authors “expect more to come.”
6/ Relatedly, DeepMind revisited language model scaling laws and found that current language models are significantly undertrained: they’re not trained on enough data given their large size. This would suggest access to data becomes a bottleneck for progress at the frontier pic.twitter.com/HTReDA5iBl
— Ian Hogarth (@soundboy) October 11, 2022
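To put numbers on “undertrained”: the DeepMind study (the Chinchilla paper, “Training Compute-Optimal Large Language Models”) implies that a compute-optimal model should see on the order of 20 training tokens per parameter. The back-of-the-envelope sketch below applies that approximate ratio to a GPT-3-scale model; the ratio is an assumption drawn from the Chinchilla results, not a figure quoted in the State of AI Report.

```python
# Back-of-the-envelope sketch of the "undertrained" claim. The ~20 tokens per
# parameter ratio is an approximation implied by DeepMind's Chinchilla paper,
# used here for illustration only.

TOKENS_PER_PARAM = 20  # rough Chinchilla-style compute-optimal ratio (assumption)

def compute_optimal_tokens(n_params: float) -> float:
    """Approximate number of training tokens a compute-optimal run would use."""
    return TOKENS_PER_PARAM * n_params

# GPT-3-scale example: ~175B parameters trained on roughly 300B tokens.
params = 175e9
actual_tokens = 300e9
optimal_tokens = compute_optimal_tokens(params)

print(f"Compute-optimal tokens: ~{optimal_tokens / 1e12:.1f}T")
print(f"Actual tokens:          ~{actual_tokens / 1e9:.0f}B "
      f"({actual_tokens / optimal_tokens:.0%} of the optimal budget)")
```

By that yardstick, a 175B-parameter model would want several trillion training tokens, which is why the authors flag access to data as a looming bottleneck at the frontier.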
Even though a lot has changed over the last five years, the attention layer at the core of the transformer remains “entrenched,” as Hogarth tweeted. Analyzing transformer-related papers in 2022, the authors found that the architecture has only grown more ubiquitous, becoming truly cross-modal and gaining ground in multi-task challenges.
20/ The Transformer has only become more ubiquitous, becoming truly cross modality and gaining ground in world models and multi-task challenges. pic.twitter.com/tPZx2I4w2g
— Ian Hogarth (@soundboy) October 11, 2022
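For readers who want to see the layer in question, here is a textbook NumPy sketch of scaled dot-product attention, the operation from “Attention Is All You Need” that sits at the core of the transformer; it is an illustration, not code from the report.

```python
# Textbook sketch of scaled dot-product attention, the layer the report calls
# "entrenched" at the core of the transformer. Illustration only.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)    # (..., seq_q, seq_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # (..., seq_q, d_v)

# Toy example: 4 tokens with 8-dimensional queries, keys, and values.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```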
The authors also reported a widening compute chasm between industry and academia in large-scale AI, with academia passing the baton to decentralized research collectives funded by non-traditional sources. “The chasm between academia and industry in large scale AI work is potentially beyond repair,” the authors said.
Meanwhile, academia is left behind, with the baton of open source large-scale AI research passing to decentralised research collectives as the latter gain compute infra (e.g. @StabilityAI) and talent.
In 2020, only industry and academia were at the table. This changed in 2021: pic.twitter.com/JgvS05HAyh
— Nathan Benaich (@nathanbenaich) October 11, 2022
Interestingly, China is growing its output of published papers at a faster pace than the US. Chinese papers focus more on speech recognition, text summarization, natural language, and machine translation, among other areas. However, the quality of such research papers has been questioned by Michael Kanaan, author of T-Minus AI, in a recent tweet.
For the first time, the State of AI Report has a dedicated AI Safety section aimed at drawing attention to this challenge. As reported, AI safety research is seeing increased awareness, talent, and funding.
8/ For the first time we have created a dedicated AI Safety section to draw attention to this challenge. The UK has taken a leadership position here with its national strategy for AI, which made multiple references to AI safety and the long-term risks posed by misaligned AGI. pic.twitter.com/oHuo7YpFpG
— Ian Hogarth (@soundboy) October 11, 2022
More specifically, “AI researchers increasingly believe that AI safety is a serious concern. A survey of the ML community found that 69% believe AI safety should be prioritized more than it currently is,” as Hogarth tweeted. Meanwhile, the EU has advanced its plans to regulate AI.