Welcome to State of AI Report 2022

Published by Nathan Benaich and Ian Hogarth on 11 October 2022.


Download 2022 Report

This year, new research collectives have open sourced breakthrough AI models developed by large centralised labs at an unprecedented pace. The large-scale AI compute infrastructure that has enabled this acceleration, however, remains firmly concentrated in the hands of NVIDIA, despite investments by Google, Amazon, Microsoft and a range of startups.

Produced in collaboration with my friend Ian Hogarth, this year’s State of AI Report also points to an increase in awareness among the AI community of the importance of AI safety research, with an estimated 300 safety researchers now working at large AI labs, compared to under 100 identified in last year's report.

Small, previously unknown labs like Stability.ai and Midjourney have developed text-to-image models of similar capability to those released by OpenAI and Google earlier in the year, and made them available to the public via API access and open sourcing. Stability.ai’s model cost less than $600,000 to train, while Midjourney’s is already proving profitable and has become one of the leaders in the text-to-image market alongside OpenAI’s DALL-E 2. This marks a fundamental shift in the previously accepted dynamic of AI research, in which the largest labs, with the most resources, data and talent, were expected to continually produce the breakthrough research.

Meanwhile, AI continues to advance scientific research. This year saw the release of 200 million protein structure predictions using AlphaFold, DeepMind’s advance in nuclear fusion achieved by training a reinforcement learning system to adjust the magnetic coils of a tokamak, and the use of machine learning to engineer an enzyme capable of degrading PET plastics. However, as more AI-enabled science companies appear, we also explore how methodological failures like data leakage, and the ongoing tension between the fast pace of AI/ML development and the slower pace of scientific discovery, might affect this landscape.
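Data leakage is easiest to see in code. The report itself contains no example, so the sketch below is our own illustration, assuming a scikit-learn workflow on synthetic data: when preprocessing is fit on the full dataset before the train/test split, information about the held-out data leaks into training and the reported score is optimistically inflated.

```python
# Minimal sketch of data leakage (our illustration, not from the report).
# Feature selection fit on the full dataset "sees" the test labels, so the
# held-out score it produces is optimistically inflated.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic data: 500 mostly-noise features, only 5 informative ones.
X, y = make_classification(n_samples=200, n_features=500,
                           n_informative=5, random_state=0)

# Leaky workflow: select features using ALL labels, then split.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)
leaky = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# Sound workflow: split first, keep selection inside a pipeline that is
# fit on the training data only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(SelectKBest(f_classif, k=20),
                      LogisticRegression(max_iter=1000))
honest = model.fit(X_tr, y_tr).score(X_te, y_te)

print(f"leaky estimate:  {leaky:.2f}")   # optimistically inflated by leakage
print(f"honest estimate: {honest:.2f}")  # fairer estimate of generalisation
```

The same pattern applies to any preprocessing step (scaling, imputation, feature selection) that is fit before the data is split.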

Key takeaways. We hope the report has something for everyone, from AI research to politics:
  1. New independent research labs are rapidly open sourcing the closed source output of major labs. Despite the dogma that AI research would become increasingly centralised among a few large players, the lowered cost of, and access to, compute has led to state-of-the-art research coming out of much smaller, previously unknown labs. Meanwhile, AI hardware remains strongly consolidated in the hands of NVIDIA.
  2. Safety is gaining awareness among major AI research entities: an estimated 300 safety researchers now work at large AI labs, up from under 100 identified in last year's report. The growing recognition of leading AI safety academics is a promising sign for AI safety becoming a mainstream discipline.
  3. The China-US AI research gap has continued to widen, with Chinese institutions producing 4.5 times as many papers as American institutions since 2010, and significantly more than the US, India, UK, and Germany combined. Moreover, China is significantly leading in areas with implications for security and geopolitics, such as surveillance, autonomy, scene understanding, and object detection.
  4. AI-driven scientific research continues to lead to breakthroughs, but major methodological errors like data leakage need to be interrogated further. Researchers warn that these errors can leak into the disciplines adopting AI, contributing to a growing reproducibility crisis in AI-based science.

The report is a collaborative project and we’re incredibly grateful to Othmane Sebbouh, who made significant contributions for a second year running, and Nitarshan Rajkumar, who supported us this year, particularly on AI Safety. Thank you to our Reviewers and to the AI community who continue to create the breakthroughs that power this report.

We write this report to compile and analyze the most interesting things we’ve seen, with the aim of provoking an informed conversation about the state of AI. So, we would love to hear any thoughts on the report, your take on our predictions, or any contribution suggestions for next year’s edition.

Enjoy reading!

Nathan and Ian




Co-authored in London (UK) by Nathan Benaich and Ian Hogarth.


This work is licensed under a Creative Commons Attribution 4.0 International License.