Welcome to State of AI Report 2023

Published by Nathan Benaich on 12 October 2023.


👋 Read the 2023 Report

For much of the last year, it’s felt like Large Language Models (LLMs) have been the only game in town. While the State of AI Report predicted back in 2021 that transformers were emerging as a general-purpose architecture, significant advances in capabilities caught both the AI community and the wider world by surprise, with implications for research, industry dynamics, and geopolitics.

Last year’s State of AI Report outlined the rise of decentralization in AI research, but OpenAI’s GPT-4 stunned observers as big tech returned with a vengeance. Amid the scramble for ever more compute power, challengers have found themselves increasingly reliant on big tech’s war chests. At the same time, the open source community continues to thrive, as the number of releases continues to rocket.

It has also led to the drawing of new fault lines, with traditional community norms around openness under pressure from both commercial imperatives and safety fears.

We’ve seen technical reports on state-of-the-art LLMs published that contain no useful information for AI researchers, while some labs have simply stopped producing them at all. One of the co-founders of OpenAI went as far as describing their original open source philosophy as “flat out … wrong”. In contrast, Meta AI has emerged as the champion of open(ish) AI, with their LLaMA model family acting as the most powerful publicly accessible alternative…for now.

The discussion around openness is taking place against the backdrop of an impassioned debate about how we navigate governance and (existential) risk. As we forecast in last year’s report, safety has shed its status as the unloved cousin of the AI research world and taken center stage for the first time. As a result, governments and regulators around the world are beginning to sit up and take notice. This has been all the more challenging as many of the mooted models of global governance require long-standing geopolitical rivals, currently locked in the chip wars, to cooperate. Indeed, State of AI Report co-author Ian Hogarth has been seconded to chair the UK Government’s Frontier AI Taskforce and has therefore stepped back from writing this year.

However, this is the State of AI, not the state of LLMs, and the report dives into progress in other areas of the field - from breakthroughs in navigation and weather prediction through to self-driving cars and music generation. This has been one of the most exciting years to produce this report and we believe it will have something for everyone - from AI research through to politics.

Key takeaways:
  1. GPT-4 is the master of all it surveys (for now), beating every other LLM on both classic benchmarks and exams designed to evaluate humans, validating the power of proprietary architectures and reinforcement learning from human feedback.
  2. Efforts are growing to try to clone or surpass proprietary performance, through smaller models, better datasets, and longer context. These could gain new urgency, amid concerns that human-generated data may only be able to sustain AI scaling trends for a few more years.
  3. LLMs and diffusion models continue to drive real-world breakthroughs, especially in the life sciences, with meaningful steps forward in both molecular biology and drug discovery.
  4. Compute is the new oil, with NVIDIA printing record earnings and startups wielding their GPUs as a competitive edge. As the US tightens its trade restrictions on China and mobilizes its allies in the chip wars, NVIDIA, Intel, and AMD have started to sell export-control-proof chips at scale.
  5. GenAI saves the VC world: amid a slump in tech valuations, AI startups focused on generative AI applications (including video, text, and coding) raised over $18 billion from VC and corporate investors.
  6. The safety debate has exploded into the mainstream, prompting action from governments and regulators around the world. However, this flurry of activity conceals profound divisions within the AI community and a lack of concrete progress towards global governance, as governments around the world pursue conflicting approaches.
  7. Challenges mount in evaluating state-of-the-art models, as standard LLMs often struggle with robustness. Considering the stakes, a “vibes-based” approach isn’t good enough.

The report is a team effort and we’re incredibly grateful to Othmane Sebbouh, Corina Gurau, and Alex Chalmers from Air Street Capital without whom the report wouldn’t have been possible this year. Thank you to our reviewers who kept us honest and to the AI community who continue to create the breakthroughs that power this report.

We write this report to compile the most interesting things we’ve seen, with the aim of provoking an informed conversation about the state of AI. So, we would love to hear any thoughts on the report, your take on our predictions, or any contribution suggestions for next year’s edition.

Enjoy reading!

Nathan and the Air Street Capital team



This work is licensed under a Creative Commons Attribution 4.0 International License.