For much of the last year, it’s felt like Large Language Models (LLMs) have been the only game in town. While the State of AI Report predicted that transformers were emerging as a general-purpose system back in 2021, significant advances in capabilities caught both the AI community and the wider world by surprise, with implications for research, industry dynamics, and geopolitics.
Last year’s State of AI Report outlined the rise of decentralization in AI research, but OpenAI’s GPT-4 stunned observers as big tech returned with a vengeance. Amid the scramble for ever more compute power, challengers have found themselves increasingly reliant on big tech’s war chests. At the same time, the open source community continues to thrive, as the number of releases continues to rocket.
It has also led to the drawing of new fault lines, with traditional community norms around openness under pressure from both commercial imperatives and safety fears.
We’ve seen technical reports on state-of-the-art LLMs published that contain no useful information for AI researchers, while some labs have simply stopped producing them at all. One of the co-founders of OpenAI went as far as describing their original open source philosophy as “flat out … wrong”. In contrast, Meta AI has emerged as the champion of open(ish) AI, with their LLaMa model family acting as the most powerful publicly accessible alternative…for now.
The discussion around openness is taking place against the backdrop of an impassioned debate about how we navigate governance and (existential) risk. As we forecast in last year’s report, safety has shed its status as the unloved cousin of the AI research world and taken center stage for the first time. As a result, governments and regulators around the world are beginning to sit up and take notice. This has been all the more challenging as many of the mooted models of global governance require long-standing geopolitical rivals, currently locked in the chip wars, to cooperate. Indeed, State of AI Report co-author Ian Hogarth has been seconded to chair the UK Government’s Frontier AI Taskforce, and has therefore stepped back from writing this year.
However, this is the State of AI, not the state of LLMs, and the report dives into progress in other areas of the field - from breakthroughs in navigation and weather prediction through to self-driving cars and music generation. This has been one of the most exciting years in which to produce this report, and we believe it will have something for everyone - from AI research through to politics.
The report is a team effort, and we’re incredibly grateful to Othmane Sebbouh, Corina Gurau, and Alex Chalmers from Air Street Capital, without whom the report wouldn’t have been possible this year. Thank you to our reviewers who kept us honest, and to the AI community who continue to create the breakthroughs that power this report.
We write this report to compile the most interesting things we’ve seen, with the aim of provoking an informed conversation about the state of AI. So, we would love to hear any thoughts on the report, your take on our predictions, or any contribution suggestions for next year’s edition.
Nathan and the Air Street Capital team
Authored on the interwebs by: