New Report: Multi-Agent Risks from Advanced AI

The development and widespread deployment of advanced AI agents will give rise to multi-agent systems of unprecedented complexity. A new report from staff at the Cooperative AI Foundation and a host of leading researchers explores the novel and under-appreciated risks these systems pose.

Powerful AI systems are increasingly being deployed with the ability to autonomously interact with the world and adapt their behaviour accordingly. This is a profound change from the more passive, static AI services with which most of us are familiar, such as chatbots and image generation tools. Indeed, while still relatively rare, groups of AI agents are already responsible for tasks that range from trading million-dollar assets to recommending actions to commanders in battle.

In the coming years, the competitive advantages offered by autonomous, adaptive agents will drive their adoption, both in high-stakes domains and as intelligent personal assistants capable of being delegated increasingly complex and important tasks. In order to fulfil their roles, these advanced agents will need to communicate and interact with each other and with people, giving rise to new multi-agent systems of unprecedented complexity.

While offering opportunities for scalable automation and more broadly distributed benefits to society, these systems also present novel risks that are distinct from those posed by single agents or by less advanced AI technologies (which are the focus of most research and policy discussions). In response to this challenge, staff at the Cooperative AI Foundation have published a new report, co-authored with leading researchers from academia and industry.

Multi-Agent Risks from Advanced AI offers a crucial first step towards addressing this challenge by providing a taxonomy of risks. It identifies three primary failure modes: miscoordination (failure to cooperate despite shared goals), conflict (failure to cooperate due to differing goals), and collusion (undesirable cooperation in contexts such as markets); a toy illustration of these failure modes follows the list below. The report also explains how these failures – among others – can arise via seven key risk factors:

  • Information asymmetries: Private information leading to miscoordination, deception, and conflict;
  • Network effects: Small changes in network structure or properties causing dramatic shifts in system behaviour;
  • Selection pressures: Competition, iterative deployment, and continual learning favouring undesirable behaviours;
  • Destabilising dynamics: Agents adapting in response to one another creating dangerous feedback loops and unpredictability;
  • Commitment and trust: Difficulties in establishing trust preventing mutual gains, or commitments being used for malicious purposes;
  • Emergent agency: Qualitatively new goals or capabilities arising from collections of agents;
  • Multi-agent security: New security vulnerabilities and attacks arising that are specific to multi-agent systems.
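
To make the three failure modes concrete, the sketch below (our own toy illustration in Python, not code or analysis from the report) casts each one as a classic two-player matrix game:

# Toy illustration only: the three failure modes as classic two-player
# matrix games. Payoffs are written as (row player, column player).

# Miscoordination: both agents want to match (e.g. drive on the same side
# of the road), but with two equally good conventions they may still fail
# to pick the same one.
coordination = {("L", "L"): (1, 1), ("L", "R"): (0, 0),
                ("R", "L"): (0, 0), ("R", "R"): (1, 1)}

# Conflict: a Prisoner's Dilemma. Mutual cooperation (C, C) beats mutual
# defection (D, D), yet defecting is each agent's dominant strategy.
dilemma = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# Collusion: the same dilemma structure, but "cooperating" now means two
# pricing agents keeping prices High at consumers' expense, so sustained
# cooperation is itself the undesirable outcome.
pricing = {("High", "High"): (3, 3), ("High", "Low"): (0, 5),
           ("Low", "High"): (5, 0), ("Low", "Low"): (1, 1)}

def dominant_strategy(game, actions):
    """Return the row player's dominant strategy, if one exists."""
    for a in actions:
        if all(game[(a, c)][0] >= game[(b, c)][0]
               for b in actions if b != a for c in actions):
            return a
    return None

print(dominant_strategy(coordination, ["L", "R"]))  # None: an equilibrium-selection problem
print(dominant_strategy(dilemma, ["C", "D"]))       # 'D': individually rational conflict

Note that in the collusion game the agents are "succeeding" on their own terms; the failure is judged from the perspective of third parties, such as consumers or regulators.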

Though the majority of these dynamics have not yet emerged, we are entering a world in which large numbers of increasingly advanced AI agents, interacting with (and adapting to) each other, will soon become the norm. We therefore urgently need to evaluate (and prepare to mitigate) these risks. In order to do so, the report presents several promising directions that can be pursued now:

  • Evaluation: Today's AI systems are developed and tested in isolation, despite the fact that they will soon interact with each other. In order to understand how likely and severe multi-agent risks are, we need new methods of detecting how and when they might arise (see the illustrative sketch after this list).
  • Mitigation: Evaluation is only the first step towards mitigating multi-agent risks, which will require new technical advances. While our understanding of these risks is still growing, there is a range of promising directions (detailed further in the report) that we can begin to explore now.
  • Collaboration: Multi-agent risks inherently involve many different actors and stakeholders, often in complex, dynamic environments. Greater progress can be made on these interdisciplinary problems by leveraging insights from other fields.
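
As a gesture at what such interaction-based evaluation might look like, the toy sketch below (again our own illustration, not the report's methodology) evaluates two hand-coded policies against each other in a repeated Prisoner's Dilemma:

def always_defect(opponent_history):
    # A policy that never cooperates, regardless of the opponent.
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def evaluate_pair(agent_a, agent_b, rounds=100):
    """Play a repeated Prisoner's Dilemma; return average payoffs."""
    moves_a, moves_b = [], []
    score_a = score_b = 0.0
    for _ in range(rounds):
        a, b = agent_a(moves_b), agent_b(moves_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a / rounds, score_b / rounds

# The same agent can look very different depending on its counterpart:
print(evaluate_pair(tit_for_tat, tit_for_tat))    # (3.0, 3.0): cooperation sustained
print(evaluate_pair(tit_for_tat, always_defect))  # (0.99, 1.04): cooperation collapses

Scaling this idea to modern AI agents is of course far harder, but the underlying point carries over: multi-agent behaviour is a property of the pairing (or population), not of any agent alone.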

The report concludes by examining the implications of these risks for existing work in AI safety, governance, and ethics. It shows the need to extend AI safety research beyond single systems to include multi-agent dynamics. It also emphasises the potential of multi-stakeholder governance approaches to mitigate these risks, while acknowledging the novel ethical dilemmas around fairness, collective responsibility, and more that arise in multi-agent contexts.

In doing so, the report aims to provide a foundation for further research, as well as a basis for policymakers seeking to navigate the complex landscape of risks posed by increasingly widespread and sophisticated multi-agent systems. If you are working on the safety, governance, or ethics of AI, and are interested in further exploring the topic of multi-agent risks, please feel free to get in touch or sign up for the Cooperative AI newsletter.

Suggested Citation

“Hammond et al. (2025). Multi-Agent Risks from Advanced AI. Cooperative AI Foundation, Technical Report #1.”

BibTeX Entry
@TechReport{CAIF_1,
  author       = {Lewis Hammond and Alan Chan and Jesse Clifton and Jason Hoelscher-Obermaier and Akbir Khan and Euan McLean and Chandler Smith and Wolfram Barfuss and Jakob Foerster and Tomáš Gavenčiak and The Anh Han and Edward Hughes and Vojtěch Kovařík and Jan Kulveit and Joel Z. Leibo and Caspar Oesterheld and Christian Schroeder de Witt and Nisarg Shah and Michael Wellman and Paolo Bova and Theodor Cimpeanu and Carson Ezell and Quentin Feuillade-Montixi and Matija Franklin and Esben Kran and Igor Krawczuk and Max Lamparth and Niklas Lauffer and Alexander Meinke and Sumeet Motwani and Anka Reuel and Vincent Conitzer and Michael Dennis and Iason Gabriel and Adam Gleave and Gillian Hadfield and Nika Haghtalab and Atoosa Kasirzadeh and Sébastien Krier and Kate Larson and Joel Lehman and David C. Parkes and Georgios Piliouras and Iyad Rahwan},
  institution  = {Cooperative AI Foundation},
  title        = {Multi-Agent Risks from Advanced AI},
  year         = {2025},
  number       = {1},
}

February 20, 2025

Lewis Hammond
Research Director