Monitoring and Controlling Dynamic Networks of Agents and Emergent Properties
What It Is and Why It Matters

In this area, we would like to see proposals for work that improves our understanding of specific types of multi-agent dynamics involving advanced AI systems. This includes emergent phenomena (behaviours, goals, and capabilities) that are not present in any individual agent or system but arise specifically from their interaction in the multi-agent system. We believe this kind of work will be important for identifying, monitoring, and mitigating new risks that arise as the deployment of advanced, interacting AI systems becomes more widespread throughout society.

Specific Work We Would Like to Fund
  • Work on destabilising dynamics, which could aim to answer questions about the conditions under which multi-agent systems involving AI exhibit undesirable dynamics and how such phenomena can be monitored and stabilised. Such work might examine how the number of agents, their objectives, and features of their environment precipitate these dynamics.
  • Work on preventing correlated failures, which could arise due to similarities and shared vulnerabilities among agents in the multi-agent system. This could include work on how AI agents learning from data generated by one another affects shared vulnerabilities, correlated failure modes, and agents' ability to cooperate or collude (a minimal illustrative sketch of such a correlated failure appears after this list).
  • Work on which network structures and interaction patterns make networks of AI agents more robust or more fragile, and on the development of tools for overseeing and controlling the dynamics and co-adaptation of networks of advanced AI agents. This might include ‘infrastructure for AI agents’, such as interaction protocols.
  • Theoretical and empirical work on establishing the conditions under which unexpected and undesirable goals and capabilities might emerge from multiple AI agents, how robust such phenomena are, and how quickly they can occur. Comparisons across specific scenarios could help to establish conditions under which these emergent phenomena are more likely, such as the degree of competition, complementarity of agents, access to particular resources, or task features.
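To make ideas like correlated failures from shared vulnerabilities and fragile network structures concrete, here is a minimal, purely illustrative toy sketch; it is not a proposed methodology or an endorsed model. It assumes a hypothetical population of agents wired into a random network, where agents built on the same underlying ‘model family’ share a vulnerability; a single exploit then causes a correlated initial failure that can cascade to neighbours that depend on the failed agents. All names, parameters, and numbers (families, thresholds, network size) are invented for illustration.

```python
import random

def build_ring_with_shortcuts(n, shortcuts, rng):
    """Small-world-style toy graph: a ring plus a few random shortcut edges."""
    edges = {i: set() for i in range(n)}
    for i in range(n):
        edges[i].add((i + 1) % n)
        edges[(i + 1) % n].add(i)
    for _ in range(shortcuts):
        a, b = rng.sample(range(n), 2)
        edges[a].add(b)
        edges[b].add(a)
    return edges

def simulate(n_agents=100, n_families=2, shortcuts=30,
             cascade_threshold=0.5, seed=0):
    """Fraction of agents failed after one exploit of a shared vulnerability.

    n_families controls diversity: fewer families means more agents share a
    vulnerability, so the initial correlated failure is larger.
    cascade_threshold: an agent fails once at least this fraction of its
    neighbours has failed (a crude stand-in for dependence/co-adaptation).
    All parameter values are hypothetical and chosen only for illustration.
    """
    rng = random.Random(seed)
    edges = build_ring_with_shortcuts(n_agents, shortcuts, rng)
    family = [rng.randrange(n_families) for _ in range(n_agents)]

    # Step 1: a shared vulnerability in one family is exploited,
    # producing a correlated initial failure.
    exploited = rng.randrange(n_families)
    failed = {i for i in range(n_agents) if family[i] == exploited}

    # Step 2: the failure cascades to a fixed point through agents'
    # dependence on their failed neighbours.
    changed = True
    while changed:
        changed = False
        for i in range(n_agents):
            if i in failed or not edges[i]:
                continue
            failed_fraction = sum(j in failed for j in edges[i]) / len(edges[i])
            if failed_fraction >= cascade_threshold:
                failed.add(i)
                changed = True
    return len(failed) / n_agents

if __name__ == "__main__":
    for families in (2, 5, 20):
        print(f"{families:>2} model families -> "
              f"{simulate(n_families=families):.0%} of agents failed")
```

Running the sketch with fewer model families typically produces a larger correlated failure and a wider cascade. The relationship between agent diversity, network structure, and systemic fragility that this toy model gestures at is the kind of question proposals in this area might study rigorously.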
Key Considerations

We expect it to be challenging to do work in this area that is genuinely informative about advanced AI systems. Proposals should explicitly aim for results that can be expected to generalise to large and complex systems of advanced AI agents, and the phenomena studied should reasonably be expected to be significant for the long-term societal impact of AI systems.
