Multi-Agent Security
What It Is and Why It Matters

In this area, we would like to see proposals for work on the security challenges that arise as advanced, interacting AI systems become more widespread throughout society. These challenges will often overlap with our other research areas.

Specific Work We Would Like to Fund
  • Identifying security vulnerabilities that advanced multi-agent systems have but single-agent systems do not, and developing defence strategies against them, such as improvements in network design, communication protocol design, and information security.
  • Exploring how combinations of multiple AI systems can overcome existing safeguards for individual systems (and how they can be prevented from doing so).
  • Better understanding, across different settings, how robust cooperation is to adversarial attacks (for example, the injection of a small number of malicious agents, or the corruption of key data).
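The robustness question in the last bullet can be made concrete with a toy model: suppose honest agents are conditional cooperators who keep cooperating only while enough of the population cooperated in the previous round, while injected malicious agents always defect. A small injection may then be tolerated, while a larger one tips the system into collapse. This is an illustrative sketch only; the agent types, the 50% threshold, and the function names are assumptions for the example, not findings.

```python
# Toy model of cooperation robustness under injection of malicious
# agents. All parameters here (conditional-cooperator behaviour,
# the 0.5 threshold) are illustrative assumptions.

def simulate(n_agents, n_malicious, n_rounds, threshold=0.5):
    """Return the fraction of agents cooperating after n_rounds.

    Honest agents cooperate while at least `threshold` of the
    population cooperated in the previous round; malicious agents
    never cooperate.
    """
    honest = n_agents - n_malicious
    cooperating = honest  # honest agents start out cooperating
    for _ in range(n_rounds):
        frac = cooperating / n_agents
        # Each honest agent cooperates iff last round's cooperation
        # rate met the threshold; malicious agents always defect.
        cooperating = honest if frac >= threshold else 0
    return cooperating / n_agents

# With 100 agents, cooperation survives 10 injected defectors...
print(simulate(100, 10, 20))  # → 0.9
# ...but collapses once defectors push the rate below the threshold.
print(simulate(100, 60, 20))  # → 0.0
```

Even this crude model exhibits the qualitative phenomenon of interest: robustness is not graded but has a tipping point, which is one reason empirical work on where such thresholds lie in realistic multi-agent systems would be valuable.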
Key Considerations
  • We prioritise funding work that we believe is unlikely to happen without our support. This means that projects with a significant commercial value, or which could otherwise be expected to be developed by the private sector, are unlikely to be funded by us.
Priority Research Areas
Understanding and Evaluating Cooperation-Relevant Propensities (High Priority)
Understanding and Evaluating Cooperation-Relevant Capabilities (High Priority)
Incentivizing Cooperation Among AI Agents
AI for Facilitating Human Cooperation
Monitoring and Controlling Dynamic Networks of Agents and Emergent Properties
Information Asymmetries and Transparency