In this area, we would like to see proposals for work on the security challenges that arise as advanced, interacting AI systems become more widespread throughout society. These challenges will often overlap with our other research areas.
Specific Work We Would Like to Fund
Assessing which security vulnerabilities advanced multi-agent systems have that single-agent systems do not, and developing defence strategies against these vulnerabilities, such as improvements in network design, communication protocol design, or information security.
Exploring how combinations of multiple AI systems can overcome existing safeguards for individual systems (and how they can be prevented from doing so).
Better understanding how robust cooperation is, in different settings, to adversarial attacks such as the injection of a small number of malicious agents or the corruption of key data.
Key Considerations
We prioritise funding work that we believe is unlikely to happen without our support. This means that projects with significant commercial value, or which could otherwise be expected to be developed by the private sector, are unlikely to be funded by us.