The Cooperative AI Foundation's 2024 Strategy

The Cooperative AI Foundation's 2024 strategy outlines our priorities and plans for the year. We welcome feedback from the community and opportunities for collaboration.

CAIF’s strategy sets out how we aim to deliver our mission of supporting research and practice that will improve the cooperative intelligence of advanced AI for the benefit of all. We welcome comments on our approach, which we expect to evolve quickly in response to a rapidly changing AI context. Managing uncertainty is at the heart of any strategy, and this is even more true for a strategy to help build a field as new as cooperative AI. At this early stage we need to invest in a broad portfolio of activities to learn what works well, and then adjust the balance of our work to make bolder bets on promising solutions.

Building this field involves substantial direct support for research. Our theory of change also emphasises the need to overcome the limited attention given to cooperative AI in academia, in the companies developing transformational AI systems, and in the governments that will oversee the deployment of AI systems across society. Why do we think this is so important?

Everything humankind has achieved has been built through cooperation. What sets us apart from other species is not just intelligence, but cooperative intelligence. Our agriculture, our medicine, our infrastructure, our social institutions, and our digital technologies all arise through cooperation, and take cooperation to new levels. At the same time, many of the world’s most important problems, such as war and climate change, are rooted in cooperation failure.

Cooperative AI is the study and development of cooperative intelligence in advanced AI systems. People are already beginning to work with AI to accomplish goals that were formerly the work of human teams. Yet systematic work on cooperation between AI agents, and between AI and people, is still in its infancy.

We are seeing the early development of a global ecosystem embedding many different kinds of AI tools within critical human systems. These will increasingly involve AI agents taking actions in the real world. Yet only a tiny fraction of the world’s AI research capacity has been applied to understanding, stress-testing, and improving interactions between AI systems. Multi-agent interactions can give rise to failure modes that are qualitatively different from other AI safety considerations: even systems that are perfectly safe on their own may contribute to harm through their interaction with others.

At the same time, cooperative AI offers an opportunity to enable new kinds of cooperation, mediated by increasingly ubiquitous AI assistants and services. Such systems could help us better manage the vast amounts of information, the plurality of values and preferences, and the many uncertain consequences that often plague our attempts to coordinate. In so doing, AI might eventually play a role in improving our institutions and, more directly, in overcoming many of the other most critical coordination challenges of our time, ranging from international conflict to climate change.

There are important differences between machines and humans that affect their ability to work well with others. CAIF and our grantees study these differences, seeking to understand and control cooperation in the context of advanced AI. (We say “control” rather than “encourage” because sometimes cooperation between AI systems might be undesirable.) We aim to bring interdisciplinary understanding of cooperation into the design of relationships among AI agents, and between AI and people.

CAIF believes this is one of the most important and neglected issues of our time. We are eager to join our efforts with those of others working on related issues of cooperation: we look forward to hearing from you.

December 6, 2024

David Norman
Managing Director
Lewis Hammond
Research Director