Cooperative AI Workshops

NeurIPS 2020

Aims and Focus
The first Cooperative AI workshop aims to bring together scholars from diverse backgrounds to discuss how AI research can contribute to the study of cooperation.

Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at all scales ranging from our daily routines—such as highway driving, communication via shared language, division of labor, and work collaborations—to our global challenges—such as disarmament, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate, in our social intelligence and skills. Since machines powered by artificial intelligence and machine learning are playing an ever greater role in our lives, it will be important to equip them with the skills necessary to cooperate and to foster cooperation.
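
To make this class of problems concrete, consider the two-player prisoner's dilemma, the canonical example of a cooperation failure. The short Python sketch below (payoff values are illustrative assumptions, not taken from any workshop material) enumerates the joint actions, checks each for Nash stability, and shows that the only stable outcome is mutual defection, even though mutual cooperation gives both players strictly more.

from itertools import product

# Payoffs for the row and column player under each joint action.
# Actions: "C" = cooperate, "D" = defect. Values are illustrative,
# following the standard prisoner's dilemma ordering T > R > P > S.
payoffs = {
    ("C", "C"): (3, 3),  # mutual cooperation (reward R)
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (punishment P)
}

def is_nash(row, col):
    # A joint action is a Nash equilibrium if neither player can gain
    # by unilaterally switching their own action.
    r, c = payoffs[(row, col)]
    row_stable = all(payoffs[(alt, col)][0] <= r for alt in "CD")
    col_stable = all(payoffs[(row, alt)][1] <= c for alt in "CD")
    return row_stable and col_stable

for row, col in product("CD", repeat=2):
    welfare = sum(payoffs[(row, col)])
    print(f"({row},{col}) payoffs={payoffs[(row, col)]} welfare={welfare} Nash={is_nash(row, col)}")
# Only (D,D) is Nash, yet (C,C) gives both players more:
# individually rational play forgoes the jointly better outcome.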

We see an opportunity for the field of AI, and particularly machine learning, to explicitly focus effort on this class of problems, which we term Cooperative AI. The goal of this research would be to study the many aspects of the problem of cooperation and to innovate in AI in ways that contribute to solving these problems. Central questions include how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design and mediation.

Such research could be organized around key capabilities necessary for cooperation, including: understanding other agents, communicating with other agents, constructing cooperative commitments, and devising and negotiating suitable bargains and institutions. In the context of machine learning, it will be important to develop training environments, tasks, and domains in which cooperative skills are crucial to success, learnable, and non-trivial. Work on the fundamental question of cooperation is by necessity interdisciplinary and will draw on a range of fields, including reinforcement learning (and inverse RL), multi-agent systems, game theory, mechanism design, social choice, language learning, and interpretability. This research may even touch upon fields like trusted hardware design and cryptography to address problems in commitment and communication.
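
As a hedged sketch of what such a training environment might look like, the following toy example (all payoffs and hyperparameters are assumptions made for illustration) implements a repeated two-player stag hunt in which two independent learners update action-value estimates from experience. Hunting stag together is the payoff-dominant outcome, but hunting hare is the safe, risk-dominant choice, so cooperative skill here is learnable yet non-trivial: whether the agents converge on the cooperative equilibrium depends on their exploration.

import random

# Actions: 0 = hunt stag (cooperate), 1 = hunt hare (safe, selfish).
# Payoffs are illustrative: (stag, stag) is best jointly, but a lone
# stag hunter gets nothing, so stag hunting is risky.
PAYOFF = {
    (0, 0): (4, 4),
    (0, 1): (0, 3),
    (1, 0): (3, 0),
    (1, 1): (3, 3),
}

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    # Two independent epsilon-greedy learners, each tracking a value
    # estimate per action for this stateless repeated game.
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]  # q[player][action]
    for _ in range(episodes):
        acts = []
        for p in range(2):
            if rng.random() < epsilon:
                acts.append(rng.randrange(2))                 # explore
            else:
                acts.append(0 if q[p][0] >= q[p][1] else 1)   # exploit
        rewards = PAYOFF[tuple(acts)]
        for p in range(2):
            # Move each chosen action's value toward the observed reward.
            q[p][acts[p]] += alpha * (rewards[p] - q[p][acts[p]])
    return q

print(train())  # whether (stag, stag) is learned depends on exploration:
                # the cooperative outcome is payoff-dominant but risky.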

Since artificial agents will often act on behalf of particular humans and in ways that are consequential for humans, this research will need to consider how machines can adequately learn human preferences, and how best to integrate human norms and ethics into cooperative arrangements. Research should also study the potential downsides of cooperative skills, such as exclusion, collusion, and coercion, and how to channel cooperative skills so as to most improve human welfare. Overall, this research would connect machine learning to the broader scientific enterprise, across the natural and social sciences, that studies the problem of cooperation, and to the broader societal effort to solve coordination problems.

Key Dates

October 2: Paper Submission Deadline
October 30: Final Decisions
December 12: Workshop

Papers

Best Papers

Benefits of Assistance over Reward Learning
Rohin Shah, Pedro Freire, Neel Alex, Rachel Freedman, Dmitrii Krasheninnikov, Lawrence Chan, Michael Dennis, Pieter Abbeel, Anca Dragan and Stuart Russell
Learning Social Learning
Kamal Ndousse, Douglas Eck, Sergey Levine and Natasha Jaques
Too many cooks: Bayesian inference for coordinating multi-agent collaboration
Rose Wang, Sarah Wu, James Evans, Joshua Tenenbaum, David Parkes and Max Kleiman-Weiner
Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration
Xavier Puig, Tianmin Shu, Shuang Li, Zilin Wang, Josh Tenenbaum, Sanja Fidler and Antonio Torralba

Accepted Papers


Quantifying Differences in Reward Functions

Adam Gleave, Michael Dennis, Shane Legg, Stuart Russell and Jan Leike

Faster Algorithms for Optimal Ex-Ante Coordinated Collusive Strategies in Extensive-Form Zero-Sum Games

Gabriele Farina, Andrea Celli, Nicola Gatti and Tuomas Sandholm

No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium

Andrea Celli, Alberto Marchesi, Gabriele Farina and Nicola Gatti

Multi-Agent Coordination through Signal Mediated Strategies

Federico Cacciamani, Andrea Celli, Marco Ciccone and Nicola Gatti

D3C: Reducing the Price of Anarchy in Multi-Agent Learning

Ian Gemp, Kevin McKee, Richard Everett, Edgar Duenez-Guzman, Yoram Bachrach, David Balduzzi and Andrea Tacchetti

Human-Agent Cooperation in Bridge Bidding

Edward Lockhart, Tom Eccles, Nolan Bard, Neil Burch and Sebastian Borgeaud

Newton Optimization on Helmholtz Decomposition for Continuous Games

Giorgia Ramponi and Marcello Restelli

Learning to Design Fair and Private Voting Rules

Farhad Mohsin, Ao Liu, Pin-Yu Chen, Francesca Rossi and Lirong Xia

Competing AI: How competition feedback affects machine learning

Antonio Ginart, Eva Zhang, Yongchan Kwon and James Zou

A Bayesian Account of Measures of Interpretability in Human-AI Interaction

Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, David Smith and Subbarao Kambhampati

Delegation to autonomous agents promotes cooperation in collective-risk dilemmas

Elias Fernández Domingos, Inês Terrucha, Rémi Suchon, Jelena Grujić, Juan Carlos Burguillo, Francisco C. Santos and Tom Lenaerts

DERAIL: Diagnostic Environments for Reward And Imitation Learning

Pedro Freire, Adam Gleave, Sam Toyer and Stuart Russell

Expected Divergence Point of Plans in Ad Hoc Teamwork

William Macke, Reuth Mirsky and Peter Stone

Human-Level Performance in No-Press Diplomacy via Equilibrium Search

Jonathan Gray, Adam Lerer, Anton Bakhtin and Noam Brown

Learning Cooperative Solution Concepts From Voting Behavior: A Case Study on the Israeli Knesset

Omer Lev, Wei Lu, Alan Tsang and Yair Zick

Learning Robust Helpful Behaviors in Two-Player Cooperative Atari Environments

Paul Tylkin, Goran Radanovic and David Parkes

Modeling collaborative work in human computation

David Lee

Multi-Principal Assistance Games: Definition and Collegial Mechanisms

Arnaud Fickinger, Simon Zhuang, Andrew Critch, Dylan Hadfield-Menell and Stuart Russell

Polynomial-Time Computation of Optimal Correlated Equilibria in Two-Player Extensive-Form Games with Public Chance Moves and Beyond

Gabriele Farina and Tuomas Sandholm

Safe Pareto improvements for delegated game playing

Caspar Oesterheld and Vincent Conitzer

The impacts of known and unknown demonstrator irrationality on reward inference

Lawrence Chan, Andrew Critch and Anca Dragan

Why didn't you allocate this task to them? Negotiation-aware Task Allocation and Contrastive Explanation Generation

Zahra Zahedi, Sailik Sengupta and Subbarao Kambhampati

Call for Papers

We invite high-quality paper submissions on the following topics (broadly construed; this is not an exhaustive list):

Multi-agent learning
Agent cooperation
Agent communication
Resolving commitment problems
Agent societies, organizations, and institutions
Trust and reputation
Theory of mind and peer modelling
Markets, mechanism design, and economics-based cooperation
Negotiation and bargaining agents
Team formation problems

Accepted papers will be presented during joint virtual poster sessions and be made publicly available as non-archival reports, allowing future submissions to archival conferences or journals.

Submissions should be up to eight pages excluding references, acknowledgements, and supplementary material, and should follow the NeurIPS format. The review process will be double-blind to avoid potential conflicts of interest.

Keynote Speakers

James Fearon
Professor, Stanford University

James D. Fearon is Theodore and Frances Geballe Professor in the School of Humanities and Sciences and Professor of Political Science at Stanford University. He has produced multiple field-changing works on international and domestic cooperation and conflict. A prominent survey of international relations scholars ranked him among the top 10 scholars who have had the greatest influence on the field of International Relations in the past 20 years.

Gillian Hadfield
Director, Schwartz Reisman Institute for Technology and Society
Professor, University of Toronto

Gillian Hadfield is the inaugural Schwartz Reisman Chair in Technology and Society, Professor of Law and Professor of Strategic Management at the University of Toronto, and holds a CIFAR AI Chair at the Vector Institute for Artificial Intelligence. She is a Schmidt Sciences AI2050 Senior Fellow. She was the inaugural Director of the Schwartz Reisman Institute for Technology and Society from 2019 through 2023. Her research focuses on the study of human and machine normative systems; safety and governance for artificial intelligence (AI); and innovative design for legal and dispute resolution systems in advanced and developing market economies. She has also long studied the markets for law, lawyers, and dispute resolution, as well as contract law and theory. She teaches Contracts and Governance of AI.

William Isaac
Research Scientist, DeepMind

William Isaac is a Research Scientist on DeepMind's Ethics and Society Team, with a particular interest in ethical cooperation among both humans and agents. Prior to DeepMind, William served as an Open Society Foundations Fellow and Research Advisor for the Human Rights Data Analysis Group, focusing on bias and fairness in machine learning systems. His prior research, centered on deployments of machine learning in the US criminal justice system, has been featured in publications such as Science, the New York Times, and the Wall Street Journal.

Sarit Kraus
Professor, Bar-Ilan University

Sarit Kraus is a Professor of Computer Science at Bar-Ilan University. Her research focuses on intelligent agents and multi-agent systems (including people and robots). She has received the IJCAI Computers and Thought Award, the ACM SIGART Agents Research Award, and the EMET Prize, was named ACM Athena Lecturer, and twice won the IFAAMAS Influential Paper Award. She is a Fellow of AAAI, ECCAI, and ACM, and a recipient of an ERC Advanced Grant.

Peter Stone
Professor, UT Austin
Executive Director, Sony AI America

Peter Stone is the founder and director of the Learning Agents Research Group (LARG) within the Artificial Intelligence Laboratory in the Department of Computer Science at The University of Texas at Austin. He is also Executive Director of Sony AI America and President of the International RoboCup Federation. Prof. Stone is interested in understanding how we can best create complete intelligent agents based on adaptation, interaction, and embodiment, with research in machine learning, multiagent systems, and robotics.

Cooperative AI Panel

Kate Larson
Professor, University of Waterloo

Kate Larson is a professor in the Cheriton School of Computer Science at the University of Waterloo and a Research Scientist at DeepMind. She is affiliated with the AI group and currently holds a University Research Chair and the Pasupalak AI Fellowship. Kate is interested in issues that arise in settings where self-interested agents interact, whether those agents are AI agents, humans, or a combination. In particular, she is interested in understanding how computational limitations influence strategic behavior in multiagent systems, as well as in developing approaches to the computational issues that arise in practical applications of multiagent systems.

Natasha Jaques
Assistant Professor, University of Washington
Senior Research Scientist, Google DeepMind

Natasha Jaques is an Assistant Professor at the University of Washington and a Senior Research Scientist at Google DeepMind. Her research focuses on Social Reinforcement Learning in multi-agent and human-AI interactions. Natasha completed her PhD at the MIT Media Lab, where her thesis received the Outstanding PhD Dissertation Award from the Association for the Advancement of Affective Computing, and she completed a postdoc at UC Berkeley. Her work has received a Best Demo award at NeurIPS, an honourable mention for Best Paper at ICML, a Best of Collection award in the IEEE Transactions on Affective Computing, and several best paper awards at NeurIPS and AAAI workshops. She has interned at DeepMind and Google Brain, and was an OpenAI Scholars mentor. Her work has been featured in Science Magazine, MIT Technology Review, Quartz, IEEE Spectrum, Boston Magazine, and on CBC Radio. Natasha earned her Master's degree from the University of British Columbia and undergraduate degrees in Computer Science and Psychology from the University of Regina.

Jeffrey S. Rosenschein
Professor, Hebrew University of Jerusalem

Jeffrey S. Rosenschein is Director of the Multiagent Systems Research Group at The Hebrew University of Jerusalem, which made its mark in early work on game theory and mechanism design as applied to multiagent negotiation and planning. That research explored issues of cooperation and competition among agents, and the use of economic theory, voting theory, and game theory to establish appropriate foundations for Multiagent Systems (MAS). More recent work has touched on a variety of additional AI research areas, including computational social choice, search, planning, multiagent learning, reputation systems, and dynamic control. He is a Fellow of the Association for Computing Machinery, the Association for the Advancement of Artificial Intelligence, and the European Association for Artificial Intelligence, and a recipient of the ACM/SIGART Autonomous Agents Research Award.

Mike Wooldridge
Professor, University of Oxford

Mike Wooldridge is Head of Department and Professor of Computer Science in the Department of Computer Science at the University of Oxford, and a Senior Research Fellow at Hertford College. Mike joined Oxford on 1 June 2012; before that he was for twelve years a Professor of Computer Science at the University of Liverpool. His main research interests are in the use of formal techniques of one kind or another for reasoning about multiagent systems; he is particularly interested in the computational aspects of rational action in systems composed of multiple self-interested computational agents. His current research sits at the intersection of logic, computational complexity, and game theory.

Schedule

Programme Details

The workshop will feature invited talks by researchers from diverse disciplines and backgrounds, ranging from AI and machine learning to political science, economics, and law. We will host a virtual poster session for work submitted to the workshop; poster sessions will take place in GatherTown, where participants can join the conversation and chat with the authors. We have allotted a slot for Spotlight talks, mostly dedicated to junior researchers. We also plan a panel discussion on the main open questions in Cooperative AI, to stimulate future research in this space. We hope that bringing together speakers from diverse fields and viewpoints will result in useful discussions and interactions, leading to novel ideas.

All sessions will be pre-recorded and available to view in advance, except for the Q&A, Poster Sessions, and Closing Remarks, which will take place live.

For the live Q&A sessions, participants should submit questions via Sli.do; links are available in the schedule below and on the NeurIPS workshop page.

All times are in Eastern Standard Time (EST).


8.20am

Welcome

Yoram Bachrach (DeepMind)

Gillian Hadfield (University of Toronto)

8.30am

Opening Remarks

Allan Dafoe (University of Oxford)

Thore Graepel (DeepMind)

9.00am

Keynote Talk – Ad Hoc Autonomous Agent Teams: Collaboration without Pre-coordination

Peter Stone (UT Austin, Sony AI America)

9.30am

Keynote Talk – The Normative Infrastructure of Cooperation

Gillian Hadfield (University of Toronto)

10.00am

Keynote Talk – Two Kinds of Cooperative AI Challenges: Game Play and Game Design

James Fearon (Stanford University)

10.30am

Keynote Talk – Agent-Human Collaboration and Learning for Improving Human Satisfaction

Sarit Kraus (Bar-Ilan University)

11.00am

Keynote Talk – Can Cooperation make AI (and Society) Fairer?

William Isaac (DeepMind)

11.30am

Live General Q&A – Open Problems in AI

Thore Graepel (DeepMind)

Yoram Bachrach (DeepMind)

Allan Dafoe (University of Oxford)

Natasha Jaques (Google Brain, UC Berkeley)

11.45am

Live Keynote Speaker Q&A

Gillian Hadfield (University of Toronto)

12.00pm

Live Keynote Speaker Q&A

William Isaac (DeepMind)

12.15pm

Live Keynote Speaker Q&A

Peter Stone (UT Austin, Sony AI America)

12.30pm

Live Keynote Speaker Q&A

Sarit Kraus (Bar-Ilan University)

12.45pm

Live Keynote Speaker Q&A

James Fearon (Stanford University)

1.00pm

Live Poster Sessions

Hosted in GatherTown

2.00pm

Panel Discussion

Kate Larson (DeepMind)

Natasha Jaques (Google Brain, UC Berkeley)

Jeffrey S. Rosenschein (Hebrew University of Jerusalem)

2.45pm

Spotlight Talk – Too many cooks: Bayesian inference for coordinating multi-agent collaboration

Authors: Rose Wang, Sarah Wu, James Evans, Joshua Tenenbaum, David Parkes and Max Kleiman-Weiner

3.00pm

Spotlight Talk – Learning Social Learning

Authors: Kamal Ndousse, Douglas Eck, Sergey Levine and Natasha Jaques

3.15pm

Spotlight Talk – Benefits of Assistance over Reward Learning

Authors: Rohin Shah, Pedro Freire, Neel Alex, Rachel Freedman, Dmitrii Krasheninnikov, Lawrence Chan, Michael Dennis, Pieter Abbeel, Anca Dragan and Stuart Russell

3.30pm

Spotlight Talk – Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration

Authors: Xavier Puig, Tianmin Shu, Shuang Li, Zilin Wang, Josh Tenenbaum, Sanja Fidler and Antonio Torralba

3.45pm

Live Closing Remarks

Eric Horvitz (Microsoft)

Registration

Application for Registration Fee Assistance

Sponsorship funds are available to assist participants from underrepresented groups with registration fees.

Please complete this form if you wish to apply.

Organizers

Thore Graepel
Research Lead, DeepMind
Chair of Machine Learning, University College London
Dario Amodei
CEO, Anthropic
Yoram Bachrach
Research Scientist, DeepMind
Vincent Conitzer
Professor, Carnegie Mellon University
Professor, University of Oxford
Allan Dafoe
Senior Staff Research Scientist, DeepMind
President, Centre for the Governance of AI
Gillian Hadfield
Director, Schwartz Reisman Institute for Technology and Society
Professor, University of Toronto
Eric Horvitz
Chief Scientific Officer, Microsoft
Sarit Kraus
Professor, Bar-Ilan University
Kate Larson
Professor, University of Waterloo

Reviewers

The workshop organizers wish to thank the following collaborators for their assistance with reviewing submitted papers.

Adam Gleave

Aditya Mahajan

Alan Tsang

Allan Dafoe

Amos Azaria

Andrea Celli

Ari Weinstein

Aviva Prins

Christopher Summerfield

David Parkes

Douwe Kiela

Edgar Duéñez-Guzmán

Edward Hughes

Gillian Hadfield

Ian Gemp

Ian Kash

Ivana Kajić

Jay Pavagadhi

Joel Leibo

Kalesha Bullard

Karl Tuyls

Kate Larson

Kelvin Xu

Kevin McKee

Kevin Waugh

Kory Mathewson

Laura Weidinger

Long Tran-Thanh

Micah Carroll

Natasha Jaques

Neil Burch

Noam Hazon

Omer Lev

Peter Sunehag

Raphael Koster

Reshef Meir

Richard Everett

Sriram Ganapathi Subramanian

Tal Kachman

Teddy Collins

Thomas Anthony

Thore Graepel

Tom Eccles

Tom McGrath

Travis LaCroix

Tyrone Strangway

Vincent Conitzer

Yair Zick

Yoram Bachrach

Sponsors

Google DeepMind

Artificial intelligence could be one of humanity’s most useful inventions. We research and build safe AI systems that learn how to solve problems and advance scientific discovery for all. We’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority. We’ve always been fascinated by intelligence. It’s what gives us the ability to solve problems, find creative ideas, and make the scientific breakthroughs that built the world we live in today. So many discoveries await us, and so many challenges to our wellbeing and environment are yet to be solved. Like the Hubble telescope that helps us see deeper into space, we aim to build advanced AI – sometimes known as Artificial General Intelligence (AGI) – to expand our knowledge and find new answers. By solving this, we believe we could help people solve thousands of problems.

Visit sponsor website

Knowledge 4 All Foundation

Knowledge 4 All Foundation (K4A) is the only UK machine-learning-focused not-for-profit and an advocate of AI applications for reaching the UN's Sustainable Development Goals (SDGs), especially SDG 4: "Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all". K4A originated from the PASCAL Network, an EU-funded Network of Excellence comprising some 1,000 machine learning, statistics, and optimization researchers, which ran for approximately 10 years. It has helped lead and build the international machine learning community through activities that culminated in today's AI boom, including supporting the NIPS conference and its workshops for 10 years. Its portfolio includes 227 events, 116 projects, and 46 challenges.

Visit sponsor website

The Partnership on AI

The Partnership on AI is the leading forum addressing the most important and difficult decisions on the future of AI. We are a non-profit that invites diverse voices into the process of technical governance, design, and deployment of AI technologies. Our essential Partners work together across industry, academia, and civil society to understand the implications of AI advancements and ensure they benefit society equitably. Through dialogue, insight, education, and guidance, PAI informs responsible AI solutions and identifies opportunities to address humanity’s pressing challenges.

Visit sponsor website

Resources

Video Playlist