Cooperative AI Workshops

NeurIPS 2021

Aims and Focus
The second Cooperative AI workshop focuses on how to incentivize cooperation in AI systems, and on how to implement effective coordination given those incentives.

The human ability to cooperate in a wide range of contexts is a key ingredient in the success of our species. Problems of cooperation, in which agents seek ways to jointly improve their welfare, are ubiquitous and important. They can be found at every scale, from the daily routines of driving on highways, communicating in a shared language, and collaborating at work, to the global challenges of climate change, pandemic preparedness, and international trade.

With AI agents playing an ever greater role in our lives, we must endow them with similar abilities. In particular, they must understand the behaviors of others, find common ground by which to communicate with them, make credible commitments, and establish institutions that promote cooperative behavior. By its nature, the goal of Cooperative AI is interdisciplinary. Therefore, our workshop will bring together scholars from diverse backgrounds, including reinforcement learning (and inverse RL), multi-agent systems, human-AI interaction, game theory, mechanism design, social choice, fairness, cognitive science, language learning, and interpretability. Our workshop will include a panel discussion with experts spanning these diverse communities.

This year we will organize the workshop along two axes. The first is how to incentivize cooperation in AI systems: developing algorithms that act effectively in general-sum settings and that encourage others to cooperate. Such systems are crucial for preventing disastrous outcomes (e.g. in traffic) and for achieving joint gains in interactions with other agents, human or machine (e.g. in bargaining problems). In the long run, such systems may also provide improved incentive-design mechanisms that help humans avoid unfavorable equilibria in real-world settings.
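
To make the incentive problem concrete, here is a minimal illustrative sketch (our own toy example, not drawn from the workshop materials): the Stag Hunt, a classic general-sum game with two pure equilibria, one cooperative and payoff-dominant, the other safe but jointly worse.

import itertools

# Payoff table for the Stag Hunt: entry (a1, a2) -> (reward to 1, reward to 2).
# Action 0 = hunt stag (cooperate), action 1 = hunt hare (play it safe).
PAYOFFS = {
    (0, 0): (4, 4),  # both cooperate: the best joint outcome
    (0, 1): (0, 3),  # a lone stag hunter gets nothing
    (1, 0): (3, 0),
    (1, 1): (3, 3),  # both play it safe: stable but jointly worse
}

def is_pure_nash(a1, a2):
    # A profile is a pure Nash equilibrium if neither player can gain
    # by unilaterally switching actions.
    r1, r2 = PAYOFFS[(a1, a2)]
    return (all(PAYOFFS[(d, a2)][0] <= r1 for d in (0, 1))
            and all(PAYOFFS[(a1, d)][1] <= r2 for d in (0, 1)))

for profile in itertools.product((0, 1), repeat=2):
    if is_pure_nash(*profile):
        print(profile, PAYOFFS[profile])
# Prints both (0, 0) and (1, 1): cooperation is an equilibrium, but so is
# the mutually worse outcome -- the kind of unfavorable equilibrium that
# better incentive design aims to help agents avoid.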

The second axis is how to implement effective coordination, given that cooperation is already incentivized. Even when everyone agrees to cooperate, it is difficult to establish and perpetuate the common conventions, language, and division of labor necessary to carry out a cooperative act. For example, we may examine zero-shot coordination, in which AI agents must coordinate with novel partners at test time. This setting is highly relevant to human-AI coordination and provides a stepping stone for the community towards full Cooperative AI.
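
As an illustration of the coordination problem, the following hypothetical sketch shows the cross-play style of evaluation often used for zero-shot coordination (train_agent and play_episode are placeholders of our own, not a real API): agents are trained independently, then scored when paired with partners they never trained with.

import random

def train_agent(seed):
    # Stand-in for independent self-play training. The learned "policy"
    # here is just a seeded, arbitrary choice of convention.
    rng = random.Random(seed)
    return rng.choice(["convention_A", "convention_B"])

def play_episode(policy_i, policy_j):
    # Stand-in for one cooperative episode: reward 1 if the two policies
    # happen to share a convention, 0 otherwise.
    return 1.0 if policy_i == policy_j else 0.0

agents = [train_agent(seed) for seed in range(5)]

# Cross-play matrix: off-diagonal entries pair an agent with a novel partner.
for i, pi in enumerate(agents):
    print(i, [play_episode(pi, pj) for pj in agents])
# Low off-diagonal scores reveal agents that learned mutually incompatible,
# arbitrary conventions -- exactly the failure mode that zero-shot
# coordination methods aim to eliminate.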

Mentorship Program


The Cooperative AI mentorship program will pair a junior researcher who plans to submit a paper to the workshop with a senior researcher whose expertise could benefit the paper. The senior researcher will provide feedback on the research method and the paper itself. The suggested format is for the mentor and mentee to exchange information via email and to meet via conference call to discuss the work.

Sign-up link for mentees
Sign-up link for mentors

We will match mentors and mentees on a rolling basis, so please sign up as soon as possible. Any junior researcher (less than three years into a PhD) is welcome to apply, but if there are not enough mentors we will prioritize matching applicants from underrepresented groups. Once we match a mentor and mentee, we will connect them using the contact details provided in the sign-up form.

Key Dates

September 25: Paper Submission Deadline
October 26: Final Decision
November 24: Camera Ready Deadline
December 1: Workshop Poster Deadline
December 14: Workshop

Papers

Best Papers

Interactive Inverse Reinforcement Learning for Cooperative Games
Thomas Kleine Buening, Anne-Marie George, Christos Dimitrakakis
Learning to solve complex tasks by growing knowledge culturally across generations
Michael Henry Tessler, Jason Madeano, Pedro Tsividis, Noah Goodman, Joshua B. Tenenbaum
On the Approximation of Cooperative Heterogeneous Multi-Agent Reinforcement Learning (MARL) using Mean Field Control (MFC)
Washim Uddin Mondal, Mridul Agarwal, Vaneet Aggarwal, Satish Ukkusuri
Public Information Representation for Adversarial Team Games
Luca Carminati, Federico Cacciamani, Marco Ciccone, Nicola Gatti

Accepted Papers


A Fine-Tuning Approach to Belief State Modeling

Samuel Sokota, Hengyuan Hu, David J Wu, Jakob Nicolaus Foerster, Noam Brown

A taxonomy of strategic human interactions in traffic conflicts

Atrisha Sarkar, Kate Larson, Krzysztof Czarnecki

Ambiguity Can Compensate for Semantic Differences in Human-AI Communication

Özgecan Koçak, Sanghyun Park, Phanish Puranam

Automated Configuration and Usage of Strategy Portfolios for Bargaining

Bram M. Renting, Holger Hoos, Catholijn M Jonker

Bayesian Inference for Human-Robot Coordination in Parallel Play

Shray Bansal, Jin Xu, Ayanna Howard, Charles Lee Isbell

Causal Multi-Agent Reinforcement Learning: Review and Open Problems

St John Grimbly, Jonathan Phillip Shock, Arnu Pretorius

Coalitional Bargaining via Reinforcement Learning: An Application to Collaborative Vehicle Routing

Stephen Mak, Liming Xu, Tim Pearce, Michael Ostroumov, Alexandra Brintrup

Coordinated Reinforcement Learning for Optimizing Mobile Networks

Maxime Bouton, Hasan Farooq, Julien Forgeat, Shruti Bothe, Meral Shirazipour, Per Karlsson

Disinformation, Stochastic Harm, and Costly Effort: A Principal-Agent Analysis of Regulating Social Media Platforms

Shehroze Khan, James R. Wright

Fool Me Three Times: Human-Robot Trust Repair & Trustworthiness Over Multiple Violations and Repairs

Connor Esterwood, Lionel Robert

Generalized Belief Learning in Multi-Agent Settings

Darius Muglich, Luisa M Zintgraf, Christian Schroeder de Witt, Shimon Whiteson, Jakob Nicolaus Foerster

Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria

Kavya Kopparapu, Edgar A. Duéñez-Guzmán, Jayd Matyas, Alexander Sasha Vezhnevets, John P Agapiou, Kevin R. McKee, Richard Everett, Janusz Marecki, Joel Z Leibo, Thore Graepel

I Will Have Order! Optimizing Orders for Fair Reviewer Assignment

Justin Payan, Yair Zick

Learning Collective Action under Risk Diversity

Ramona Merhej, Fernando P. Santos, Francisco S. Melo, Mohamed Chetouani, Francisco C. Santos

Locality Matters: A Scalable Value Decomposition Approach for Cooperative Multi-Agent Reinforcement Learning

Roy Zohar, Shie Mannor, Guy Tennenholtz

Maximum Entropy Population Based Training for Zero-Shot Human-AI Coordination

Rui Zhao, Jinming Song, Hu Haifeng, Yang Gao, Yi Wu, Zhongqian Sun, Yang Wei

Modular Design Patterns for Hybrid Actors

André Meyer-Vitali, Wico Mulder, Maaike de Boer

Multi-lingual agents through multi-headed neural networks

Jonathan David Thomas, Raul Santos-Rodriguez, Robert Piechocki, Mihai Anca

Normative disagreement as a challenge for Cooperative AI

Julian Stastny, Maxime Nicolas Riché, Alexander Lyzhov, Johannes Treutlein, Allan Dafoe, Jesse Clifton

On Agent Incentives to Manipulate Human Feedback in Multi-Agent Reward Learning Scenarios

Francis Rhys Ward

On the Importance of Environments in Human-Robot Coordination

Matthew Christopher Fontaine, Ya-Chuan Hsu, Yulun Zhang, Bryon Tjanaka, Stefanos Nikolaidis

On-the-fly Strategy Adaptation for ad-hoc Agent Coordination

Jaleh Zand, Jack Parker-Holder, Stephen J. Roberts

PMIC: Improving Multi-Agent Reinforcement Learning with Progressive Mutual Information Collaboration

Pengyi Li, Hongyao Tang, Tianpei Yang, Xiaotian Hao, Sang Tong, Yan Zheng, Jianye Hao, Matthew E. Taylor, Jinyi Liu

Preprocessing Reward Functions for Interpretability

Erik Jenner, Adam Gleave

Promoting Resilience in Multi-Agent Reinforcement Learning via Confusion-Based Communication

Ofir Abu, Matthias Gerstgrasser, Jeffrey Rosenschein, Sarah Keren

Reinforcement Learning Under Algorithmic Triage

Eleni Straitouri, Adish Singla, Vahid Balazadeh Meresht, Manuel Gomez Rodriguez

The challenge of redundancy on multi-agent value factorisation

Siddarth Singh, Benjamin Rosman

The Evolutionary Dynamics of Soft-Max Policy Gradient in Multi-Agent Settings

Martino Bernasconi de Luca, Federico Cacciamani, Simone Fioravanti, Nicola Gatti, Francesco Trovò

The Power of Communication in a Distributed Multi-Agent System

Philipp Dominic Siedler

Towards Incorporating Rich Social Interactions Into MDPs

Ravi Tejwani, Yen-Ling Kuo, Tianmin Shu, Bennett Stankovits, Dan Gutfreund, Joshua B. Tenenbaum, Boris Katz, Andrei Barbu

When Humans Aren’t Optimal: Robots that Collaborate with Risk-Aware Humans

Minae Kwon, Erdem Biyik, Aditi Talati, Karan Bhasin, Dylan Losey, Dorsa Sadigh

Call for Papers

We invite high-quality paper submissions on the following topics, broadly construed; this is not an exhaustive list.

Agent cooperation / Multi-agent communication / Team formation, trust, and reputation / Negotiation and bargaining / Resolving commitment problems / Agent societies, organizations, and institutions / Equilibrium computation / Markets, mechanism design, and economic cooperation / Multi-agent learning / Multi-agent and Human-AI coordination (including zero-shot) / Human cooperation, theory of mind, peer modeling, and social cognition

Accepted papers will be presented during joint virtual poster sessions and made publicly available as non-archival reports, allowing future submission to archival conferences or journals.

Submissions should be up to eight pages, excluding references, acknowledgements, and supplementary material, and should follow the NeurIPS format. The review process will be double-blind to avoid potential conflicts of interest.

Keynote Speakers

Ariel Procaccia
Professor, Harvard University

Ariel Procaccia is Gordon McKay Professor of Computer Science at Harvard University. He works on a broad and dynamic set of problems related to AI, algorithms, economics, and society. His distinctions include the Social Choice and Welfare Prize (2020), a Guggenheim Fellowship (2018), the IJCAI Computers and Thought Award (2015), and a Sloan Research Fellowship (2015). To make his research accessible to the public, he founded the not-for-profit website Spliddit.org and regularly contributes opinion pieces.

Bo An
Associate Professor, Nanyang Technological University

Bo An is a President’s Council Chair Associate Professor at Nanyang Technological University, Singapore. His current research interests include artificial intelligence, multiagent systems, computational game theory, reinforcement learning, and optimization. Dr. An has received many awards, including the INFORMS Daniel H. Wagner Prize for Excellence in Operations Research Practice. He led the team HogRider, which won the 2017 Microsoft Collaborative AI Challenge, and was named to IEEE Intelligent Systems' "AI's 10 to Watch" list for 2018.

Dorsa Sadigh
Assistant Professor, Stanford University

Dorsa Sadigh is an assistant professor in Computer Science and Electrical Engineering at Stanford University. Her research interests lie at the intersection of robotics, learning, and control theory. Specifically, she is interested in developing algorithms for safe and adaptive human-robot and multi-agent interaction. Dorsa received her doctoral degree in Electrical Engineering and Computer Sciences (EECS) from UC Berkeley in 2017, and her bachelor’s degree in EECS from UC Berkeley in 2012. She has been recognized with awards such as the NSF CAREER Award, the AFOSR Young Investigator Award, the IEEE TCCPS Early Career Award, and MIT TR35, as well as industry awards including the JP Morgan, Google, and Amazon faculty research awards.

Michael Muthukrishna
Associate Professor, London School of Economics

Michael Muthukrishna is Associate Professor of Economic Psychology and STICERD Developmental Economics Group Affiliate at the London School of Economics, CIFAR Azrieli Global Scholar at the Canadian Institute for Advanced Research, and Technical Director of The Database of Religious History (religiondatabase.org). His research focuses on human biological and cultural evolution, and on how this understanding of human behavior and social change can improve innovation, reduce corruption, and increase cross-cultural cooperation. His work has been featured in international and national news outlets including CNN, BBC, the Wall Street Journal, The Economist, Scientific American, Nature News, and Science News, and in the UK in The Times, Telegraph, Mirror, Sun, and Guardian. Michael's research is informed by his educational background in engineering and psychology, his graduate training in evolutionary biology, economics, and statistics, and his personal background of living in Sri Lanka, Botswana, Papua New Guinea, Australia, Canada, the United States, and the United Kingdom. He is currently working on a book to be published with MIT Press.

Nika Haghtalab
Assistant Professor, UC Berkeley

Nika Haghtalab is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. She works broadly on the theoretical aspects of machine learning and algorithmic economics. Prof. Haghtalab's work builds theoretical foundations for ensuring both the performance of learning algorithms in the presence of everyday economic forces and the integrity of the social and economic forces that arise from the use of machine learning systems.

Previously, Prof. Haghtalab was an Assistant Professor in the CS department of Cornell University in 2019-2020. She received her Ph.D. from the Computer Science Department of Carnegie Mellon University, and is a co-founder of the Learning Theory Alliance (LeT-All). Among her honors are the CMU School of Computer Science Dissertation Award, the SIGecom Dissertation Honorable Mention, and several industry research awards.

Pablo Samuel Castro
Staff Research Software Developer, Google Brain

Pablo Samuel Castro was born and raised in Quito, Ecuador, and moved to Montreal after high school to study at McGill, where he eventually obtained his master's and PhD with a focus on reinforcement learning. He is currently a Staff Research Software Developer at Google Research (Brain team) in Montreal, focusing on fundamental reinforcement learning research as well as machine learning and creativity, and he is a regular advocate for increasing LatinX representation in the research community. He is also an active musician.

Cooperative AI Panel

Allan Dafoe
Senior Staff Research Scientist, DeepMind
President, Centre for the Governance of AI

Allan Dafoe is a Senior Staff Research Scientist and lead of the Long-term Strategy and Governance team at DeepMind; Allan is also President of the Centre for the Governance of AI and Trustee of the Cooperative AI Foundation. He was previously on faculty at the University of Oxford and Yale University, with a background in political science. Allan’s work aims to map and prepare for the potential opportunities and risks from advanced AI, so as to help steer the development of AI for the benefit of all humanity.

Christopher Amato
Assistant Professor, Northeastern University

Chris Amato is an assistant professor in the Khoury College of Computer Sciences at Northeastern University. His research is at the intersection of artificial intelligence, machine learning and robotics. Amato currently heads the Lab for Learning and Planning in Robotics, where his team works on planning and reinforcement learning in partially observable and multi-agent/multi-robot systems.

Before joining Northeastern, he worked as a research scientist at Aptima Inc., a research scientist and postdoctoral fellow at MIT, and an assistant professor at the University of New Hampshire. Amato received his bachelor’s from Tufts University and his master’s and doctorate from the University of Massachusetts, Amherst.

Amato is widely published in leading artificial intelligence, machine learning, and robotics conferences. He is the recipient of a best paper prize at AAMAS-14 and was nominated for the best paper at RSS-15, AAAI-19, and AAMAS-21. Amato has also successfully co-organized several tutorials on multi-agent planning and learning and has co-authored a book on the subject.

Elizabeth M Adams
Chief AI Ethics Advisor, Stanford Institute for Human-Centered Artificial Intelligence

Named one of Forbes' "15 AI Ethics Leaders Showing The World The Way Of The Future", Elizabeth M. Adams is a highly sought-after resource for executives, small-business owners, non-profits, institutions of higher learning, and community leaders from all sectors of society looking to expand their knowledge of AI ethics and Leadership of Responsible AI™. In December 2019, Elizabeth was awarded the inaugural 2020 Race & Technology Practitioner Fellowship by Stanford University's Center for Comparative Studies in Race & Ethnicity. In August 2021, she was awarded Affiliate Fellow status with Stanford's Institute for Human-Centered AI, a two-year appointment. Elizabeth is pursuing a doctoral degree at Pepperdine University with a research focus on Leadership of Responsible AI™. She also serves as the Global Chief AI Culture & Ethics Officer for Women in AI, where she volunteers her time building a world-class team and program to support the needs of 8,000 women around the world.

Fei Fang
Assistant Professor, Carnegie Mellon University

Fei Fang is the Leonardo Assistant Professor at the Institute for Software Research in the School of Computer Science at Carnegie Mellon University. Before joining CMU, she was a Postdoctoral Fellow at the Center for Research on Computation and Society (CRCS) at Harvard University, hosted by David Parkes and Barbara Grosz. She received her Ph.D. from the Department of Computer Science at the University of Southern California, advised by Milind Tambe (now at Harvard). Her research lies in the field of artificial intelligence and multi-agent systems, focusing on integrating machine learning with game theory. Her work has been motivated by and applied to security, sustainability, and mobility domains, contributing to the theme of AI for Social Good. She is the recipient of the IJCAI-21 Computers and Thought Award, was named to IEEE Intelligent Systems’ “AI’s 10 to Watch” list for 2020, and received an NSF CAREER Award in 2021.

Schedule

Programme Details

The workshop will feature invited talks by researchers from diverse disciplines and backgrounds, ranging from AI and machine learning to political science, economics, and law. We will hold virtual poster sessions for work submitted to the workshop, hosted in GatherTown so that participants can join the conversation and chat with the authors. We have allotted a slot for spotlight talks, mostly dedicated to junior researchers, and we will hold a panel discussion on the main open questions in Cooperative AI to stimulate future research in this space. We hope that bringing together speakers from diverse fields and with diverse views will result in useful discussions and interactions, leading to novel ideas.

All sessions will be pre-recorded and available to view in advance, except for the Q&A, Poster Sessions, and Closing Remarks, which will take place live.

For the live Q&A sessions, participants should submit questions via Sli.do; links are available in the schedule below and on the NeurIPS workshop page.

All times are in Eastern Standard Time (EST).

8:20am | Welcome and Opening Remarks | Edward Hughes (DeepMind), Natasha Jaques (Google Brain, UC Berkeley)

8:30am | Invited Talk | Bo An (Nanyang Technological University): Learning to Coordinate in Complex Environments

9:00am | Invited Talk | Michael Muthukrishna (London School of Economics): Cultural Evolution and Human Cooperation

9:30am | Invited Talk | Pablo Castro (Google Brain): Estimating Policy Functions in Payment Systems using Reinforcement Learning

10:00am | (Live) Q&A with Invited Speakers | Moderated by Edward Hughes (DeepMind). 10:00am: Bo An (Nanyang Technological University); 10:15am: Michael Muthukrishna (London School of Economics); 10:30am: Pablo Castro (Google Brain)

10:45am | Invited Talk | Ariel Procaccia (Harvard University): Democracy and the Pursuit of Randomness

11:15am | Invited Talk | Dorsa Sadigh (Stanford University): The Role of Conventions in Adaptive Human-AI Interaction

11:45am | (Live) Invited Talk | Nika Haghtalab (UC Berkeley): Collaborative Machine Learning: Training and Incentives

12:15pm | (Live) Q&A with Invited Speakers | Moderated by Noam Brown (Facebook AI Research). 12:15pm: Ariel Procaccia (Harvard University); 12:30pm: Dorsa Sadigh (Stanford University); 12:45pm: Nika Haghtalab (UC Berkeley)

1:00pm | Poster Session 1 | Hosted in GatherTown (NeurIPS registration required)

2:00pm | Poster Session 2 | Hosted in GatherTown (NeurIPS registration required)

3:00pm | (Live) Panel Discussion: Cooperative AI | Kalesha Bullard (moderator); Allan Dafoe (DeepMind, Centre for the Governance of AI); Fei Fang (Carnegie Mellon University); Chris Amato (Northeastern University); Elizabeth Adams (Stanford Center on Philanthropy and Civil Society)

4:00pm | Spotlight Talk: Interactive Inverse Reinforcement Learning for Cooperative Games | Thomas Kleine Buening, Anne-Marie George, Christos Dimitrakakis

4:15pm | Spotlight Talk: Learning to solve complex tasks by growing knowledge culturally across generations | Michael Henry Tessler, Jason Madeano, Pedro Tsividis, Noah Goodman, Joshua B. Tenenbaum

4:30pm | Spotlight Talk: On the Approximation of Cooperative Heterogeneous Multi-Agent Reinforcement Learning (MARL) using Mean Field Control (MFC) | Washim Uddin Mondal, Mridul Agarwal, Vaneet Aggarwal, Satish Ukkusuri

4:45pm | Spotlight Talk: Public Information Representation for Adversarial Team Games | Luca Carminati, Federico Cacciamani, Marco Ciccone, Nicola Gatti

5:00pm | (Live) Closing Remarks | Gillian Hadfield (Schwartz Reisman Institute for Technology and Society, University of Toronto)

Registration

Application for Registration Fee Assistance

Sponsorship funds are available to assist participants from underrepresented groups with registration fees.

Please complete this form if you wish to apply.

Organizers

Edward Hughes
Staff Research Engineer, Google DeepMind
Natasha Jaques
Assistant Professor, University of Washington
Senior Research Scientist, Google DeepMind
Jakob Foerster
Associate Professor, University of Oxford
Kalesha Bullard
Research Scientist, DeepMind
Noam Brown
Researcher, OpenAI

Reviewers

The workshop organizers wish to thank the collaborators who assisted with reviewing submitted papers.


Sponsors

Cooperative AI Foundation

The mission of the Cooperative AI Foundation is to support research that will improve the cooperative intelligence of advanced AI for the benefit of all.

Visit sponsor website

Google DeepMind

Artificial intelligence could be one of humanity’s most useful inventions. We research and build safe AI systems that learn how to solve problems and advance scientific discovery for all. We’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

We’ve always been fascinated by intelligence. It’s what gives us the ability to solve problems, find creative ideas, and make the scientific breakthroughs that built the world we live in today. So many discoveries await us, and so many challenges to our wellbeing and environment are yet to be solved. Like the Hubble telescope that helps us see deeper into space, we aim to build advanced AI, sometimes known as Artificial General Intelligence (AGI), to expand our knowledge and find new answers. By solving this, we believe we could help people solve thousands of problems.

Visit sponsor website

Schwartz Reisman Institute for Technology and Society

The Schwartz Reisman Institute for Technology and Society (SRI) was established through a generous gift from Canadian entrepreneurs Gerald Schwartz and Heather Reisman in 2019. SRI is a research and solutions hub within the University of Toronto dedicated to ensuring that powerful technologies like artificial intelligence are safe, fair, ethical, and make the world better—for everyone. SRI develops new modes of thinking in order to better understand the social implications of technologies in the present age, and works to reinvent laws, institutions, and social values to ensure that technology is designed, governed, and deployed to deliver a more just and inclusive world. SRI researchers range in fields from law to computer science, engineering, philosophy, political science, and beyond. SRI draws on world-class expertise across universities, government, industry, and community organizations to unite fundamental research on emerging technologies with actionable solutions for public policy, law, the private sector, and citizens alike.

Visit sponsor website

Resources

Video Playlist