In this area, we invite proposals that address how cooperation can be incentivized among self-interested AI agents in mixed-motive settings. We expect such work to be important for finding approaches that lead to societally beneficial outcomes when advanced AI agents with conflicting goals are deployed in the real world. Topics of interest include:
- Opponent shaping
- Peer incentivization
- Contracts and commitments
- Scalable mechanism design
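To give a flavor of these topics, the sketch below is a toy example (not from this call; the payoff values and gift size are illustrative assumptions) of peer incentivization combined with a binding commitment: in a one-shot prisoner's dilemma, a contract under which each agent gifts part of its reward to the other agent iff that agent cooperated can make cooperation the dominant action.

```python
# Toy illustration (hypothetical payoffs): a reward-gifting contract
# turns defection-dominant play into cooperation-dominant play.

# Row player's base payoffs in a standard prisoner's dilemma,
# indexed by (my_action, their_action).
T, R, P, S = 5, 3, 1, 0
BASE = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def payoff(me, them, gift=0.0):
    """My payoff when both players are bound by a contract to gift
    `gift` units of reward to the other player iff that player cooperated."""
    u = BASE[(me, them)]
    if me == "C":    # the other party owes me a gift for cooperating
        u += gift
    if them == "C":  # I owe the other party a gift for cooperating
        u -= gift
    return u

def dominant_action(gift):
    """Return the strictly dominant action under the contract, if any."""
    for a in ("C", "D"):
        other = "D" if a == "C" else "C"
        if all(payoff(a, t, gift) > payoff(other, t, gift) for t in ("C", "D")):
            return a
    return None

print(dominant_action(gift=0.0))  # no contract: defection is dominant
print(dominant_action(gift=2.5))  # with the contract: cooperation is dominant
```

With these payoffs, a gift larger than T - R = 2 makes cooperating against a cooperator strictly better than defecting, and a gift larger than P - S = 1 does the same against a defector, so any gift above 2 makes cooperation strictly dominant. This is one simple instance of the broader question of designing incentives and commitment devices for mixed-motive interactions.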