In this area, we would like to see work that studies what can go wrong when AI agents cooperate in ways we don’t want or expect. Collusion (undesired cooperation) between agents could, for example, lead them to bypass safeguards or laws, as when pricing agents tacitly coordinate to keep prices high. We believe work in this area will be important for monitoring and governance as the deployment of advanced, interacting AI systems becomes more widespread throughout society.