Cooperative AI: machines must learn to find common ground
Paper by Allan Dafoe et al. in Nature: “Artificial-intelligence assistants and recommendation algorithms interact with billions of people every day, influencing lives in myriad ways, yet they still have little understanding of humans. Self-driving vehicles controlled by artificial intelligence (AI) are gaining mastery of their interactions with the natural world, but they are still novices when it comes to coordinating with other cars and pedestrians or collaborating with their human operators.

The state of AI applications reflects that of the research field. It has long been steeped in a kind of methodological individualism. As is evident from introductory textbooks, the canonical AI problem is that of a solitary machine confronting a non-social environment. Historically, this was a sensible starting point. An AI agent — much like an infant — must first master a basic understanding of its environment and how to interact with it.

Even in work involving multiple AI agents, the field has not yet tackled the hard problems of cooperation. Most headline results have come from two-player zero-sum games, such as backgammon, chess, Go and poker. Gains in these competitive examples can be made only at the expense of others. Although such settings of pure conflict are vanishingly rare in the real world, they make appealing research projects. They are culturally cherished, relatively easy to benchmark (by asking whether the AI can beat the opponent), have natural curricula (because students train against peers of their own skill level) and have simpler solutions than semi-cooperative games do.

AI needs social understanding and cooperative intelligence to integrate well into society. The coming years might give rise to diverse ecologies of AI systems that interact in rapid and complex ways with each other and with humans: on pavements and roads, in consumer and financial markets, in e-mail communication and social media, in cybersecurity and physical security. Autonomous vehicles or smart cities that do not engage well with humans will fail to deliver their benefits, and might even disrupt stable human relationships…(More)”