Article by Stefaan G. Verhulst: “With the UK Summit in full swing, 2023 will likely be seen as a pivotal year for AI governance, with governments promoting a global governance model: AI Globalism. For it to be relevant, flexible, and effective, any global approach will need to be informed by and complemented with local experimentation and leadership, ensuring local responsiveness: AI Localism.
Even as consumers and businesses extend their use of AI (generative AI in particular), governments are also taking notice. Determined not to be caught on the back foot, as they were with social media, regulators and policymakers around the world are exploring frameworks and institutional structures that could help maximize the benefits while minimizing the potential harms of AI. This week, the UK is hosting a high-profile AI Safety Summit, attended by political and business leaders from around the world, including Kamala Harris and Elon Musk. Similarly, US President Biden recently signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which he hailed as a “landmark executive order” to ensure “safety, security, trust, openness, and American leadership.”
Amid the various policy and regulatory proposals swirling around, there has been a notable emphasis on what we might call AI globalism. The UK summit has explicitly endorsed a global approach to AI safety, with coordination between the US, EU, and China core to its vision of more responsible and safe AI. This global perspective follows similar recent calls for “an AI equivalent of the IPCC” or the International Atomic Energy Agency (IAEA). Notably, such calls are emerging both from the private sector and from civil society leaders.
In many ways, a global approach makes sense. Like most technology, AI is transnational in scope, and its governance will require cross-jurisdictional coordination and harmonization. At the same time, we believe that AI globalism should be accompanied by a recognition that some of the most innovative AI initiatives are taking place in cities and municipalities, and that they are being regulated at those levels as well.
We call it AI localism. In what follows, we outline a vision of a more decentralized approach to AI governance, one that would allow cities and local jurisdictions — including states — to develop and iterate governance frameworks tailored to their specific needs and challenges. This decentralized, local approach would need to take place alongside global efforts. The two are not mutually exclusive but rather necessarily complementary…(More)”.