There’s a model for governing AI. Here it is.

Article by Jacinda Ardern: “…On March 15, 2019, a terrorist took the lives of 51 members of New Zealand’s Muslim community in Christchurch. The attacker livestreamed his actions for 17 minutes, and the images found their way onto social media feeds all around the planet. Facebook alone blocked or removed 1.5 million copies of the video in the first 24 hours; in that timeframe, YouTube measured one upload per second.

Afterward, New Zealand was faced with a choice: accept that such exploitation of technology was inevitable or resolve to stop it. We chose to take a stand.

We had to move quickly. The world was watching our response and that of social media platforms. Would we regulate in haste? Would the platforms recognize their responsibility to prevent this from happening again?

New Zealand wasn’t the only nation grappling with the connection between violent extremism and technology. We wanted to create a coalition and knew that France had started to work in this space — so I reached out, leader to leader. In my first conversation with President Emmanuel Macron, he agreed there was work to do and said he was keen to join us in crafting a call to action.

We asked industry, civil society and other governments to join us at the table to agree on a set of actions we could all commit to. We could not use existing structures and bureaucracies because they weren’t equipped to deal with this problem.

Within two months of the attack, we launched the Christchurch Call to Action, and today it has more than 120 members, including governments, online service providers and civil society organizations — united by our shared objective to eliminate terrorist and other violent extremist content online and uphold the principle of a free, open and secure internet.

The Christchurch Call is a large-scale collaboration, vastly different from most top-down approaches. Leaders meet annually to confirm priorities and identify areas of focus, allowing the project to act dynamically. And the Call Secretariat — made up of officials from France and New Zealand — convenes working groups and undertakes diplomatic efforts throughout the year. All members are invited to bring their expertise to solve urgent online problems.

While this multi-stakeholder approach isn’t always easy, it has created change. We have bolstered the power of governments and communities to respond to attacks like the one New Zealand experienced. We have created new crisis-response protocols — which enabled companies to stop the 2022 Buffalo attack livestream within two minutes and quickly remove footage from many platforms. Companies and countries have enacted new trust and safety measures to prevent livestreaming of terrorist and other violent extremist content. And we have strengthened the industry-founded Global Internet Forum to Counter Terrorism with dedicated funding, staff and a multi-stakeholder mission.

We’re also taking on some of the more intransigent problems. The Christchurch Call Initiative on Algorithmic Outcomes, a partnership with companies and researchers, was intended to provide better access to the kind of data needed to design online safety measures to prevent radicalization to violence. In practice, it has much wider ramifications, enabling us to reveal more about the ways in which AI and humans interact.

From its start, the Christchurch Call anticipated the challenges of AI and carved out space to address emerging technologies that threaten to foment violent extremism online. It is actively tackling these AI issues now.

Perhaps the most useful thing the Christchurch Call can add to the AI governance debate is the model itself. It is possible to bring companies, government officials, academics and civil society together not only to build consensus but also to make progress. It’s possible to create tools that address the here and now and also position ourselves to face an unknown future. We need this to deal with AI…(More)”.