Why picking citizens at random could be the best way to govern the A.I. revolution


Article by Hélène Landemore, Andrew Sorota, and Audrey Tang: “Testifying before Congress last month about the risks of artificial intelligence, Sam Altman, the OpenAI CEO behind the massively popular large language model (LLM) ChatGPT, and Gary Marcus, a psychology professor at NYU famous for his positions against A.I. utopianism, both agreed on one point: They called for the creation of a government agency comparable to the FDA to regulate A.I. Marcus also suggested scientific experts should be given early access to new A.I. prototypes to be able to test them before they are released to the public.

Strikingly, however, neither of them mentioned the public, namely the billions of ordinary citizens around the world that the A.I. revolution, in all its uncertainty, is sure to affect. Don’t they also deserve to be included in decisions about the future of this technology?

We believe a global, democratic approach–not an exclusively technocratic one–is the only adequate answer to what is a global political and ethical challenge. Sam Altman himself stated in an earlier interview that in his “dream scenario,” a global deliberation involving all humans would be used to figure out how to govern A.I.

There are already proofs of concept for the various elements that a global, large-scale deliberative process would require in practice. By drawing on these diverse and complementary examples, we can turn this dream into a reality.

Deliberations based on random selection have grown in popularity on the local and national levels, with close to 600 cases documented by the OECD in the last 20 years. Their appeal lies in capturing a unique array of voices and lived experiences, thereby generating policy recommendations that better track the preferences of the larger population and are more likely to be accepted. Famous examples include the 2012 and 2016 Irish citizens’ assemblies on marriage equality and abortion, which led to successful referendums and constitutional change, as well as the 2019 and 2022 French citizens’ conventions on climate justice and end-of-life issues.

Taiwan has successfully experimented with mass consultations through digital platforms like Pol.is, which employs machine learning to identify consensus among vast numbers of participants. Digitally engaged participation has helped aggregate public opinion on hundreds of polarizing issues in Taiwan–such as regulating Uber–with roughly half of its 23.5 million people taking part. Digital participation can also augment other smaller-scale forms of citizen deliberation, such as those taking place in person or based on random selection…(More)”.
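The core idea behind Pol.is-style consensus finding can be sketched in a few lines. The platform clusters participants into opinion groups based on their agree/disagree votes and then surfaces statements that win support in every group ("group-informed consensus"). The snippet below is a simplified illustration of that idea, not Pol.is's actual implementation; the vote matrix, labels, and scoring are invented for the example.

```python
def group_informed_consensus(votes, labels):
    """Minimum per-group agreement rate for each statement.

    votes[i][j] is participant i's vote on statement j:
    +1 = agree, -1 = disagree, 0 = pass/skip.
    labels[i] is participant i's opinion-group label (in Pol.is these
    groups come from clustering the vote matrix; here they are given).
    A statement scores high only if every group tends to agree with it.
    """
    groups = sorted(set(labels))
    n_statements = len(votes[0])
    scores = []
    for j in range(n_statements):
        rates = []
        for g in groups:
            col = [votes[i][j] for i in range(len(votes)) if labels[i] == g]
            rates.append(sum(1 for v in col if v == 1) / len(col))
        scores.append(min(rates))  # weakest group support bounds the score
    return scores
```

Ranking statements by this score rewards proposals that bridge opinion groups rather than ones that merely energize a majority bloc.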

Privacy-enhancing technologies (PETs)


Report by the Information Commissioner’s Office (UK): “This guidance discusses privacy-enhancing technologies (PETs) in detail. Read it if you have questions not answered in the Guide, or if you need a deeper understanding to help you apply PETs in practice.

The first part of the guidance is aimed at DPOs (data protection officers) and those with specific data protection responsibilities in larger organisations. It focuses on how PETs can help you achieve compliance with data protection law.

The second part is intended for a more technical audience, and for DPOs who want to understand more detail about the types of PETs that are currently available. It gives a brief introduction to eight types of PETs and explains their risks and benefits…(More)”.

Collective Intelligence to Co-Create the Cities of the Future: Proposal of an Evaluation Tool for Citizen Initiatives


Paper by Fanny E. Berigüete, Inma Rodriguez Cantalapiedra, Mariana Palumbo and Torsten Masseck: “Citizen initiatives (CIs), through their activities, have become a mechanism to promote empowerment, social inclusion, change of habits, and the transformation of neighbourhoods, influencing their sustainability, but how can this impact be measured? Currently, there are no tools that directly assess this impact, so our research seeks to describe and evaluate the contributions of CIs in a holistic and comprehensive way, respecting the versatility of their activities. This research proposes an evaluation system of 33 indicators distributed in 3 blocks: social cohesion, urban metabolism, and transformation potential, which can be applied through a questionnaire. The research combined different methods, such as desk study, literature review, and case study analysis. The evaluation of case studies showed that the evaluation system reflects well the individual contribution of CIs to sensitive and important aspects of neighbourhoods, with a lesser or greater impact according to the activities they carry out and the holistic conception they have of sustainability. Further implementation and validation of the system in different contexts is needed, but it is a novel and interesting proposal that will support decision-making for the promotion of one or another type of initiative according to its benefits and the reality and needs of the neighbourhood…(More)”.
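The roll-up the paper describes, questionnaire answers scored per indicator and aggregated into thematic blocks, can be sketched simply. The three block names follow the paper; the indicator names, the scoring scale, and the unweighted averaging are assumptions made purely for illustration.

```python
def block_scores(responses, blocks):
    """Average questionnaire responses per thematic block.

    responses: indicator name -> score (e.g. on a 0-4 scale; hypothetical)
    blocks: block name -> list of indicator names belonging to that block
    Returns one aggregate score per block, so an initiative's profile can
    be compared across social cohesion, urban metabolism, and
    transformation potential.
    """
    return {name: sum(responses[i] for i in indicators) / len(indicators)
            for name, indicators in blocks.items()}
```

In the paper's full system there are 33 indicators across the three blocks; a real implementation would also need the paper's specific scoring rules rather than a plain average.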

An algorithm intended to reduce poverty in Jordan disqualifies people in need


Article by Tate Ryan-Mosley: “An algorithm funded by the World Bank to determine which families should get financial assistance in Jordan likely excludes people who should qualify, according to an investigation published this morning by Human Rights Watch. 

The algorithmic system, called Takaful, ranks families applying for aid from least poor to poorest using a secret calculus that assigns weights to 57 socioeconomic indicators. Applicants say that the calculus is not reflective of reality, however, and oversimplifies people’s economic situation, sometimes inaccurately or unfairly. Takaful has cost over $1 billion, and the World Bank is funding similar projects in eight other countries in the Middle East and Africa. 

Human Rights Watch identified several fundamental problems with the algorithmic system that resulted in bias and inaccuracies. Applicants are asked how much water and electricity they consume, for example, as two of the indicators that feed into the ranking system. The report’s authors conclude that these are not necessarily reliable indicators of poverty. Some families interviewed believed the fact that they owned a car affected their ranking, even if the car was old and necessary for transportation to work. 
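Takaful's ranking is a form of proxy-means testing: a weighted sum over observable indicators stands in for income that cannot be observed directly. The real 57 indicators and their weights are secret, so the sketch below uses invented indicators and weights purely to show the mechanism, and how a single proxy such as car ownership can swing a household's ranking.

```python
def wealth_score(household, weights):
    """Proxy-means test: weighted sum of socioeconomic indicators.

    Households are ranked by this score from least poor (high) to
    poorest (low); aid goes to those below some cutoff.
    NOTE: indicator names and weights here are invented for
    illustration; Takaful's actual indicators and weights are not public.
    """
    return sum(w * household.get(indicator, 0)
               for indicator, w in weights.items())

weights = {"owns_car": 3.0, "electricity_kwh": 0.01, "household_size": -0.5}

# Two households with the same real income: the one keeping an old car
# it needs for commuting scores as "less poor" and may lose assistance.
no_car = {"owns_car": 0, "electricity_kwh": 150, "household_size": 5}
old_car = {"owns_car": 1, "electricity_kwh": 150, "household_size": 5}
```

The sketch makes the report's point concrete: whenever a proxy correlates only loosely with actual need, the weighted sum misranks exactly the households the report describes.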

The report reads, “This veneer of statistical objectivity masks a more complicated reality: the economic pressures that people endure and the ways they struggle to get by are frequently invisible to the algorithm.”…(More)”.

Brazil launches participatory national planning process


Article by Tarson Núñez and Luiza Jardim: “At a time when signs of a crisis in democracy are prevalent around the world, the Brazilian government is seeking to expand and deepen the active participation of citizens in its decisions. The new administration of Luiz Inácio Lula da Silva believes that more democracy is needed to rebuild citizens’ trust in political processes. And it just launched one of its main initiatives, the Participatory Pluriannual Plan (PPA Participativo). The PPA sets Brazil’s goals and objectives for the next four years, and Lula is determined not only to allow but to facilitate public participation in its development. 

On May 11, the federal government held the first state plenary for the Participatory PPA, an assembly open to all citizens, social movements and civil society organizations. Participants at the state plenaries are able to discuss proposals and deliberate on the government’s public policies. Over the next two months, government officials will travel to the capitals of the country’s 26 states as well as the federal district (home to the capital, Brasília) to listen to people present their priorities. If they prefer, people can also submit their suggestions through a digital platform (Decidim, accessible only to people in Brazil) or the Interconselhos Forum, which brings together various councils and civil society groups…(More)”.

Will Democracies Stand Up to Big Brother?


Article by Simon Johnson, Daron Acemoglu and Sylvia Barmack: “Rapid advances in AI and AI-enhanced surveillance tools have created an urgent need for international norms and coordination to set sensible standards. But with oppressive authoritarian regimes unlikely to cooperate, the world’s democracies should start preparing to play economic hardball…Fiction writers have long imagined scenarios in which every human action is monitored by some malign centralized authority. But now, despite their warnings, we find ourselves careening toward a dystopian future worthy of George Orwell’s 1984. The task of assessing how to protect our rights – as consumers, workers, and citizens – has never been more urgent.

One sensible proposal is to limit patents on surveillance technologies to discourage their development and overuse. All else being equal, this could tilt the development of AI-related technologies away from surveillance applications – at least in the United States and other advanced economies, where patent protections matter, and where venture capitalists will be reluctant to back companies lacking strong intellectual-property rights. But even if such sensible measures are adopted, the world will remain divided between countries with effective safeguards on surveillance and those without them. We therefore also need to consider the legitimate basis for trade between these emergent blocs.

AI capabilities have leapt forward over the past 18 months, and the pace of further development is unlikely to slow. The public release of ChatGPT in November 2022 was the generative-AI shot heard round the world. But just as important has been the equally rapid increase in governments’ and corporations’ surveillance capabilities. Since generative AI excels at pattern matching, it has made facial recognition remarkably accurate (though not without some major flaws). And the same general approach can be used to distinguish between “good” and problematic behavior, based simply on how people move or comport themselves.

Such surveillance technically leads to “higher productivity,” in the sense that it augments an authority’s ability to compel people to do what they are supposed to be doing. For a company, this means performing jobs at what management considers to be the highest productivity level. For a government, it means enforcing the law or otherwise ensuring compliance with those in power.

Unfortunately, a millennium of experience has established that increased productivity does not necessarily lead to improvements in shared prosperity. Today’s AI-powered surveillance allows overbearing managers and authoritarian political leaders to enforce their rules more effectively. But while productivity may increase, most people will not benefit…(More)”

There’s a model for governing AI. Here it is.


Article by Jacinda Ardern: “…On March 15, 2019, a terrorist took the lives of 51 members of New Zealand’s Muslim community in Christchurch. The attacker livestreamed his actions for 17 minutes, and the images found their way onto social media feeds all around the planet. Facebook alone blocked or removed 1.5 million copies of the video in the first 24 hours; in that timeframe, YouTube measured one upload per second.

Afterward, New Zealand was faced with a choice: accept that such exploitation of technology was inevitable or resolve to stop it. We chose to take a stand.

We had to move quickly. The world was watching our response and that of social media platforms. Would we regulate in haste? Would the platforms recognize their responsibility to prevent this from happening again?

New Zealand wasn’t the only nation grappling with the connection between violent extremism and technology. We wanted to create a coalition and knew that France had started to work in this space — so I reached out, leader to leader. In my first conversation with President Emmanuel Macron, he agreed there was work to do and said he was keen to join us in crafting a call to action.

We asked industry, civil society and other governments to join us at the table to agree on a set of actions we could all commit to. We could not use existing structures and bureaucracies because they weren’t equipped to deal with this problem.

Within two months of the attack, we launched the Christchurch Call to Action, and today it has more than 120 members, including governments, online service providers and civil society organizations — united by our shared objective to eliminate terrorist and other violent extremist content online and uphold the principle of a free, open and secure internet.

The Christchurch Call is a large-scale collaboration, vastly different from most top-down approaches. Leaders meet annually to confirm priorities and identify areas of focus, allowing the project to act dynamically. And the Call Secretariat — made up of officials from France and New Zealand — convenes working groups and undertakes diplomatic efforts throughout the year. All members are invited to bring their expertise to solve urgent online problems.

While this multi-stakeholder approach isn’t always easy, it has created change. We have bolstered the power of governments and communities to respond to attacks like the one New Zealand experienced. We have created new crisis-response protocols — which enabled companies to stop the 2022 Buffalo attack livestream within two minutes and quickly remove footage from many platforms. Companies and countries have enacted new trust and safety measures to prevent livestreaming of terrorist and other violent extremist content. And we have strengthened the industry-founded Global Internet Forum to Counter Terrorism with dedicated funding, staff and a multi-stakeholder mission.

We’re also taking on some of the more intransigent problems. The Christchurch Call Initiative on Algorithmic Outcomes, a partnership with companies and researchers, was intended to provide better access to the kind of data needed to design online safety measures to prevent radicalization to violence. In practice, it has much wider ramifications, enabling us to reveal more about the ways in which AI and humans interact.

From its start, the Christchurch Call anticipated the challenges of AI, carving out space to address emerging technologies that threaten to foment violent extremism online, and it is actively tackling these issues today.

Perhaps the most useful thing the Christchurch Call can add to the AI governance debate is the model itself. It is possible to bring companies, government officials, academics and civil society together not only to build consensus but also to make progress. It’s possible to create tools that address the here and now and also position ourselves to face an unknown future. We need this to deal with AI…(More)”.

Making Sense of Citizens’ Input through Artificial Intelligence: A Review of Methods for Computational Text Analysis to Support the Evaluation of Contributions in Public Participation


Paper by Julia Romberg and Tobias Escher: “Public sector institutions that consult citizens to inform decision-making face the challenge of evaluating the contributions made by citizens. This evaluation has important democratic implications but at the same time, consumes substantial human resources. However, until now the use of artificial intelligence such as computer-supported text analysis has remained an under-studied solution to this problem. We identify three generic tasks in the evaluation process that could benefit from natural language processing (NLP). Based on a systematic literature search in two databases on computational linguistics and digital government, we provide a detailed review of existing methods and their performance. While some promising approaches exist, for instance to group data thematically and to detect arguments and opinions, we show that there remain important challenges before these could offer any reliable support in practice. These include the quality of results, the applicability to non-English language corpora and making algorithmic models available to practitioners through software. We discuss a number of avenues that future research should pursue to ultimately deliver solutions for practice. The most promising of these bring in the expertise of human evaluators, for example through active learning approaches or interactive topic modelling…(More)” See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern.
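One of the tasks the review identifies, grouping citizen contributions thematically, can be approximated even with very simple lexical methods. The sketch below greedily clusters contributions by bag-of-words cosine similarity; production systems would use topic models or embeddings as the paper discusses, and the similarity threshold here is an arbitrary illustrative choice.

```python
import math
from collections import Counter


def cosine(a, b):
    """Cosine similarity between two word-count vectors (Counters)."""
    num = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return num / norm if norm else 0.0


def group_by_theme(contributions, threshold=0.3):
    """Greedy single-pass clustering of texts by lexical overlap.

    Each contribution joins the first cluster whose representative it
    resembles (cosine >= threshold), else it starts a new cluster.
    Returns clusters as lists of contribution indices.
    """
    vecs = [Counter(c.lower().split()) for c in contributions]
    clusters = []  # list of (representative index, member indices)
    for i, v in enumerate(vecs):
        for rep, members in clusters:
            if cosine(vecs[rep], v) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((i, [i]))
    return [members for _, members in clusters]
```

Even this crude grouping hints at why the paper stresses human-in-the-loop approaches: the threshold, the representation, and the cluster labels all need an evaluator's judgment before the output can support real consultations.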

How Indigenous Groups Are Leading the Way on Data Privacy


Article by Rina Diane Caballar: “Even as Indigenous communities find increasingly helpful uses for digital technology, many worry that outside interests could take over their data and profit from it, much like colonial powers plundered their physical homelands. But now some Indigenous groups are reclaiming control by developing their own data protection technologies—work that demonstrates how ordinary people have the power to sidestep the tech companies and data brokers who hold and sell the most intimate details of their identities, lives and cultures.

When governments, academic institutions or other external organizations gather information from Indigenous communities, they can withhold access to it or use it for other purposes without the consent of these communities.

“The threats of data colonialism are real,” says Tahu Kukutai, a professor at New Zealand’s University of Waikato and a founding member of Te Mana Raraunga, the Māori Data Sovereignty Network. “They’re a continuation of old processes of extraction and exploitation of our land—the same is being done to our information.”

To shore up their defenses, some Indigenous groups are developing new privacy-first storage systems that give users control and agency over all aspects of this information: what is collected and by whom, where it’s stored, how it’s used and, crucially, who has access to it.

Storing data on a user’s device—rather than in the cloud or in centralized servers controlled by a tech company—is an essential privacy feature of these technologies. Rudo Kemper is founder of Terrastories, a free and open-source app co-created with Indigenous communities to map their land and share stories about it. He recalls a community in Guyana that was emphatic about having an offline, on-premise installation of the Terrastories app. To members of this group, the issue was more than just the lack of Internet access in the remote region where they live. “To them, the idea of data existing in the cloud is almost like the knowledge is leaving the territory because it’s not physically present,” Kemper says.

Likewise, creators of Our Data Indigenous, a digital survey app designed by academic researchers in collaboration with First Nations communities across Canada, chose to store their database in local servers in the country rather than in the cloud. (Canada has strict regulations on disclosing personal information without prior consent.) In order to access this information on the go, the app’s developers also created a portable backpack kit that acts as a local area network without connections to the broader Internet. The kit includes a laptop, battery pack and router, with data stored on the laptop. This allows users to fill out surveys in remote locations and back up the data immediately without relying on cloud storage…(More)”.

The messy politics of local climate assemblies


Paper by Pancho Lewis, Jacob Ainscough, Rachel Coxcoon & Rebecca Willis: “In recent years, many local authorities in the UK have run local climate assemblies (LCAs) such as citizens’ assemblies or juries, with the goal of developing citizen-led solutions to the climate crisis. In this essay, we argue that a ‘convenient fiction’ often underpins the way local authority actors explain the rationale for running LCAs. This convenient fiction runs as follows: LCAs are commissioned as a response to the climate threat, and local decision-makers work through LCA recommendations to implement appropriate policies in their locality. We suggest that this narrative smooths over and presents as linear a process that is in fact messy and political. LCAs emerge as a result of political pressure and bargaining. Once LCAs have run their course, the extent to which their recommendations are implemented is dependent on power dynamics and institutional capacities. We argue that it is important to surface the messiness and political tensions that underpin the origins and aftermath of local climate assemblies. This achieves three things. First, it helps manage expectations about the impact LCAs are likely to have on the policy process. Second, it broadens understandings of how LCAs can contribute to change. Third, it provides a complex model that actors can use to understand how they can help deliver climate action through politics. We conclude that LCAs are important — if as yet unproven — new interventions in local climate politics, when assessed against this more complex picture…(More)”