Brazil launches participatory national planning process


Article by Tarson Núñez and Luiza Jardim: “At a time when signs of a crisis in democracy are prevalent around the world, the Brazilian government is seeking to expand and deepen the active participation of citizens in its decisions. The new administration of Luiz Inácio Lula da Silva believes that more democracy is needed to rebuild citizens’ trust in political processes. And it just launched one of its main initiatives, the Participatory Pluriannual Plan (PPA Participativo). The PPA sets the goals and objectives for Brazil over the following four years, and Lula is determined to not only allow but facilitate public participation in its development. 

On May 11, the federal government held the first state plenary for the Participatory PPA, an assembly open to all citizens, social movements and civil society organizations. Participants at the state plenaries can discuss proposals and deliberate on the government’s public policies. Over the next two months, government officials will travel to the capitals of the country’s 26 states as well as the Federal District, home to the capital, Brasília, to listen to people present their priorities. If they prefer, people can also submit their suggestions through a digital platform (Decidim, accessible only to people in Brazil) or the Interconselhos Forum, which brings together various councils and civil society groups…(More)”.

Will Democracies Stand Up to Big Brother?


Article by Simon Johnson, Daron Acemoglu and Sylvia Barmack: “Rapid advances in AI and AI-enhanced surveillance tools have created an urgent need for international norms and coordination to set sensible standards. But with oppressive authoritarian regimes unlikely to cooperate, the world’s democracies should start preparing to play economic hardball…Fiction writers have long imagined scenarios in which every human action is monitored by some malign centralized authority. But now, despite their warnings, we find ourselves careening toward a dystopian future worthy of George Orwell’s 1984. The task of assessing how to protect our rights – as consumers, workers, and citizens – has never been more urgent.

One sensible proposal is to limit patents on surveillance technologies to discourage their development and overuse. All else being equal, this could tilt the development of AI-related technologies away from surveillance applications – at least in the United States and other advanced economies, where patent protections matter, and where venture capitalists will be reluctant to back companies lacking strong intellectual-property rights. But even if such sensible measures are adopted, the world will remain divided between countries with effective safeguards on surveillance and those without them. We therefore also need to consider the legitimate basis for trade between these emergent blocs.

AI capabilities have leapt forward over the past 18 months, and the pace of further development is unlikely to slow. The public release of ChatGPT in November 2022 was the generative-AI shot heard round the world. But just as important has been the equally rapid increase in governments’ and corporations’ surveillance capabilities. Since generative AI excels at pattern matching, it has made facial recognition remarkably accurate (though not without some major flaws). And the same general approach can be used to distinguish between “good” and problematic behavior, based simply on how people move or comport themselves.

Such surveillance technically leads to “higher productivity,” in the sense that it augments an authority’s ability to compel people to do what they are supposed to be doing. For a company, this means performing jobs at what management considers to be the highest productivity level. For a government, it means enforcing the law or otherwise ensuring compliance with those in power.

Unfortunately, a millennium of experience has established that increased productivity does not necessarily lead to improvements in shared prosperity. Today’s AI-powered surveillance allows overbearing managers and authoritarian political leaders to enforce their rules more effectively. But while productivity may increase, most people will not benefit…(More)”

There’s a model for governing AI. Here it is.


Article by Jacinda Ardern: “…On March 15, 2019, a terrorist took the lives of 51 members of New Zealand’s Muslim community in Christchurch. The attacker livestreamed his actions for 17 minutes, and the images found their way onto social media feeds all around the planet. Facebook alone blocked or removed 1.5 million copies of the video in the first 24 hours; in that timeframe, YouTube measured one upload per second.

Afterward, New Zealand was faced with a choice: accept that such exploitation of technology was inevitable or resolve to stop it. We chose to take a stand.

We had to move quickly. The world was watching our response and that of social media platforms. Would we regulate in haste? Would the platforms recognize their responsibility to prevent this from happening again?

New Zealand wasn’t the only nation grappling with the connection between violent extremism and technology. We wanted to create a coalition and knew that France had started to work in this space — so I reached out, leader to leader. In my first conversation with President Emmanuel Macron, he agreed there was work to do and said he was keen to join us in crafting a call to action.

We asked industry, civil society and other governments to join us at the table to agree on a set of actions we could all commit to. We could not use existing structures and bureaucracies because they weren’t equipped to deal with this problem.

Within two months of the attack, we launched the Christchurch Call to Action, and today it has more than 120 members, including governments, online service providers and civil society organizations — united by our shared objective to eliminate terrorist and other violent extremist content online and uphold the principle of a free, open and secure internet.

The Christchurch Call is a large-scale collaboration, vastly different from most top-down approaches. Leaders meet annually to confirm priorities and identify areas of focus, allowing the project to act dynamically. And the Call Secretariat — made up of officials from France and New Zealand — convenes working groups and undertakes diplomatic efforts throughout the year. All members are invited to bring their expertise to solve urgent online problems.

While this multi-stakeholder approach isn’t always easy, it has created change. We have bolstered the power of governments and communities to respond to attacks like the one New Zealand experienced. We have created new crisis-response protocols — which enabled companies to stop the 2022 Buffalo attack livestream within two minutes and quickly remove footage from many platforms. Companies and countries have enacted new trust and safety measures to prevent livestreaming of terrorist and other violent extremist content. And we have strengthened the industry-founded Global Internet Forum to Counter Terrorism with dedicated funding, staff and a multi-stakeholder mission.

We’re also taking on some of the more intransigent problems. The Christchurch Call Initiative on Algorithmic Outcomes, a partnership with companies and researchers, was intended to provide better access to the kind of data needed to design online safety measures to prevent radicalization to violence. In practice, it has much wider ramifications, enabling us to reveal more about the ways in which AI and humans interact.

From its start, the Christchurch Call anticipated the challenges of AI, carving out space to address emerging technologies that threaten to foment violent extremism online. It is now actively tackling these AI issues.

Perhaps the most useful thing the Christchurch Call can add to the AI governance debate is the model itself. It is possible to bring companies, government officials, academics and civil society together not only to build consensus but also to make progress. It’s possible to create tools that address the here and now and also position ourselves to face an unknown future. We need this to deal with AI…(More)”.

Making Sense of Citizens’ Input through Artificial Intelligence: A Review of Methods for Computational Text Analysis to Support the Evaluation of Contributions in Public Participation


Paper by Julia Romberg and Tobias Escher: “Public sector institutions that consult citizens to inform decision-making face the challenge of evaluating the contributions made by citizens. This evaluation has important democratic implications but, at the same time, consumes substantial human resources. However, until now the use of artificial intelligence such as computer-supported text analysis has remained an under-studied solution to this problem. We identify three generic tasks in the evaluation process that could benefit from natural language processing (NLP). Based on a systematic literature search in two databases on computational linguistics and digital government, we provide a detailed review of existing methods and their performance. While some promising approaches exist, for instance to group data thematically and to detect arguments and opinions, we show that there remain important challenges before these could offer any reliable support in practice. These include the quality of results, the applicability to non-English language corpora and making algorithmic models available to practitioners through software. We discuss a number of avenues that future research should pursue that can ultimately lead to solutions for practice. The most promising of these bring in the expertise of human evaluators, for example through active learning approaches or interactive topic modelling…(More)” See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern.
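One of the tasks the paper reviews, grouping citizens’ contributions thematically, can be illustrated with a deliberately minimal sketch: bag-of-words cosine similarity with a greedy grouping pass. All names, the sample comments and the similarity threshold here are illustrative, not drawn from the paper; the methods it actually surveys rely on topic models or learned text embeddings rather than raw word overlap.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts: lowercased, punctuation stripped."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    return Counter(w for w in words if w)

def cosine(a, b):
    """Cosine similarity between two Counter term vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_contributions(texts, threshold=0.25):
    """Greedy single-pass grouping: assign each text to the first
    group whose seed it resembles, else start a new group."""
    groups = []  # list of (seed_vector, member_texts)
    for text in texts:
        vec = vectorize(text)
        for seed, members in groups:
            if cosine(seed, vec) >= threshold:
                members.append(text)
                break
        else:
            groups.append((vec, [text]))
    return [members for _, members in groups]

comments = [
    "We need more bike lanes on main streets",
    "Please add protected bike lanes downtown",
    "The park needs better lighting at night",
]
groups = group_contributions(comments)
# The two bike-lane comments share enough vocabulary to land in one
# group; the park comment starts its own.
```

Even this toy version makes the paper’s point concrete: purely lexical grouping misses paraphrases (“need” vs. “needs” do not match), which is why the reviewed approaches that bring in human evaluators, for example via active learning or interactive topic modelling, are the more promising path for practice.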

How Indigenous Groups Are Leading the Way on Data Privacy


Article by Rina Diane Caballar: “Even as Indigenous communities find increasingly helpful uses for digital technology, many worry that outside interests could take over their data and profit from it, much like colonial powers plundered their physical homelands. But now some Indigenous groups are reclaiming control by developing their own data protection technologies—work that demonstrates how ordinary people have the power to sidestep the tech companies and data brokers who hold and sell the most intimate details of their identities, lives and cultures.

When governments, academic institutions or other external organizations gather information from Indigenous communities, they can withhold access to it or use it for other purposes without the consent of these communities.

“The threats of data colonialism are real,” says Tahu Kukutai, a professor at New Zealand’s University of Waikato and a founding member of Te Mana Raraunga, the Māori Data Sovereignty Network. “They’re a continuation of old processes of extraction and exploitation of our land—the same is being done to our information.”

To shore up their defenses, some Indigenous groups are developing new privacy-first storage systems that give users control and agency over all aspects of this information: what is collected and by whom, where it’s stored, how it’s used and, crucially, who has access to it.

Storing data on a user’s device—rather than in the cloud or in centralized servers controlled by a tech company—is an essential privacy feature of these technologies. Rudo Kemper is the founder of Terrastories, a free and open-source app co-created with Indigenous communities to map their land and share stories about it. He recalls a community in Guyana that was emphatic about having an offline, on-premise installation of the Terrastories app. To members of this group, the issue was more than just the lack of Internet access in the remote region where they live. “To them, the idea of data existing in the cloud is almost like the knowledge is leaving the territory because it’s not physically present,” Kemper says.

Likewise, creators of Our Data Indigenous, a digital survey app designed by academic researchers in collaboration with First Nations communities across Canada, chose to store their database in local servers in the country rather than in the cloud. (Canada has strict regulations on disclosing personal information without prior consent.) In order to access this information on the go, the app’s developers also created a portable backpack kit that acts as a local area network without connections to the broader Internet. The kit includes a laptop, battery pack and router, with data stored on the laptop. This allows users to fill out surveys in remote locations and back up the data immediately without relying on cloud storage…(More)”.

The messy politics of local climate assemblies


Paper by Pancho Lewis, Jacob Ainscough, Rachel Coxcoon & Rebecca Willis: “In recent years, many local authorities in the UK have run local climate assemblies (LCAs) such as citizens’ assemblies or juries, with the goal of developing citizen-led solutions to the climate crisis. In this essay, we argue that a ‘convenient fiction’ often underpins the way local authority actors explain the rationale for running LCAs. This convenient fiction runs as follows: LCAs are commissioned as a response to the climate threat, and local decision-makers work through LCA recommendations to implement appropriate policies in their locality. We suggest that this narrative smooths over and presents as linear a process that is in fact messy and political. LCAs emerge as a result of political pressure and bargaining. Once LCAs have run their course, the extent to which their recommendations are implemented is dependent on power dynamics and institutional capacities. We argue that it is important to surface the messiness and political tensions that underpin the origins and aftermath of local climate assemblies. This achieves three things. First, it helps manage expectations about the impact LCAs are likely to have on the policy process. Second, it broadens understandings of how LCAs can contribute to change. Third, it provides a complex model that actors can use to understand how they can help deliver climate action through politics. We conclude that LCAs are important — if as yet unproven — new interventions in local climate politics, when assessed against this more complex picture…(More)”

Digital Freedoms in French-Speaking African Countries


Report by AFD: “As digital penetration increases in countries across the African continent, its citizens face growing risks and challenges. Indeed, beyond facilitating access to knowledge (such as the online encyclopedia Wikipedia), to leisure (such as YouTube) and to sociability (such as social networks), digital technology offers an unprecedented space for democratic expression.

However, these online civic spaces are under threat. Several governments have enacted vaguely defined laws that allow for arbitrary arrests.

Several countries have implemented repressive practices restricting freedom of expression and access to information. This is what is known as “digital authoritarianism”, which is on the rise in many countries.

This report takes stock of digital freedoms in 26 French-speaking African countries, and proposes concrete actions to improve citizen participation and democracy…(More)”

Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better


Book by Jennifer Pahlka: “Just when we most need our government to work—to decarbonize our infrastructure and economy, to help the vulnerable through a pandemic, to defend ourselves against global threats—it is faltering. Government at all levels has limped into the digital age, offering online services that can feel even more cumbersome than the paperwork that preceded them and widening the gap between the policy outcomes we intend and what we get.

But it’s not more money or more tech we need. Government is hamstrung by a rigid, industrial-era culture, in which elites dictate policy from on high, disconnected from and too often disdainful of the details of implementation. Lofty goals morph unrecognizably as they cascade through a complex hierarchy. But there is an approach taking hold that keeps pace with today’s world and reclaims government for the people it is supposed to serve. Jennifer Pahlka shows why we must stop trying to move the government we have today onto new technology and instead consider what it would mean to truly recode American government…(More)”.

The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet


Book by Jeff Jarvis: “The age of print is a grand exception in history. For five centuries it fostered what some call print culture – a worldview shaped by the completeness, permanence, and authority of the printed word. As a technology, print at its birth was as disruptive as the digital migration of today. Now, as the internet ushers us past print culture, journalist Jeff Jarvis offers important lessons from the era we leave behind.

To understand our transition out of the Gutenberg Age, Jarvis first examines the transition into it. Tracking Western industrialized print to its origins, he explores its invention, spread, and evolution, as well as the bureaucracy and censorship that followed. He also reveals how print gave rise to the idea of the mass – mass media, mass market, mass culture, mass politics, and so on – that came to dominate the public sphere.

What can we glean from the captivating, profound, and challenging history of our devotion to print? Could it be that we are returning to a time before mass media, to a society built on conversation, and that we are relearning how to hold that conversation with ourselves? Brimming with broader implications for today’s debates over communication, authorship, and ownership, Jarvis’ exploration of print on a grand scale is also a complex, compelling history of technology and power…(More)”

Shallowfakes


Essay by James R. Ostrowski: “…This dystopian fantasy, we are told, is what the average social media feed looks like today: a war zone of high-tech disinformation operations, vying for your attention, your support, your compliance. Journalist Joseph Bernstein, in his 2021 Harper’s piece “Bad News,” attributes this perception of social media to “Big Disinfo” — a cartel of think tanks, academic institutions, and prestige media outlets that spend their days spilling barrels of ink into op-eds about foreign powers’ newest disinformation tactics. The technology’s specific impact is always vague, yet somehow devastating. Democracy is dying, shot in the chest by artificial intelligence.

The problem with Big Disinfo isn’t that disinformation campaigns aren’t happening but that claims of mind-warping, AI-enabled propaganda go largely unscrutinized and often amount to mere speculation. There is little systematic public information about the scale at which foreign governments use deepfakes, bot armies, or generative text in influence ops. What little we know is gleaned through irregular investigations or leaked documents. In lieu of data, Big Disinfo squints into the fog, crying “Bigfoot!” at every oak tree.

Any machine learning researcher will admit that there is a critical disconnect between what’s possible in the lab and what’s happening in the field. Take deepfakes. When the technology was first developed, public discourse was saturated with proclamations that it would slacken society’s grip on reality. A 2019 New York Times op-ed, indicative of the general sentiment of this time, was titled “Deepfakes Are Coming. We Can No Longer Believe What We See.” That same week, Politico sounded the alarm in its article “‘Nightmarish’: Lawmakers brace for swarm of 2020 deepfakes.” A Forbes article asked us to imagine a deepfake video of President Trump announcing a nuclear weapons launch against North Korea. These stories, like others in the genre, gloss over questions of practicality…(More)”.