Does information about citizen participation initiatives increase political trust?


Paper by Martin Ardanaz, Susana Otálvaro-Ramírez, and Carlos Scartascini: “Participatory programs can reduce the informational and power asymmetries that engender mistrust. These programs, however, cannot include every citizen. Hence, it is important to evaluate if providing information about those programs could affect trust among those who do not participate. We assess the effect of an informational campaign about these programs in the context of a survey experiment conducted in the city of Buenos Aires, Argentina. Results show that providing detailed information about citizen involvement and outputs of a participatory budget initiative marginally shapes voters’ assessments of government performance and political trust. In particular, it increases voters’ perceptions about the benevolence and honesty of the government. Effects are larger for individuals with ex ante more negative views about the local government’s quality, and they differ according to the respondents’ interpersonal trust and their beliefs about the ability of their communities to solve the type of collective-action problems that the program seeks to address. This article complements the literature that has examined the effects of participatory interventions on trust, and the literature that evaluates the role of information. The results in the article suggest that participatory budget programs could directly affect budget allocations and trust for those who participate, and that well-disseminated programs could also affect trust in the broader population. Because mistrustful individuals tend to shy away from demanding from the government the public goods that increase overall welfare, well-disseminated participatory budget programs could affect budget allocations both directly and through their effect on trust…(More)”.

Revolutionizing Governance: AI-Driven Citizen Engagement


Article by Komal Goyal: “Government-citizen engagement has come a long way over the past decade, with governments increasingly adopting AI-powered analytics, automated processes and chatbots to engage with citizens and gain insights into their concerns. A 2023 Stanford University report found that the federal government spent $3.3 billion on AI in fiscal year 2022, highlighting the remarkable upswing in AI adoption across various government sectors.

As the demands of a digitally empowered and information-savvy society constantly evolve, it is becoming imperative for government agencies to revolutionize how they interact with their constituents. I’ll discuss how AI can help achieve this and pave the way for a more responsive, inclusive and effective form of governance…(More)”.

Future-Proofing Transparency: Re-Thinking Public Record Governance For the Age of Big Data


Paper by Beatriz Botero Arcila: “Public records, public deeds, and even open data portals often include personal information that can now be easily accessed online. Yet, for all the recent attention given to informational privacy and data protection, scant literature exists on the governance of personal information that is available in public documents. This Article examines the critical issue of balancing privacy and transparency within public record governance in the age of Big Data.

With Big Data and powerful machine learning algorithms, personal information in public records can easily be used to infer sensitive data about people or aggregated to create a comprehensive personal profile of almost anyone. This information is public and open, however, for many good reasons: ensuring political accountability, facilitating democratic participation, enabling economic transactions, and combating illegal activities such as money laundering and terrorism financing. Can the interest in record publicity coexist with the growing ease of de-anonymizing and revealing sensitive information about individuals?

This Article addresses this question from a comparative perspective, focusing on US and EU access to information law. The Article shows that, in the past and notwithstanding the presumptively public nature of records, privacy was protected in practice because most people would not trouble themselves to go to public offices to review them, and it was practically impossible to aggregate them to draw extensive profiles of people. Drawing from this insight and contemporary debates on data governance, this Article challenges the binary classification of data as either published or not and proposes a risk-based framework that re-inserts that natural friction into public record governance by leveraging techno-legal methods in how information is published and accessed…(More)”.

Why China Can’t Export Its Model of Surveillance


Article by Minxin Pei: “It’s Not the Tech That Empowers Big Brother in Beijing—It’s the Informants…Over the past two decades, Chinese leaders have built a high-tech surveillance system of seemingly extraordinary sophistication. Facial recognition software, Internet monitoring, and ubiquitous video cameras give the impression that the ruling Chinese Communist Party (CCP) has finally accomplished the dictator’s dream of building a surveillance state like the one imagined in George Orwell’s 1984…

A high-tech surveillance network now blankets the entire country, and the potency of this system was on full display in November 2022, when nationwide protests against China’s COVID lockdown shocked the party. Although the protesters were careful to conceal their faces with masks and hats, the police used mobile-phone location data to track them down. Mass arrests followed.

Beijing’s surveillance state is not only a technological feat. It also relies on a highly labor-intensive organization. Over the past eight decades, the CCP has constructed a vast network of millions of informers and spies whose often unpaid work has been critical to the regime’s survival. It is these men and women, more than cameras or artificial intelligence, who have allowed Beijing to suppress dissent. Without a network of this size, the system could not function. This means that, despite the party’s best efforts, the Chinese security apparatus is impossible to export…(More)”.

Governable Spaces: Democratic Design for Online Life


Book by Nathan Schneider: “When was the last time you participated in an election for a Facebook group or sat on a jury for a dispute in a subreddit? Platforms nudge users to tolerate nearly all-powerful admins, moderators, and “benevolent dictators for life.” In Governable Spaces, Nathan Schneider argues that the internet has been plagued by a phenomenon he calls “implicit feudalism”: a bias, both cultural and technical, for building communities as fiefdoms. The consequences of this arrangement matter far beyond online spaces themselves, as feudal defaults train us to give up on our communities’ democratic potential, inclining us to be more tolerant of autocratic tech CEOs and authoritarian tendencies among politicians. But online spaces could be sites of a creative, radical, and democratic renaissance. Using media archaeology, political theory, and participant observation, Schneider shows how the internet can learn from governance legacies of the past to become a more democratic medium, responsive and inventive unlike anything that has come before…(More)”.

Winning the Battle of Ideas: Exposing Global Authoritarian Narratives and Revitalizing Democratic Principles


Report by Joseph Siegle: “Democracies are engaged in an ideological competition with autocracies that could reshape the global order. Narratives are a potent, asymmetric instrument of power, as they reframe events in a way that conforms to and propagates a particular worldview. Over the past decade and a half, autocracies like Russia and China have led the effort to disseminate authoritarian narratives globally, seeking to normalize authoritarianism as an equally viable and legitimate form of government. How do authoritarian narratives reframe an unappealing value proposition, with the aim of making the democratic path seem less attractive and offering authoritarianism as an alternative model? How can democracies reemphasize their core principles and remind audiences of democracy’s moral, developmental, and security advantages?…(More)”.

In the long run: the future as a political idea


Book by Jonathan White: “Democracy is future-oriented and self-correcting: today’s problems can be solved, we are told, in tomorrow’s elections. But the biggest issues facing the modern world – from climate collapse and pandemics to recession and world war – each apparently bring us to the edge of the irreversible. What happens to democracy when the future seems no longer open?

In this eye-opening history of ideas, Jonathan White investigates how politics has long been directed by shifting visions of the future, from the birth of ideologies in the nineteenth century to Cold War secrecy and the excesses of the neoliberal age.

As an inescapable sense of disaster defines our politics, White argues that a political commitment to the long-term may be the best way to safeguard democracy. Wide in scope and sharply observed, In the Long Run is a history of the future that urges us to make tomorrow new again…(More)”.

Guardrails: Guiding Human Decisions in the Age of AI


Book by Urs Gasser and Viktor Mayer-Schönberger: “When we make decisions, our thinking is informed by societal norms, “guardrails” that guide our decisions, like the laws and rules that govern us. But what are good guardrails in today’s world of overwhelming information flows and increasingly powerful technologies, such as artificial intelligence? Based on the latest insights from the cognitive sciences, economics, and public policy, Guardrails offers a novel approach to shaping decisions by embracing human agency in its social context.

In this visionary book, Urs Gasser and Viktor Mayer-Schönberger show how the quick embrace of technological solutions can lead to results we don’t always want, and they explain how society itself can provide guardrails more suited to the digital age, ones that empower individual choice while accounting for the social good, encourage flexibility in the face of changing circumstances, and ultimately help us to make better decisions as we tackle the most daunting problems of our times, such as global injustice and climate change.

Whether we change jobs, buy a house, or quit smoking, thousands of decisions large and small shape our daily lives. Decisions drive our economies, seal the fate of democracies, create war or peace, and affect the well-being of our planet. Guardrails challenges the notion that technology should step in where our own decision making fails, laying out a surprisingly human-centered set of principles that can create new spaces for better decisions and a more equitable and prosperous society…(More)”.

Regulating AI Deepfakes and Synthetic Media in the Political Arena


Report by Daniel Weiner and Lawrence Norden: “…Part I of this resource defines the terms deepfake, synthetic media, and manipulated media in more detail. Part II sets forth some necessary considerations for policymakers, specifically:

  • The most plausible rationales for regulating deepfakes and other manipulated media when used in the political arena. In general, the necessity of promoting an informed electorate and the need to safeguard the overall integrity of the electoral process are among the most compelling rationales for regulating manipulated media in the political space.
  • The types of communications that should be regulated. Regulations should reach synthetic images and audio as well as video. Policymakers should focus on curbing or otherwise limiting depictions of events or statements that did not actually occur, especially those appearing in paid campaign ads and certain other categories of paid advertising or otherwise widely disseminated communications. All new rules should have clear carve-outs for parody, news media stories, and potentially other types of protected speech.
  • How such media should be regulated. Transparency rules — for example, rules requiring a manipulated image or audio recording to be clearly labeled as artificial and not a portrayal of real events — will usually be easiest to defend in court. Transparency will not always be enough, however; lawmakers should also consider outright bans of certain categories of manipulated media, such as deceptive audio and visual material seeking to mislead people about the time, place, and manner of voting.
  • Who regulations should target. Both bans and less burdensome transparency requirements should primarily target those who create or disseminate deceptive media, although regulation of the platforms used to transmit deepfakes may also make sense…(More)”.

2023 OECD Digital Government Index


OECD Report: “Digital government is essential to transform government processes and services in ways that improve the responsiveness and reliability of the public sector. During the COVID-19 pandemic it also proved crucial to governments’ ability to continue operating in times of crisis and provide timely services to citizens and businesses. Yet, for the digital transformation to be sustainable in the long term, it needs solid foundations, including adaptable governance arrangements, reliable and resilient digital public infrastructure, and a prospective approach to governing with emerging technologies such as artificial intelligence. This paper presents the main findings of the 2023 edition of the OECD Digital Government Index (DGI), which benchmarks the efforts made by governments to establish the foundations necessary for a coherent, human-centred digital transformation of the public sector. It comprises 155 data points from 33 member countries, 4 accession countries, and 1 partner country collected in 2022, covering the period between 1 January 2020 and 31 October 2022…(More)”.