Despite Its Problems, Network Technology Can Help Renew Democracy


Essay by Daniel Araya: “The impact of digital technologies on contemporary economic and social development has been nothing short of revolutionary. The rise of the internet has transformed the way we share content, buy and sell goods, and manage our institutions. But while the hope of the internet has been its capacity to expand human connection and bring people together, the reality has often been something else entirely.

When social media networks first emerged about a decade ago, they were hailed as “technologies of liberation” with the capacity to spread democracy. While these social networks have undeniably democratized access to information, they have also helped to stimulate social and political fragmentation, eroding the discursive fibres that hold democracies together.

Prior to the internet, news and media were the domain of professional journalists, overseen by powerful experts, and shaped by gatekeepers. However, in the age of the internet, platforms circumvent the need for gatekeepers altogether. Bypassing the centralized distribution channels that have served as a foundation to mass industrial societies, social networks have begun reshaping the way democratic societies build consensus. Given the importance of discourse to democratic self-government, concern is growing that democracy is failing…(More)”.

Parliament Buildings: The Architecture of Politics in Europe


Book edited by Sophia Psarra, Uta Staiger, and Claudia Sternberg: “As political polarisation undermines confidence in the shared values and established constitutional orders of many nations, it is imperative that we explore how parliaments are to stay relevant and accessible to the citizens whom they serve. The rise of modern democracies is thought to have found physical expression in the staged unity of the parliamentary seating plan. However, the built forms alone cannot give sufficient testimony to the exercise of power in political life.

Parliament Buildings brings together architecture, history, art history, history of political thought, sociology, behavioural psychology, anthropology and political science to raise a host of challenging questions. How do parliament buildings give physical form to norms and practices, to behaviours, rituals, identities and imaginaries? How are their spatial forms influenced by the political cultures they accommodate? What kinds of histories, politics and morphologies do the diverse European parliaments share, and how do their political trajectories intersect?

This volume offers an eclectic exploration of the complex nexus between architecture and politics in Europe. Including contributions from architects who have designed or remodelled four parliament buildings in Europe, it provides the first comparative, multi-disciplinary study of parliament buildings across Europe and across history…(More)”

Unlocking the Potential: The Call for an International Decade of Data


Working Paper by Stefaan Verhulst: “The goal of this working paper is to reiterate the central importance of data – to Artificial Intelligence (AI) in particular, but more generally to the landscape of digital technology.

What follows serves as a clarion call to the global community to prioritize and advance data as the bedrock for social and economic development, especially for the UN’s Sustainable Development Goals. It begins by recognizing the existence of significant remaining challenges related to data, encompassing issues of accessibility, distribution, divides, and asymmetries. In light of these challenges, and as we propel ourselves into an era increasingly dominated by AI and AI-related innovation, the paper argues that establishing a more robust foundation for the stewardship of data is critical: a foundation that, for instance, embodies inclusivity, self-determination, and responsibility.

Finally, the paper advocates for the creation of an International Decade of Data (IDD), an initiative aimed at solidifying this foundation globally and advancing our collective efforts towards data-driven progress.

Download ‘Unlocking the Potential: The Call for an International Decade of Data’ here

Democratic Policy Development using Collective Dialogues and AI


Paper by Andrew Konya, Lisa Schirch, Colin Irwin, Aviv Ovadya: “We design and test an efficient democratic process for developing policies that reflect informed public will. The process combines AI-enabled collective dialogues that make deliberation democratically viable at scale with bridging-based ranking for automated consensus discovery. A GPT4-powered pipeline translates points of consensus into representative policy clauses from which an initial policy is assembled. The initial policy is iteratively refined with the input of experts and the public before a final vote and evaluation. We test the process three times with the US public, developing policy guidelines for AI assistants related to medical advice, vaccine information, and wars & conflicts. We show the process can be run in two weeks with 1500+ participants for around $10,000, and that it generates policy guidelines with strong public support across demographic divides. We measure 75-81% support for the policy guidelines overall, and no less than 70-75% support across demographic splits spanning age, gender, religion, race, education, and political party. Overall, this work demonstrates an end-to-end proof of concept for a process we believe can help AI labs develop common-ground policies, governing bodies break political gridlock, and diplomats accelerate peace deals…(More)”.
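
For readers curious what “bridging-based ranking” looks like in practice, one common heuristic is to rank each candidate statement by its lowest approval rate across demographic or opinion groups, so that only statements every group supports rise to the top. The sketch below illustrates that idea; the data shapes and the minimum-approval rule are illustrative assumptions, not the authors’ exact pipeline.

```python
from collections import defaultdict

def bridging_rank(votes):
    """Rank statements by their minimum approval rate across groups.

    votes: iterable of (statement_id, group, approved) triples.
    Statements that every group approves of rank highest; statements
    that split the groups sink, however popular they are overall.
    Illustrative heuristic only; the paper's pipeline may differ.
    """
    # tallies[statement][group] = [approvals, total votes]
    tallies = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for statement, group, approved in votes:
        tallies[statement][group][0] += int(approved)
        tallies[statement][group][1] += 1

    def min_group_approval(statement):
        rates = [a / t for a, t in tallies[statement].values()]
        return min(rates)

    return sorted(tallies, key=min_group_approval, reverse=True)

# "s1" is approved by both groups; "s2" splits them and ranks lower.
votes = [
    ("s1", "group_a", True), ("s1", "group_b", True),
    ("s2", "group_a", True), ("s2", "group_b", False),
]
print(bridging_rank(votes))  # -> ['s1', 's2']
```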

Matchmaking Research To Policy: Introducing Britain’s Areas Of Research Interest Database


Article by Kathryn Oliver: “Areas of research interest (ARIs) were originally recommended in the 2015 Nurse Review, which argued that if government stated what it needed to know more clearly and more regularly, then it would be easier for policy-relevant research to be produced.

During our time in government, Annette Boaz and I worked to develop these areas of research interest, mobilize experts and produce evidence syntheses and other outputs addressing them, largely in response to the COVID pandemic. As readers of this blog will know, we have learned a lot about what it takes to mobilize evidence – the hard and often hidden labor of creating and sustaining relationships, being part of transient teams, managing group dynamics, and honing listening and diplomatic skills.

Some of the challenges we encountered, such as the oft-cited cultural gap between research and policy, the relevance of evidence, and the difficulty of resourcing knowledge mobilization and evidence synthesis, require systemic responses. However, one challenge, the information gap Nurse noted between researchers and what government departments actually want to know, offered a simpler solution.

Up until September 2023, departmental ARIs were published on gov.uk in PDF or HTML format. Although a good start, we felt that having all the ARIs in one searchable database would make them more interactive and accessible. So, working with Overton, we developed the new ARI database. The primary benefits of the database will be to raise awareness of ARIs (through email alerts about new ARIs) and to improve accessibility (by holding all ARIs in one easily searchable place)…(More)”.

Assessing and Suing an Algorithm


Report by Elina Treyger, Jirka Taylor, Daniel Kim, and Maynard A. Holliday: “Artificial intelligence algorithms are permeating nearly every domain of human activity, including processes that make decisions about interests central to individual welfare and well-being. How do public perceptions of algorithmic decisionmaking in these domains compare with perceptions of traditional human decisionmaking? What kinds of judgments about the shortcomings of algorithmic decisionmaking processes underlie these perceptions? Will individuals be willing to hold algorithms accountable through legal channels for unfair, incorrect, or otherwise problematic decisions?

Answers to these questions matter at several levels. In a democratic society, a degree of public acceptance is needed for algorithms to become successfully integrated into decisionmaking processes. And public perceptions will shape how the harms and wrongs caused by algorithmic decisionmaking are handled. This report shares the results of a survey experiment designed to contribute to researchers’ understanding of how U.S. public perceptions are evolving in these respects in one high-stakes setting: decisions related to employment and unemployment…(More)”.

Can Large Language Models Capture Public Opinion about Global Warming? An Empirical Assessment of Algorithmic Fidelity and Bias


Paper by S. Lee et al.: “Large language models (LLMs) have demonstrated their potential in social science research by emulating human perceptions and behaviors, a concept referred to as algorithmic fidelity. This study assesses the algorithmic fidelity and bias of LLMs by utilizing two nationally representative climate change surveys. The LLMs were conditioned on demographics and/or psychological covariates to simulate survey responses. The findings indicate that LLMs can effectively capture presidential voting behaviors but encounter challenges in accurately representing global warming perspectives when relevant covariates are not included. GPT-4 exhibits improved performance when conditioned on both demographics and covariates. However, disparities emerge in LLM estimations of the views of certain groups, with LLMs tending to underestimate worry about global warming among Black Americans. While highlighting the potential of LLMs to aid social science research, these results underscore the importance of meticulous conditioning, model selection, survey question format, and bias assessment when employing LLMs for survey simulation. Further investigation into prompt engineering and algorithm auditing is essential to harness the power of LLMs while addressing their inherent limitations…(More)”.
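
Concretely, the conditioning the authors describe amounts to building a persona prompt from each respondent’s demographic and psychological covariates and asking the model to answer the survey item in that persona. Below is a minimal sketch of that setup; the prompt wording, model name, and client call are assumptions for illustration, not the paper’s exact protocol.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (>= 1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simulate_response(covariates: dict, question: str, options: list) -> str:
    """Answer a survey item in the persona described by `covariates`.

    Illustrative only: the paper's exact prompt wording, conditioning
    variables, and model settings may differ.
    """
    persona = ", ".join(f"{k}: {v}" for k, v in covariates.items())
    prompt = (
        f"You are a survey respondent with these characteristics: {persona}.\n"
        f"Question: {question}\n"
        f"Answer with exactly one of: {', '.join(options)}."
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling variability stands in for population variance
    )
    return reply.choices[0].message.content.strip()

answer = simulate_response(
    {"age": 52, "party": "Independent", "region": "Midwest"},
    "How worried are you about global warming?",
    ["Very worried", "Somewhat worried", "Not very worried", "Not at all worried"],
)
```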

Unintended Consequences of Data-Driven Public Participation: How Low-Traffic Neighborhood Planning Became Polarized


Paper by Alison Powell: “This paper examines how data-driven consultation contributes to dynamics of political polarization, using the case of ‘Low-Traffic Neighborhoods’ in London, UK. It explores how data-driven consultation can facilitate participation, including ‘agonistic data practices’ (Crooks and Currie, 2022) that challenge the dominant interpretations of digital data. The paper adds empirical detail to previous studies of agonistic data practices, concluding that agonistic data practices require certain normative conditions to be met; otherwise, dissenting data practices can contribute to dynamics of polarization. The results of this paper draw on empirical insights from the political context of the UK to explain how ostensibly democratic processes including data-driven consultation establish some kinds of knowledge as more legitimate than others. Apparently ‘objective’ knowledge, or calculable data, is attributed greater legitimacy than strong feelings or affective narratives. This can displace affective responses to policy decisions into insular social media spaces where polarizing dynamics are at play. Affective polarization, where political difference is solidified through appeals to feeling, creates political distance and the dehumanization of ‘others’. This can help to amplify conspiracy theories that pose risks to democracy and to the overall legitimacy of media environments. These tendencies are exacerbated when processes of consultation prescribe narrow or specific contributions, valorize quantifiable or objective data and create limited room for dissent…(More)”

AI and Democracy’s Digital Identity Crisis


Essay by Shrey Jain, Connor Spelliscy, Samuel Vance-Law and Scott Moore: “AI-enabled tools have become sophisticated enough to allow a small number of individuals to run disinformation campaigns of an unprecedented scale. Privacy-preserving identity attestations can drastically reduce instances of impersonation and make disinformation easy to identify and potentially hinder. By understanding how identity attestations are positioned across the spectrum of decentralization, we can gain a better understanding of the costs and benefits of various attestations. In this paper, we discuss attestation types, including governmental, biometric, federated, and web of trust-based, and include examples such as e-Estonia, China’s social credit system, Worldcoin, OAuth, X (formerly Twitter), Gitcoin Passport, and EAS. We believe that the most resilient systems create an identity that evolves and is connected to a network of similarly evolving identities that verify one another. In this type of system, each entity contributes its respective credibility to the attestation process, creating a larger, more comprehensive set of attestations. We believe these systems could be the best approach to authenticating identity and protecting against some of the threats to democracy that AI can pose in the hands of malicious actors. However, governments will likely attempt to mitigate these risks by implementing centralized identity authentication systems; these centralized systems could themselves pose risks to the democratic processes they are built to defend. We therefore recommend that policymakers support the development of standards-setting organizations for identity, provide legal clarity for builders of decentralized tooling, and fund research critical to effective identity authentication systems…(More)”
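
The “network of similarly evolving identities that verify one another” the authors favor is, in effect, a web-of-trust graph: each identity accumulates attestations from peers, and its credibility derives from theirs. The toy sketch below illustrates that recursive structure; the names, numbers, and credibility rule are assumptions for illustration, not a specification of any system named above.

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str
    base_credibility: float = 0.1   # standing before any peer attestations
    attesters: list = field(default_factory=list)

    def attest(self, other):
        """Record that this identity vouches for `other`."""
        other.attesters.append(self)

    def credibility(self) -> float:
        # Toy rule: each attester passes on half of its own credibility.
        # Assumes an acyclic attestation graph; real systems need cycle
        # handling and stronger Sybil resistance than shown here.
        contributed = sum(0.5 * a.credibility() for a in self.attesters)
        return min(1.0, self.base_credibility + contributed)

alice = Identity("alice", base_credibility=0.6)
bob, carol = Identity("bob"), Identity("carol")
alice.attest(bob)    # bob inherits some of alice's credibility
bob.attest(carol)    # carol inherits some of bob's, transitively
print(f"{carol.credibility():.2f}")  # -> 0.30
```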

AI in public services will require empathy, accountability


Article by Yogesh Hirdaramani: “The Australian Government Department of the Prime Minister and Cabinet has released the first of its Long-term Insights Briefings, which focuses on how the Government can integrate artificial intelligence (AI) into public services while maintaining the trustworthiness of public service delivery.

Public servants need to remain accountable and transparent with their use of AI, continue to demonstrate empathy for the people they serve, use AI to better meet people’s needs, and build AI literacy amongst the Australian public, the report stated.

The report also cited a forthcoming study that found that Australian residents with a deeper understanding of AI are more likely to trust the Government’s use of AI in service delivery. However, more than half of survey respondents reported having little knowledge of AI.

Key takeaways

The report aims to supplement current policy work on how AI can be best governed in the public service to realise its benefits while maintaining public trust.

In the longer term, the Australian Government aims to use AI to deliver personalised services to its citizens, deliver services more efficiently and conveniently, and achieve a higher standard of care for its ageing population.

AI can help public servants achieve these goals through automating processes, improving service processing and response time, and providing AI-enabled interfaces which users can engage with, such as chatbots and virtual assistants.

However, AI can also lead to unfair or unintended outcomes due to bias in training data or hallucinations, the report noted.

According to the report, the trustworthy use of AI will require public servants to:

  1. Demonstrate integrity by remaining accountable for AI outcomes and transparent about AI use
  2. Demonstrate empathy by offering face-to-face services for those with greater vulnerabilities 
  3. Use AI in ways that improve service delivery for end-users
  4. Build internal skills and systems to implement AI, while educating the public on the impact of AI

The Australian Taxation Office currently uses AI to identify high-risk business activity statements to determine whether refunds can be issued or if further review is required, noted the report. Taxpayers can appeal the decision if staff decide to deny refunds…(More)”