Crowdsourced Politics


Book by Ariadne Vromen, Darren Halpin, Michael Vaughan: “This book focuses on online petitioning and crowdfunding platforms to demonstrate the everyday impact that digital communications have had on contemporary citizen participation. It argues that crowdsourced participation has become normalised and institutionalised into the repertoires of citizens and their organisations. 

To illustrate their arguments the authors use an original survey on acts of political engagement, undertaken with Australian citizens. Through detailed interviews and online analysis they show how advocacy organisations now use online petitions for strategic interventions and mobilisation. They also analyse the policy issues that mobilise citizens on crowdsourcing platforms, including a unique dataset of 17,000 petitions from the popular non-government platform, Change.org. Contrasting mass public concerns with the policy agenda of the government of the day shows there is a disjuncture and lack of responsiveness to crowdsourced citizen expression. Ultimately the book explores the long-term implications of citizen-led change for democracy…(More)”.

Top-Down and Bottom-Up Solutions to the Problem of Political Ignorance


Chapter by Hana Samaržija and Quassim Cassam: “There is broad, though not universal, agreement that widespread voter ignorance and irrational evaluation of evidence are serious threats to democracy. But there is deep disagreement over strategies for mitigating the danger. ‘Top-down’ approaches, such as epistocracy and lodging more authority in the hands of experts, seek to mitigate ignorance by concentrating more political power in the hands of the more knowledgeable segments of the population. By contrast, ‘bottom-up’ approaches seek to either raise the political competence of the general public or empower ordinary people in ways that give them better incentives to make good decisions than conventional ballot-box voting does. Examples of bottom-up strategies include increasing voter knowledge through education, various ‘sortition’ proposals, and also shifting more decisions to institutions where citizens can ‘vote with their feet’.

This chapter surveys and critiques a range of both top-down and bottom-up strategies. I conclude that top-down strategies have systematic flaws that severely limit their potential. While they should not be categorically rejected, we should be wary of adopting them on a large scale. Bottom-up strategies have significant limitations of their own. But expanding foot voting opportunities holds more promise than any other currently available option. The idea of paying voters to increase their knowledge also deserves serious consideration…(More)”.

“Co-construction” in deliberative democracy: lessons from the French Citizens’ Convention for Climate


Paper by Louis-Gaëtan Giraudet et al: “Launched in 2019, the French Citizens’ Convention for Climate (CCC) tasked 150 randomly chosen citizens with proposing fair and effective measures to fight climate change. This was to be fulfilled through an “innovative co-construction procedure”, involving some unspecified external input alongside that from the citizens. Did inputs from the steering bodies undermine the citizens’ accountability for the output? Did co-construction help the output resonate with the general public, as is expected from a citizens’ assembly? To answer these questions, we build on our unique experience in observing the CCC proceedings and documenting them with qualitative and quantitative data. We find that the steering bodies’ input, albeit significant, did not impair the citizens’ agency, creativity, and freedom of choice. While succeeding in creating consensus among the citizens who were involved, this co-constructive approach, however, failed to generate significant support among the broader public. These results call for a strengthening of the commitment structure that determines how follow-up on the proposals from a citizens’ assembly should be conducted…(More)”.

Facebook-owner Meta to share more political ad targeting data


Article by Elizabeth Culliford: “Facebook owner Meta Platforms Inc (FB.O) will share more data on targeting choices made by advertisers running political and social-issue ads in its public ad database, it said on Monday.

Meta said it would also include detailed targeting information for these individual ads in its “Facebook Open Research and Transparency” database used by academic researchers, in an expansion of a pilot launched last year.

“Instead of analyzing how an ad was delivered by Facebook, it’s really going and looking at an advertiser strategy for what they were trying to do,” said Jeff King, Meta’s vice president of business integrity, in a phone interview.

The social media giant has faced pressure in recent years to provide transparency around targeted advertising on its platforms, particularly around elections. In 2018, it launched a public ad library, though some researchers criticized it for glitches and a lack of detailed targeting data. Meta said the ad library will soon show a summary of targeting information for social issue, electoral or political ads run by a page… The company has run various programs with external researchers as part of its transparency efforts. Last year, it said a technical error meant flawed data had been provided to academics in its “Social Science One” project…(More)”.

The Political Philosophy of AI: An Introduction


Book by Mark Coeckelbergh: “Political issues people care about such as racism, climate change, and democracy take on new urgency and meaning in the light of technological developments such as AI. How can we talk about the politics of AI while moving beyond mere warnings and easy accusations?

This is the first accessible introduction to the political challenges related to AI. Using political philosophy as a unique lens through which to explore key debates in the area, the book shows how various political issues are already impacted by emerging AI technologies: from justice and discrimination to democracy and surveillance. Revealing the inherently political nature of technology, it offers a rich conceptual toolbox that can guide efforts to deal with the challenges raised by what turns out to be not only artificial intelligence but also artificial power.

This timely and original book will appeal to students and scholars in philosophy of technology and political philosophy, as well as tech developers, innovation leaders, policy makers, and anyone interested in the impact of technology on society…(More)”.

Updated Selected Readings on Inaccurate Data, Half-Truths, Disinformation, and Mob Violence


By Fiona Cece, Uma Kalkar, Stefaan Verhulst, and Andrew J. Zahuranec

As part of an ongoing effort to contribute to current topics in data, technology, and governance, The GovLab’s Selected Readings series provides an annotated and curated collection of recommended works on themes such as open data, data collaboration, and civic technology.

In this edition, we reflect on the one-year anniversary of the January 6, 2021 Capitol Hill Insurrection and its implications of disinformation and data misuse to support malicious objectives. This selected reading builds on the previous edition, published last year, on misinformation’s effect on violence and riots. Readings are listed in alphabetical order. New additions are highlighted in green. 

The mob attack on the US Congress was alarming and the result of various efforts to undermine the trust in and legitimacy of longstanding democratic processes and institutions. The use of inaccurate data, half-truths, and disinformation to spread hate and division is considered a key driver behind last year’s attack. Altering data to support conspiracy theories or challenging and undermining the credibility of trusted data sources to allow for alternative narratives to flourish, if left unchallenged, has consequences — including the increased acceptance and use of violence both offline and online.

The January 6th insurrection was unfortunately not a unique event, nor was it confined to the United States. While efforts to bring perpetrators of the attack to justice have been fruitful, much work remains to be done to address the willful dissemination of disinformation online. Below, we provide a curation of findings and readings that illustrate the global danger of inaccurate data, half-truths, and disinformation. Additionally, The GovLab, in partnership with the OECD, has explored data-actionable questions around how disinformation can spread across and affect society, and ways to mitigate it. Learn more at disinformation.the100questions.org.

To suggest additional readings on this or any other topic, please email info@thelivinglib.org. All our Selected Readings can be found here.

Readings and Annotations

Al-Zaman, Md. Sayeed. “Digital Disinformation and Communalism in Bangladesh.” China Media Research 15, no. 2 (2019): 68–76.

  • Md. Sayeed Al-Zaman, Lecturer at Jahangirnagar University in Bangladesh, discusses how the country’s increasing number of “netizens” are being manipulated by online disinformation that incites violence along religious lines. Social media helps quickly spread anti-Hindu and anti-Buddhist rhetoric, inflaming religious divisions between these groups and Bangladesh’s Muslim majority and impeding possibilities for “peaceful coexistence.”
  • Swaths of online information make it difficult to fact-check, and alluring stories that feed on people’s fear and anxieties are highly likely to be disseminated, leading to a spread of rumors across Bangladesh. Moreover, disruptors and politicians wield religion to target citizens’ emotionality and create violence.
  • Al-Zaman recounts two instances of digital disinformation and communalism. First, in 2016, following a Facebook post supposedly criticizing Islam, riots destroyed 17 temples and 100 houses in Nasrinagar and led to protests in neighboring villages. While the exact source of the disinformation post was never confirmed, a man was beaten and jailed for it despite a lack of robust evidence of his wrongdoing. Second, in 2012, after a Facebook post circulating an image of someone desecrating the Quran tagged a Buddhist youth in the picture, 12 Buddhist monasteries and 100 houses in Ramu were destroyed. Through social media, a mob of over 6,000 people, including local Muslim community leaders, attacked the town of Ramu. A later investigation found that the image had been doctored and spread by a member of an Islamic extremist group in a coordinated attack that manipulated Islamic religious sentiment via fake news to target Buddhist minorities.

Banaji, Shakuntala, and Ram Bhat. “WhatsApp Vigilantes: An exploration of citizen reception and circulation of WhatsApp misinformation linked to mob violence in India.” London School of Economics and Political Science, 2019.

  • London School of Economics and Political Science Associate Professor Shakuntala Banaji and Researcher Ram Bhat articulate how discriminated groups (Dalits, Muslims, Christians, and Adivasis) have been targeted by peer-to-peer communications spreading allegations of bovine related issues, child-snatching, and organ harvesting, culminating in violence against these groups with fatal consequences.
  • WhatsApp messages work in tandem with ideas, tropes, messages, and stereotypes already in the public domain, providing “verification” of fake news.
  • WhatsApp use is gendered, and users are predisposed to believe misinformation and spread misinformation, particularly if it targets a discriminated group that they already have negative and discriminatory feelings towards.
  • Among most WhatsApp users, civic trust is based on ideological, family, and community ties.
  • Restricting sharing, tracking, and reporting of misinformation using “beacon” features and imposing penalties on groups can serve to mitigate the harmful effects of fake news.

Funke, Daniel, and Susan Benkelman. “Misinformation is inciting violence around the world. And tech platforms don’t seem to have a plan to stop it.” Poynter, April 4, 2019.

  • Misinformation leading to violence has been on the rise worldwide. PolitiFact writer Daniel Funke and Susan Benkelman, former Director of Accountability Journalism at the American Press Institute, point to mob violence against Roma people in France after rumors of kidnapping attempts circulated on Facebook and Snapchat; the immolation of two men in Puebla, Mexico, following fake news spread on WhatsApp of a gang of organ harvesters on the prowl; and false kidnapping claims sent through WhatsApp fueling lynch mobs in India.
  • Slow (re)action to fake news allows mis/disinformation to prey on vulnerable people and infiltrate society. Examples covered in the article discuss how fake news preys on older Americans who lack strong digital literacy. Virulent online rumors have made it difficult for citizens to separate fact from fiction during the Indian general election. Foreign adversaries like Russia are bribing Facebook users for their accounts in order to spread false political news in Ukraine.
  • The article notes that increases in violence caused by disinformation are doubly enabled by “a lack of proper law enforcement” and inaction by technology companies. Facebook, YouTube, and WhatsApp have no coordinated, comprehensive plans to fight fake news and attempt to shift responsibility to “fact-checking partners.” Troublingly, it appears that some platforms deliberately delay the removal of mis/disinformation to attract more engagement. Only when facing intense pressure from policymakers do these companies seem to remove misleading information.

Kyaw, Nyi Nyi. “Facebooking in Myanmar: From Hate Speech to Fake News to Partisan Political Communication.” ISEAS — Yusof Ishak Institute, no. 36 (2019): 1–10.

  • In the past decade, the number of plugged-in Myanmar citizens has skyrocketed to 39% of the population. Nearly all of these 21 million internet users are active on Facebook, where much political rhetoric occurs. Widespread fake news disseminated through Facebook has led to an increase in anti-Muslim sentiment and the spread of misleading, inflammatory headlines.
  • Attempts to curtail fake news on Facebook are difficult. In Myanmar, a developing country where “the rule of law is weak,” monitoring and regulation on social media is not easily enforceable. Criticism from Myanmar and international governments and civil society organizations resulted in Facebook banning and suspending fake news accounts and pages and employing stricter, more invasive monitoring of citizen Facebook use — usually without their knowledge. However, despite Facebook’s key role in agitating and spreading fake news, no political or oversight bodies have “explicitly held the company accountable.”
  • Nyi Nyi Kyaw, Visiting Fellow at the Yusof Ishak Institute in Singapore, notes a cyber law initiative set in motion by the Myanmar government to strengthen social media monitoring methods but is wary of Myanmar’s “human and technological capacity” to enforce these regulations.

Lewandowsky, Stephan, and Sander van der Linden. “Countering Misinformation and Fake News Through Inoculation and Prebunking.” European Review of Social Psychology 32, no. 2 (2020): 348–384.

  • Researchers Stephan Lewandowsky and Sander van der Linden present a scan of conventional instances and tools to combat misinformation. They note the staying power and spread of sensational sound bites, especially in the political arena, and their real-life consequences on problems such as anti-vaccination campaigns, ethnically charged violence in Myanmar, and mob lynchings in India spurred by WhatsApp rumors.
  • To proactively stop misinformation, the authors introduce the psychological theory of “inoculation,” which forewarns people that they have been exposed to misinformation and alerts them to the ways by which they could be misled to make them more resilient to false information. The paper highlights numerous successes of inoculation in combating misinformation and presents it as a strategy to prevent disinformation-fueled violence.
  • The authors then discuss best strategies to deploy fake news inoculation and generate “herd” cognitive immunity in the face of microtargeting and filter bubbles online.

Osmundsen, Mathias, Alexander Bor, Peter Bjerregaard Vahlstrup, Anja Bechmann, and Michael Bang Petersen. “Partisan polarization is the primary psychological motivation behind ‘fake news’ sharing on Twitter.” American Political Science Review 115, no. 3 (2020): 999–1015.

  • Mathias Osmundsen and colleagues explore the proliferation of fake news on digital platforms. Are those who share fake news “ignorant and lazy,” malicious actors, or playing political games online? Through a psychological mapping of over 2,000 Twitter users across 500,000 stories, the authors find that disruption and polarization fuel fake news dissemination more so than ignorance.
  • Given the increasingly polarized American landscape, spreading fake news can help spread “partisan feelings,” increase interparty social and political cohesion, and call supporters to incendiary and violent action. Thus, misinformation prioritizes usefulness in reaching end goals over the accuracy and veracity of information.
  • Overall, the authors find that those with low political awareness and media literacy are the least likely to share fake news. While older individuals were more likely to share fake news, the inability to distinguish real from fake information was not a major motivator of the spread of misinformation.
  • For the most part, those who share fake news are knowledgeable about the political sphere and online spaces. They are primarily motivated to ‘troll’ or create online disruption, or to further their partisan stance. In the United States, right-leaning individuals are more likely to follow fake news because they “must turn to more extreme news sources” to find information aligned with their politics, while left-leaning people can find more credible sources from liberal and centrist outlets.

Piazza, James A. “Fake news: the effects of social media disinformation on domestic terrorism.” Dynamics of Asymmetric Conflict (2021): 1-23.

  • James A. Piazza of Pennsylvania State University examines the role of online misinformation in driving distrust, political extremism, and political violence. He reviews some of the ongoing literature on online misinformation and disinformation in driving these and other adverse outcomes.
  • Using data on incidents of terrorism from the Global Terrorism Database and three independent measures of disinformation derived from the Digital Society Project, Piazza finds “disinformation propagated through online social media outlets is statistically associated with increases in domestic terrorism in affected countries. The impact of disinformation on terrorism is mediated, significantly and substantially, through increased political polarization.”
  • Piazza notes that his results support other literature that shows the real-world effects of online disinformation. He emphasizes the need for further research and investigation to better understand the issue.

Posetti, Julie, Nermine Aboulez, Kalina Bontcheva, Jackie Harrison, and Silvio Waisbord. “Online violence Against Women Journalists: A Global Snapshot of Incidence and Impacts.” United Nations Educational, Scientific and Cultural Organization, 2020.

  • The survey focuses on incidence, impacts, and responses to online violence against women journalists resulting from “coordinated disinformation campaigns leveraging misogyny and other forms of hate speech.” There were 901 respondents, hailing from 125 countries and covering various ethnicities.
  • 73% of women journalists reported facing online violence and harassment in the course of their work, suggesting escalating gendered violence against women in online media.
  • The impact of COVID-19 and populist politics is evident in the gender-based harassment and disinformation campaigns, the source of which is traced to political actors (37%) or anonymous/troll accounts (57%).
  • Investigative reporting on gender issues, politics and elections, immigration and human rights abuses, or fake news itself seems to attract online retaliation and targeted disinformation campaigns against the reporters.

Rajeshwari, Rema. “Mob Lynching and Social Media.” Yale Journal of International Affairs, June 1, 2019.

  • District Police Chief of Jogulamba Gadwal, India, and Yale World Fellow (’17) Rema Rajeshwari writes about how misinformation and disinformation are becoming a growing problem and security threat in India. The fake news phenomenon has spread hatred, fueled sectarian tensions, and continues to diminish social trust in society.
  • One example of this can be found in Jogulamba Gadwal, where videos and rumors were spread throughout social media about how the Parthis, a stigmatized tribal group, were committing acts of violence in the village. This led to a series of mob attacks and killings — “thirty-three people were killed in sixty-nine mob attacks since January 2018 due to rumors” — that could be traced to rumors spread on social media.
  • More importantly, however, Rajeshwari elaborates on how self-regulation and local campaigns can be used as an effective intervention for mis/dis-information. As a police officer, Rajeshwari fought a battle that was both online and on the ground, including the formation of a group of “tech-savvy” cops who could monitor local social media content and flag inaccurate and/or malicious posts, and mobilizing local WhatsApp groups alongside village headmen who could encourage community members to not forward fake messages. These interventions effectively combined local traditions and technology to achieve an “early warning-focused deterrence.”

Taylor, Luke. “Covid-19 Misinformation Sparks Threats and Violence against Doctors in Latin America.” BMJ (2020): m3088.

  • Journalist Luke Taylor details the many incidents of how disinformation campaigns across Latin America have resulted in the mistreatment of health care workers during the Coronavirus pandemic. Examining case studies from Mexico and Colombia, Taylor finds that these mis/disinformation campaigns have resulted in health workers receiving death threats and being subject to acts of aggression.
  • One instance of this link between disinformation and acts of aggression is the 47 reported cases of aggression toward health workers in Mexico, alongside 265 reported complaints against health workers. The National Council to Prevent Discrimination noted these acts were the result of a loss of trust in government and government institutions, further exacerbated by conspiracy theories that circulated on WhatsApp and other social media channels.
  • Another example of false narratives can be seen in Colombia, where a politician theorized that a “covid cartel” of doctors was admitting COVID-19 patients to ICUs in order to receive payments (e.g., a cash payment of ~17,000 Colombian pesos for every dead patient with a covid-19 diagnosis). This false narrative of doctors being incentivized to increase beds for COVID-19 patients quickly spread across social media platforms, leading many of those who were ill to avoid seeking care. The rumor also led to doctors in Colombia receiving death threats and acts of intimidation.

“The Danger of Fake News in Inflaming or Suppressing Social Conflict.” Center for Information Technology and Society — University of California Santa Barbara, n.d.

  • The article provides case studies of how fake news can be used to intensify social conflict for political gains (e.g., by distracting citizens from having a conversation about critical issues and undermining the democratic process).
  • The cases elaborated upon are 1) Pizzagate: a fake news story that linked human trafficking to a presidential candidate and a political party, and ultimately led to a shooting; 2) Russia’s Internet Research Agency: Russian agents created social media accounts to spread fake news that favored Donald Trump during the 2016 election, and even instigated online protests about social issues (e.g., a BLM protest); and 3) Cambridge Analytica: a British company that used unauthorized social media data for sensationalistic and inflammatory targeted US political advertisements.
  • Notably, it points out that fake news undermines a citizen’s ability to participate in the democratic process and make accurate decisions in important elections.

Tworek, Heidi. “Disinformation: It’s History.” Center for International Governance Innovation, July 14, 2021.

  • While some public narratives frame online disinformation and its influence on real-world violence as “unprecedented and unparalleled,” Professor Heidi Tworek of the University of British Columbia points out that “assumptions about the history of disinformation” have influenced (and continue to influence) policymaking to combat fake news. She argues that today’s seemingly unprecedented events are rooted in tactics similar to those of the past, such as when Finnish policymakers invested in a national communications strategy to fight foreign disinformation coming from Russia and the Soviet Union.
  • She emphasizes the power of learning from historical events to guide modern methods of fighting political misinformation. Connecting today’s concerns about election fraud, foreign interference, and conspiracy theories to past practices, such as Soviet and American Cold War efforts at “funding magazines [and] spreading rumors” to further anti-opposition sentiment and hatred, reinforces that disinformation is a long-standing problem.

Ward, Megan, and Jessica Beyer. “Vulnerable Landscapes: Case Studies of Violence and Disinformation.” Wilson Center, August 2019.

  • This article discusses instances where disinformation inflamed already existing social, political, and ideological cleavages, and ultimately caused violence. Specifically, it elaborates on instances from the US-Mexico border, India, Sri Lanka, and during the course of three Latin American elections.
  • Though the cases are meant to be illustrative and highlight the spread of disinformation globally, the violence in these cases was shown to be affected by the distinct social fabric of each place. Their findings lend credence to the idea that disinformation helped spark violence in places that were already vulnerable and tense.
  • Indeed, now that disinformation can be distributed so quickly via social media, these cases have risen globally, compounded by declining trust in public institutions, low levels of media literacy, meager actions by social media companies, and government actors who exploit disinformation for political gain. It is the interaction of factors such as distrust in traditional media and public institutions, lack of content moderation on social media, and ethnic divides that renders societies vulnerable and susceptible to violence.
  • One example of this is at the US-Mexico border, where disinformation campaigns have built on pre-existing xenophobia and led to instances of mob violence and mass shootings. Inflamed by disinformation campaigns claiming that migrant caravans contain criminals (e.g., the invasion narratives often used to describe migrant caravans), the armed group United Constitutional Patriots (UCP) impersonated law enforcement and detained migrants at the US border, often turning them over to border officials. The FBI has since arrested members of the UCP.

We welcome other sources we may have missed — please share any suggested additions with us at datastewards [at] thegovlab.org or The GovLab on Twitter.

Digital Technology, Politics, and Policy-Making


Open access book by Fabrizio Gilardi: “The rise of digital technology has been the best of times, and also the worst, a roller coaster of hopes and fears: “social media have gone—in the popular imagination at least—from being a way for pro-democratic forces to fight autocrats to being a tool of outside actors who want to attack democracies” (Tucker et al., 2017, 47). The 2016 US presidential election raised fundamental questions regarding the compatibility of the internet with democracy (Persily, 2017). The divergent assessments of the promises and risks of digital technology have to do, in part, with the fact that it has become such a pervasive phenomenon. Whether digital technology is, on balance, a net benefit or harm for democratic processes and institutions depends on which specific aspects we focus on. Moreover, the assessment is not value neutral, because digital technology has become inextricably linked with our politics. As Farrell (2012, 47) argued a few years ago, “[a]s the Internet becomes politically normalized, it will be ever less appropriate to study it in isolation but ever more important to think clearly, and carefully, about its relationship to politics.” Reflecting on this issue requires going beyond the headlines, which tend to focus on the most dramatic concerns and may have a negativity bias common in news reporting in general. The shortage of hard facts in this area, linked to the singular challenges of studying the connection between digital technology and politics, exacerbates the problem.
Since it affects virtually every aspect of politics and policy-making, the nature and effects of digital technology have been studied from many different angles in increasingly fragmented literatures. For example, studies of disinformation and social media usually do not acknowledge research on the usage of artificial intelligence in public administration—for good reasons, because such is the nature of specialized academic research. Similarly, media attention tends to concentrate on the most newsworthy aspects, such as the role of Facebook in elections, without connecting them to other related phenomena. The compartmentalization of academic and public attention in this area is understandable, but it obscures the relationships that exist among the different parts. Moreover, the fact that scholarly and media attention are sometimes out of sync might lead policy-makers to focus on solutions before there is a scientific consensus on the nature and scale of the problems. For example, policy-makers may emphasize curbing “fake news” while there is still no agreement in the research community about its effects on political outcomes…(More)”.

Mathematicians are deploying algorithms to stop gerrymandering


Article by Siobhan Roberts: “The maps for US congressional and state legislative races often resemble electoral bestiaries, with bizarrely shaped districts emerging from wonky hybrids of counties, precincts, and census blocks.

It’s the drawing of these maps, more than anything—more than voter suppression laws, more than voter fraud—that determines how votes translate into who gets elected. “You can take the same set of votes, with different district maps, and get very different outcomes,” says Jonathan Mattingly, a mathematician at Duke University in the purple state of North Carolina. “The question is, if the choice of maps is so important to how we interpret these votes, which map should we choose, and how should we decide if someone has done a good job in choosing that map?”

Over recent months, Mattingly and like-minded mathematicians have been busy in anticipation of a data release expected today, August 12, from the US Census Bureau. Every decade, new census data launches the decennial redistricting cycle—state legislators (or sometimes appointed commissions) draw new maps, moving district lines to account for demographic shifts.

In preparation, mathematicians are sharpening new algorithms—open-source tools, developed over recent years—that detect and counter gerrymandering, the egregious practice giving rise to those bestiaries, whereby politicians rig the maps and skew the results to favor one political party over another. Republicans have openly declared that with this redistricting cycle they intend to gerrymander a path to retaking the US House of Representatives in 2022…(More)”.
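The ensemble (outlier) analysis behind this line of work can be sketched in a few lines: sample many neutral district maps, record the seat outcome each produces from the same set of votes, and ask how rare the enacted map's outcome is within that distribution. The toy sketch below invents all of its specifics (precinct vote shares, district sizes, and the "enacted" plan); real analyses such as Mattingly's sample plans with Markov chain Monte Carlo over actual census geography rather than naive random partitions.

```python
import random

random.seed(0)  # reproducible toy run

# Hypothetical party-A vote shares in 12 precincts, to be grouped into
# 4 districts of 3 precincts each.
vote_share = [0.62, 0.58, 0.55, 0.53, 0.51, 0.49,
              0.48, 0.47, 0.45, 0.44, 0.42, 0.40]

def seats_won(plan):
    """Count districts where party A's average precinct share exceeds 50%."""
    return sum(
        1 for district in plan
        if sum(vote_share[p] for p in district) / len(district) > 0.5
    )

def random_plan():
    """Randomly partition the 12 precincts into 4 districts of 3."""
    precincts = list(range(len(vote_share)))
    random.shuffle(precincts)
    return [precincts[i:i + 3] for i in range(0, len(precincts), 3)]

# Build an ensemble of neutral plans and record each seat outcome.
ensemble = [seats_won(random_plan()) for _ in range(10_000)]

# A hypothetical enacted plan that "packs" party A's strongest precincts
# into one district, diluting its strength everywhere else.
enacted = [[0, 1, 2], [3, 8, 11], [4, 9, 10], [5, 6, 7]]
enacted_seats = seats_won(enacted)

# If the enacted outcome is rare in the neutral ensemble, that rarity is
# statistical evidence that the map is an outlier -- a gerrymandering signal.
rarer = sum(1 for s in ensemble if s <= enacted_seats) / len(ensemble)
print(f"enacted plan wins {enacted_seats} seat(s); "
      f"fraction of neutral plans with <= that many: {rarer:.3f}")
```

The point of the ensemble approach is that "the same set of votes, with different district maps" yields a distribution of outcomes, which gives a neutral baseline against which any single proposed map can be judged.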

Selected Readings on the Use of Artificial Intelligence in the Public Sector


By Kateryna Gazaryan and Uma Kalkar

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works focuses on algorithms and artificial intelligence in the public sector.

As artificial intelligence matures, governments have turned to it to improve the speed and quality of public sector service delivery, among other objectives. Below, we provide a selection of recent literature that examines how the public sector has adopted AI to serve constituents and solve public problems. While the use of AI in government can cut costs and administrative work, these technologies are often early in their development and difficult for organizations to understand and control, with potentially harmful effects as a result. As such, this selected reading explores not only the use of artificial intelligence in governance but also its benefits and consequences.

Readings are listed in alphabetical order.

Berryhill, Jamie, Kévin Kok Heang, Rob Clogher, and Keegan McBride. “Hello, World: Artificial Intelligence and Its Use in the Public Sector.” OECD Working Papers on Public Governance no. 36 (2019): https://doi.org/10.1787/726fd39d-en.

This working paper emphasizes the importance of defining AI for the public sector and outlining use cases of AI within governments. It provides a map of 50 countries that have implemented or set in motion the development of AI strategies and highlights where and how these initiatives are cross-cutting, innovative, and dynamic. Additionally, the piece provides policy recommendations governments should consider when exploring public AI strategies to adopt holistic and humanistic approaches.

Kuziemski, Maciej, and Gianluca Misuraca. “AI Governance in the Public Sector: Three Tales from the Frontiers of Automated Decision-Making in Democratic Settings.” Telecommunications Policy 44, no. 6 (2020): 101976. 

Kuziemski and Misuraca explore how the use of artificial intelligence in the public sector can exacerbate existing power imbalances between the public and the government. They consider the European Union’s artificial intelligence “governance and regulatory frameworks” and compare these policies with those of Canada, Finland, and Poland. Drawing on previous scholarship, the authors outline the goals, drivers, barriers, and risks of incorporating artificial intelligence into public services and assess existing regulations against these factors. Ultimately, they find that the “current AI policy debate is heavily skewed towards voluntary standards and self-governance” while minimizing the influence of power dynamics between governments and constituents. 

Misuraca, Gianluca, and Colin van Noordt. “AI Watch, Artificial Intelligence in Public Services: Overview of the Use and Impact of AI in Public Services in the EU.” JRC Science for Policy Report, EUR 30255 (2020).

This study provides “evidence-based scientific support” for the European Commission as it navigates AI regulation via an overview of ways in which European Union member-states use AI to enhance their public sector operations. While AI has the potential to positively disrupt existing policies and functionalities, this report finds gaps in how AI gets applied by governments. It suggests the need for further research centered on the humanistic, ethical, and social ramification of AI use and a rigorous risk assessment from a “public-value perspective” when implementing AI technologies. Additionally, efforts must be made to empower all European countries to adopt responsible and coherent AI policies and techniques.

Saldanha, Douglas Morgan Fullin, and Marcela Barbosa da Silva. “Transparency and Accountability of Government Algorithms: The Case of the Brazilian Electronic Voting System.” Cadernos EBAPE.BR 18 (2020): 697–712.

Saldanha and da Silva note that the open data and open government revolutions have increased citizen demand for algorithmic transparency. Algorithms are increasingly used by governments to speed up processes and reduce costs, but their black-box systems and lack of explainability allow implicit and explicit bias and discrimination to enter their calculations. The authors conduct a qualitative study of the “practices and characteristics of the transparency and accountability” in the Brazilian e-voting system across seven dimensions: consciousness; access and reparations; accountability; explanation; data origin, privacy and justice; auditing; and validation, precision and tests. They find the Brazilian e-voting system fulfilled the need to inform citizens about the benefits and consequences of data collection and algorithm use but severely lacked in demonstrating accountability and opening algorithm processes to citizen oversight. They put forth policy recommendations to increase the e-voting system’s accountability to Brazilians and strengthen auditing and oversight processes to reduce the current distrust in the system.

Sharma, Gagan Deep, Anshita Yadav, and Ritika Chopra. “Artificial intelligence and effective governance: A review, critique and research agenda.” Sustainable Futures 2 (2020): 100004.

This paper conducts a systematic review of the literature on how AI is used across different branches of government—specifically healthcare; information, communication, and technology (ICT); environment; transportation; policymaking; and economic sectors. Across the 74 papers surveyed, the authors find a gap in the research on selecting and implementing AI technologies, as well as on their monitoring and evaluation. They call on future research to assess the impact of AI pre- and post-adoption in governance, along with the risks and challenges associated with the technology.

Tallerås, Kim, Terje Colbjørnsen, Knut Oterholm, and Håkon Larsen. “Cultural Policies, Social Missions, Algorithms and Discretion: What Should Public Service Institutions Recommend?” In Lecture Notes in Computer Science (2020).

Tallerås et al. examine how the use of algorithms by public services, such as public radio and libraries, influences broader society and culture. For instance, to modernize its offerings, Norway’s broadcasting corporation (NRK) has adopted online platforms similar to popular private streaming services. However, NRK’s filtering process has faced “exposure diversity” problems that narrow recommendations to already popular entertainment and push Norway’s cultural offerings towards a singularity. As a public institution, NRK is required to “fulfill […] some cultural policy goals,” raising the question of how public media services can remain relevant in the era of algorithms fed by “individualized digital culture.” Efforts are currently underway to employ recommendation systems that balance cultural diversity with personalized relevance, engaging individuals while upholding the socio-cultural mission of public media.

Vogl, Thomas, Cathrine Seidelin, Bharath Ganesh, and Jonathan Bright. “Smart Technology and the Emergence of Algorithmic Bureaucracy: Artificial Intelligence in UK Local Authorities.” Public Administration Review 80, no. 6 (2020): 946–961.

Local governments are using “smart technologies” to create more efficient and effective public service delivery. These tools are twofold: not only do they help the public interact with local authorities, they also streamline the tasks of government officials. To better understand the digitization of local government, the authors conducted surveys, desk research, and in-depth interviews with stakeholders from local British governments to understand reasoning, processes, and experiences within a changing government framework. Vogl et al. found an increase in “algorithmic bureaucracy” at the local level to reduce administrative tasks for government employees, generate feedback loops, and use data to enhance services. While the shift toward digital local government demonstrates initiatives to utilize emerging technology for public good, further research is required to determine which demographics are not involved in the design and implementation of smart technology services and how to identify and include these audiences.

Wirtz, Bernd W., Jan C. Weyerer, and Carolin Geyer. “Artificial intelligence and the public sector—Applications and challenges.” International Journal of Public Administration 42, no. 7 (2019): 596–615.

The authors provide an extensive review of the existing literature on AI uses and challenges in the public sector to identify the gaps in current applications. The developing nature of AI in public service has led to differing definitions of what constitutes AI and what risks and benefits it poses to the public. The authors also note the lack of focus on the downfalls of AI in governance, with studies tending to emphasize the positive aspects of the technology. From this qualitative analysis, the researchers highlight ten AI applications: knowledge management, process automation, virtual agents, predictive analytics and data visualization, identity analytics, autonomous systems, recommendation systems, digital assistants, speech analytics, and threat intelligence. They also note four challenge dimensions—technology implementation, laws and regulation, ethics, and society. From these applications and risks, Wirtz et al. provide a “checklist for public managers” to make informed decisions on how to integrate AI into their operations.

Wirtz, Bernd W., Jan C. Weyerer, and Benjamin J. Sturm. “The dark sides of artificial intelligence: An integrated AI governance framework for public administration.” International Journal of Public Administration 43, no. 9 (2020): 818–829.

As AI is increasingly popularized and picked up by governments, Wirtz et al. highlight the lack of research on the challenges and risks—specifically, privacy and security—associated with implementing AI systems in the public sector. After assessing existing literature and uncovering gaps in the main governance frameworks, the authors outline the three areas of challenges of public AI: law and regulations, society, and ethics. Last, they propose an “integrated AI governance framework” that takes into account the risks of AI for a more holistic “big picture” approach to AI in the public sector.

Zuiderwijk, Anneke, Yu-Che Chen, and Fadi Salem. “Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda.” Government Information Quarterly (2021): 101577.

Following a literature review on the risks and possibilities of AI in the public sector, Zuiderwijk, Chen, and Salem design a research agenda centered around the “implications of the use of AI for public governance.” The authors provide eight process recommendations, including: avoiding superficial buzzwords in research; conducting domain- and locality-specific research on AI in governance; shifting from qualitative analysis to diverse research methods; applying private sector “practice-driven research” to public sector study; furthering quantitative research on AI use by governments; creating “explanatory research designs”; sharing data for broader study; and adopting multidisciplinary reference theories. Further, they note the need for scholarship to delve into best practices, risk management, stakeholder communication, multisector use, and impact assessments of AI in the public sector to help decision-makers make informed decisions on the introduction, implementation, and oversight of AI in the public sector.

Many in U.S., Western Europe Say Their Political System Needs Major Reform


Pew Research Center: “A four-nation Pew Research Center survey conducted in November and December of 2020 finds that roughly two-thirds of adults in France and the U.S., as well as about half in the United Kingdom, believe their political system needs major changes or needs to be completely reformed. Calls for significant reform are less common in Germany, where about four-in-ten express this view….

In all four countries, there is considerable interest in political reforms that would potentially allow ordinary citizens to have more power over policymaking. Citizen assemblies, or forums where citizens chosen at random debate issues of national importance and make recommendations about what should be done, are overwhelmingly popular. Around three-quarters or more in each country say it is very or somewhat important for the national government to create citizen assemblies. About four-in-ten say it’s very important. Such processes are in use nationally in France and the UK to debate climate change policy, and they have become increasingly common in nations around the world in recent years.

Chart showing citizen assemblies and referendums are popular ideas in all four countries

Citizen assemblies are popular across the ideological spectrum but are especially so among people who place themselves on the political left. Those who think their political system needs significant reform are also particularly likely to say it is important to create citizen assemblies.

There are also high levels of support for allowing citizens to vote directly to decide what becomes law for some key issues. About seven-in-ten in the U.S., Germany and France say it is important, in line with previous findings about support for direct democracy. In the UK, where crucial issues such as Scottish independence and Brexit were decided by referendum, support is somewhat lower – 63% say it is important for the government to use referendums to decide some key issues, and just 27% rate this as very important.

These are among the findings of a new Pew Research Center survey conducted from Nov. 10 to Dec. 23, 2020, among 4,069 adults in France, Germany, the UK and the U.S. This report also includes findings from 26 focus groups conducted in 2019 in the U.S. and UK….(More)”.