Institutionalizing deliberative mini-publics? Issues of legitimacy and power for randomly selected assemblies in political systems


Paper by Dimitri Courant: “Randomly selected deliberative mini-publics (DMPs) are on the rise globally. However, they remain ad hoc, opening the door to arbitrary manoeuvre and triggering a debate on their future institutionalization. What are the competing proposals aiming at institutionalizing DMPs within political systems? I suggest three ways for thinking about institutionalization: in terms of temporality, of legitimacy and support, and of power and role within a system. First, I analyze the dimension of time and how this affects DMP institutional designs. Second, I argue that because sortition produces ‘weak representatives’ with ‘humility-legitimacy’, mini-publics hardly ever make binding decisions and need to rely on external sources of legitimacy. Third, I identify four institutional models, relying on opposing views of legitimacy and politics: tamed consultation, radical democracy, representative klerocracy and hybrid polyarchy. They differ in whether mini-publics are interpreted as tools: for legitimizing elected officials; to give power to the people; or as a means to suppress voting…(More)”.

The “Onion Model”: A Layered Approach to Documenting How the Third Wave of Open Data Can Provide Societal Value


Blog post by Andrew Zahuranec, Andrew Young and Stefaan Verhulst: “There’s a lot that goes into data-driven decision-making. Behind the datasets, platforms, and analysts is a complex series of processes that inform what kinds of insight data can produce and what kinds of ends it can achieve. These individual processes can be hard to understand when viewed together, but by separating the stages out, we can not only track how data leads to decisions but also promote better and more impactful data management.

Earlier this year, The Open Data Policy Lab published the Third Wave of Open Data Toolkit to explore the elements of data re-use. At the center of this toolkit was an abstraction that we call the Open Data Framework. Divided into individual, onion-like layers, the framework shows all the processes that go into capitalizing on data in the third wave, starting with the creation of a dataset through data collaboration, creating insights, and using those insights to produce value.

This blog reiterates what’s included in each layer of this data “onion model” and demonstrates how organizations can create societal value by making their data available for re-use by other parties….(More)”.

The Myth of the Laboratories of Democracy


Paper by Charles Tyler and Heather Gerken: “A classic constitutional parable teaches that our federal system of government allows the American states to function as “laboratories of democracy.” This tale has been passed down from generation to generation, often to justify constitutional protections for state autonomy from the federal government. But scholars have failed to explain how state governments manage to overcome numerous impediments to experimentation, including resource scarcity, free-rider problems, and misaligned incentives.

This Article maintains that the laboratories account is missing a proper appreciation for the coordinated networks of third-party organizations (such as interest groups, activists, and funders) that often fuel policy innovation. These groups are the real laboratories of democracy today, as they perform the lion’s share of tasks necessary to enact new policies; they create incentives that motivate elected officials to support their preferred policies; and they mobilize the power of the federal government to change the landscape against which state experimentation occurs. If our federal system of government seeks to encourage policy experimentation, this insight has several implications for legal doctrine. At a high level of generality, courts should endeavor to create ground rules for regulating competition between political networks, rather than continuing futile efforts to protect state autonomy. The Article concludes by sketching the outlines of this approach in several areas of legal doctrine, including federal preemption of state law, conditional spending, and the anti-commandeering principle….(More)”

Selected Readings on the Use of Artificial Intelligence in the Public Sector


By Kateryna Gazaryan and Uma Kalkar

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works focuses on algorithms and artificial intelligence in the public sector.

As artificial intelligence becomes more developed, governments have turned to it to improve the speed and quality of public sector service delivery, among other objectives. Below, we provide a selection of recent literature that examines how the public sector has adopted AI to serve constituents and solve public problems. While the use of AI in government can cut down costs and administrative work, these technologies are often early in development and difficult for organizations to understand and control, with potentially harmful effects as a result. As such, this selected reading explores not only the use of artificial intelligence in governance but also its benefits and its consequences.

Readings are listed in alphabetical order.

Berryhill, Jamie, Kévin Kok Heang, Rob Clogher, and Keegan McBride. “Hello, World: Artificial intelligence and its use in the public sector.” OECD Working Papers on Public Governance no. 36 (2019): https://doi.org/10.1787/726fd39d-en.

This working paper emphasizes the importance of defining AI for the public sector and outlining use cases of AI within governments. It provides a map of 50 countries that have implemented or set in motion the development of AI strategies and highlights where and how these initiatives are cross-cutting, innovative, and dynamic. Additionally, the piece provides policy recommendations governments should consider when exploring public AI strategies to adopt holistic and humanistic approaches.

Kuziemski, Maciej, and Gianluca Misuraca. “AI Governance in the Public Sector: Three Tales from the Frontiers of Automated Decision-Making in Democratic Settings.” Telecommunications Policy 44, no. 6 (2020): 101976. 

Kuziemski and Misuraca explore how the use of artificial intelligence in the public sector can exacerbate existing power imbalances between the public and the government. They consider the European Union’s artificial intelligence “governance and regulatory frameworks” and compare these policies with those of Canada, Finland, and Poland. Drawing on previous scholarship, the authors outline the goals, drivers, barriers, and risks of incorporating artificial intelligence into public services and assess existing regulations against these factors. Ultimately, they find that the “current AI policy debate is heavily skewed towards voluntary standards and self-governance” while minimizing the influence of power dynamics between governments and constituents. 

Misuraca, Gianluca, and Colin van Noordt. “AI Watch, Artificial Intelligence in Public Services: Overview of the Use and Impact of AI in Public Services in the EU.” JRC Science for Policy Report, EUR 30255 (2020).

This study provides “evidence-based scientific support” for the European Commission as it navigates AI regulation via an overview of ways in which European Union member states use AI to enhance their public sector operations. While AI has the potential to positively disrupt existing policies and functionalities, this report finds gaps in how AI gets applied by governments. It suggests the need for further research centered on the humanistic, ethical, and social ramifications of AI use, and for a rigorous risk assessment from a “public-value perspective” when implementing AI technologies. Additionally, efforts must be made to empower all European countries to adopt responsible and coherent AI policies and techniques.

Saldanha, Douglas Morgan Fullin, and Marcela Barbosa da Silva. “Transparency and Accountability of Government Algorithms: The Case of the Brazilian Electronic Voting System.” Cadernos EBAPE.BR 18 (2020): 697–712.

Saldanha and da Silva note that open data and open government revolutions have increased citizen demand for algorithmic transparency. Algorithms are increasingly used by governments to speed up processes and reduce costs, but their black-box systems and lack of explainability allow implicit and explicit bias and discrimination to enter their calculations. The authors conduct a qualitative study of the “practices and characteristics of the transparency and accountability” in the Brazilian e-voting system across seven dimensions: consciousness; access and reparations; accountability; explanation; data origin, privacy and justice; auditing; and validation, precision and tests. They find the Brazilian e-voting system fulfilled the need to inform citizens about the benefits and consequences of data collection and algorithm use but severely lacked in demonstrating accountability and opening algorithm processes to citizen oversight. They put forth policy recommendations to increase the e-voting system’s accountability to Brazilians and strengthen auditing and oversight processes to reduce the current distrust in the system.

Sharma, Gagan Deep, Anshita Yadav, and Ritika Chopra. “Artificial intelligence and effective governance: A review, critique and research agenda.” Sustainable Futures 2 (2020): 100004.

This paper conducts a systematic review of the literature on how AI is used across different branches of government, specifically the healthcare; information, communication, and technology; environment; transportation; policymaking; and economic sectors. Across the 74 papers surveyed, the authors find a gap in the research on selecting and implementing AI technologies, as well as on their monitoring and evaluation. They call on future research to assess the impact of AI pre- and post-adoption in governance, along with the risks and challenges associated with the technology.

Tallerås, Kim, Terje Colbjørnsen, Knut Oterholm, and Håkon Larsen. “Cultural Policies, Social Missions, Algorithms and Discretion: What Should Public Service Institutions Recommend?” Lecture Notes in Computer Science (2020).

Tallerås et al. examine how the use of algorithms by public services, such as public radio and libraries, influences broader society and culture. For instance, to modernize their offerings, Norway’s broadcasting corporation (NRK) has adopted online platforms similar to popular private streaming services. However, NRK’s filtering process has faced “exposure diversity” problems that narrow recommendations to already popular entertainment and move Norway’s cultural offerings towards a singularity. As a public institution, NRK is required to “fulfill […] some cultural policy goals,” raising the question of how public media services can remain relevant in the era of algorithms fed by “individualized digital culture.” Efforts are currently underway to employ recommendation systems that balance cultural diversity with personalized content relevance, engaging individuals while upholding the socio-cultural mission of public media.

Vogl, Thomas, Cathrine Seidelin, Bharath Ganesh, and Jonathan Bright. “Smart Technology and the Emergence of Algorithmic Bureaucracy: Artificial Intelligence in UK Local Authorities.” Public Administration Review 80, no. 6 (2020): 946–961.

Local governments are using “smart technologies” to create more efficient and effective public service delivery. These tools serve a dual purpose: they help the public interact with local authorities, and they streamline the tasks of government officials. To better understand the digitization of local government, the authors conducted surveys, desk research, and in-depth interviews with stakeholders from local British governments to understand reasoning, processes, and experiences within a changing government framework. Vogl et al. found an increase in “algorithmic bureaucracy” at the local level to reduce administrative tasks for government employees, generate feedback loops, and use data to enhance services. While the shift toward digital local government demonstrates initiatives to utilize emerging technology for public good, further research is required to determine which demographics are not involved in the design and implementation of smart technology services and how to identify and include these audiences.

Wirtz, Bernd W., Jan C. Weyerer, and Carolin Geyer. “Artificial intelligence and the public sector—Applications and challenges.” International Journal of Public Administration 42, no. 7 (2019): 596–615.

The authors provide an extensive review of the existing literature on AI uses and challenges in the public sector to identify the gaps in current applications. The developing nature of AI in public service has led to differing definitions of what constitutes AI and what risks and benefits it poses to the public. The authors also note the lack of focus on the downfalls of AI in governance, with studies tending to focus primarily on the positive aspects of the technology. From this qualitative analysis, the researchers highlight ten AI applications: knowledge management, process automation, virtual agents, predictive analytics and data visualization, identity analytics, autonomous systems, recommendation systems, digital assistants, speech analytics, and threat intelligence. They also note four challenge dimensions: technology implementation, laws and regulation, ethics, and society. From these applications and risks, Wirtz et al. provide a “checklist for public managers” to make informed decisions on how to integrate AI into their operations.

Wirtz, Bernd W., Jan C. Weyerer, and Benjamin J. Sturm. “The dark sides of artificial intelligence: An integrated AI governance framework for public administration.” International Journal of Public Administration 43, no. 9 (2020): 818–829.

As AI is increasingly popularized and picked up by governments, Wirtz et al. highlight the lack of research on the challenges and risks—specifically, privacy and security—associated with implementing AI systems in the public sector. After assessing existing literature and uncovering gaps in the main governance frameworks, the authors outline three areas of challenges for public AI: law and regulations, society, and ethics. Finally, they propose an “integrated AI governance framework” that takes into account the risks of AI for a more holistic “big picture” approach to AI in the public sector.

Zuiderwijk, Anneke, Yu-Che Chen, and Fadi Salem. “Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda.” Government Information Quarterly (2021): 101577.

Following a literature review on the risks and possibilities of AI in the public sector, Zuiderwijk, Chen, and Salem design a research agenda centered around the “implications of the use of AI for public governance.” The authors provide eight process recommendations, including: avoiding superficial buzzwords in research; conducting domain- and locality-specific research on AI in governance; shifting from qualitative analysis to diverse research methods; applying private sector “practice-driven research” to public sector study; furthering quantitative research on AI use by governments; creating “explanatory research designs”; sharing data for broader study; and adopting multidisciplinary reference theories. Further, they note the need for scholarship to delve into best practices, risk management, stakeholder communication, multisector use, and impact assessments of AI in the public sector to help decision-makers make informed choices about the introduction, implementation, and oversight of AI.

New York vs Big Tech: Lawmakers Float Data Tax in Privacy Push


GovTech article: “While New York is not the first state to propose data privacy legislation, it is the first to propose a data privacy bill that would implement a tax on big tech companies that benefit from the sale of New Yorkers’ consumer data.

Known as the Data Economy Labor Compensation and Accountability Act, the bill looks to enact a 2 percent tax on annual receipts earned off New York residents’ data. This tax and other rules and regulations aimed at safeguarding citizens’ data will be enforced by a newly created Office of Consumer Data Protection outlined in the bill.

The office would require all data controllers and processors to register annually in order to meet state compliance requirements. Failure to do so, the bill states, would result in fines.

As for the tax, all funds will be put toward improving education and closing the digital divide.

“The revenue from the tax will be put towards digital literacy, workforce redevelopment, STEAM education (science, technology, engineering, arts and mathematics), K-12 education, workforce reskilling and retraining,” said Sen. Andrew Gounardes, D-22.

As for why the bill is being proposed now, Gounardes said, “Every day, big tech companies like Amazon, Apple, Facebook and Google capitalize on the unpaid labor of billions of people to create their products and services through targeted advertising and artificial intelligence.”…(More)”

Selecting the Most Effective Nudge: Evidence from a Large-Scale Experiment on Immunization


NBER Paper by Abhijit Banerjee et al: “We evaluate a large-scale set of interventions to increase demand for immunization in Haryana, India. The policies under consideration include the two most frequently discussed tools—reminders and incentives—as well as an intervention inspired by the networks literature. We cross-randomize whether (a) individuals receive SMS reminders about upcoming vaccination drives; (b) individuals receive incentives for vaccinating their children; (c) influential individuals (information hubs, trusted individuals, or both) are asked to act as “ambassadors” receiving regular reminders to spread the word about immunization in their community. By taking into account different versions (or “dosages”) of each intervention, we obtain 75 unique policy combinations.

We develop a new statistical technique—a smart pooling and pruning procedure—for finding a best policy from a large set, which also determines which policies are effective and the effect of the best policy. We proceed in two steps. First, we use a LASSO technique to collapse the data: we pool dosages of the same treatment if the data cannot reject that they had the same impact, and prune policies deemed ineffective. Second, using the remaining (pooled) policies, we estimate the effect of the best policy, accounting for the winner’s curse. The key outcomes are (i) the number of measles immunizations and (ii) the number of immunizations per dollar spent. The policy that has the largest impact (information hubs, SMS reminders, incentives that increase with each immunization) increases the number of immunizations by 44% relative to the status quo. The most cost-effective policy (information hubs, SMS reminders, no incentives) increases the number of immunizations per dollar by 9.1%….(More)”.
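The two-step pool-then-prune logic described in the abstract can be illustrated with a toy sketch. This is not the authors’ LASSO-based estimator and omits the winner’s-curse correction; the policy names, effect sizes, and z-score thresholds below are invented purely for illustration.

```python
import math

def smart_pool_and_prune(arms, pool_z=1.96, prune_z=1.96):
    """Toy two-step policy search.

    arms: list of (treatment, dosage, effect, se) tuples, where `effect`
    is the estimated impact versus the status quo and `se` its standard
    error. Step 1 pools dosages of the same treatment whose estimates
    are statistically indistinguishable; step 2 prunes policies whose
    effect cannot be distinguished from zero.
    """
    by_treatment = {}
    for t, d, eff, se in arms:
        by_treatment.setdefault(t, []).append((d, eff, se))

    pooled = {}
    for t, group in by_treatment.items():
        # Pool only if every pair of dosages is indistinguishable
        # (z-test on the difference of two independent estimates).
        indistinct = all(
            abs(a[1] - b[1]) < pool_z * math.hypot(a[2], b[2])
            for i, a in enumerate(group) for b in group[i + 1:]
        )
        if indistinct:
            # Inverse-variance weighted average of the pooled dosages.
            w = [1.0 / se ** 2 for _, _, se in group]
            eff = sum(wi * g[1] for wi, g in zip(w, group)) / sum(w)
            pooled[t] = (eff, math.sqrt(1.0 / sum(w)))
        else:
            for d, eff, se in group:
                pooled[f"{t}@{d}"] = (eff, se)

    # Prune policies deemed ineffective, then report the apparent winner.
    kept = {k: v for k, v in pooled.items() if abs(v[0]) > prune_z * v[1]}
    best = max(kept, key=lambda k: kept[k][0]) if kept else None
    return kept, best

# Invented estimates: two SMS-reminder dosages and two incentive schedules.
arms = [
    ("sms", "low", 0.02, 0.05), ("sms", "high", 0.04, 0.05),
    ("incentive", "flat", 0.10, 0.04), ("incentive", "sloped", 0.30, 0.04),
]
kept, best = smart_pool_and_prune(arms)
```

In this made-up example, the two SMS dosages are first pooled (their estimates are indistinguishable) and the pooled arm is then pruned as ineffective, while the sloped incentive emerges as the apparent best policy, mirroring the paper’s logic of collapsing indistinguishable dosages before searching for a winner.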

Tech tools help deepen citizen input in drafting laws abroad and in U.S. states


Gopal Ratnam at RollCall: “Earlier this month, New Jersey’s Department of Education launched a citizen engagement process asking students, teachers and parents to vote on ideas for changes that officials should consider as the state reopens its schools after the pandemic closed classrooms for a year. 

The project, managed by The Governance Lab at New York University’s Tandon School of Engineering, is part of a monthlong nationwide effort using an online survey tool called All Our Ideas to help state education officials prioritize policymaking based on ideas solicited from those who are directly affected by the policies.

Among the thousands of votes cast for various ideas nationwide, teachers and parents backed changes that would teach more problem-solving skills to kids. But students backed a different idea as the most important: making sure that kids have social and emotional skills, as well as “self-awareness and empathy.” 

A government body soliciting ideas from those who are directly affected, via online technology, is one small example of greater citizen participation in governance that advocates hope can grow at both state and federal levels….

Taiwan has taken crowdsourcing legislative ideas to a new height.

Using a variety of open-source engagement and consultation tools that are collectively known as the vTaiwan process, government ministries, elected representatives, experts, civil society groups, businesses and ordinary citizens come together to produce legislation. 

The need for an open consultation process stemmed from the 2014 Sunflower Student Movement, when groups of students and others occupied the Taiwanese parliament to protest the fast-tracking of a trade agreement with China with little public review.  

After the country’s parliament acceded to the demands, the “consensus opinion was that instead of people having to occupy the parliament every time there’s a controversial, emergent issue, it might actually work better if we have a consultation mechanism in the very beginning of the issue rather than at the end,” said Audrey Tang, Taiwan’s digital minister. …

At about the same time that Taiwan’s Sunflower movement was unfolding, in Brazil then-President Dilma Rousseff signed into law the country’s internet bill of rights in April 2014. 

The bill was drafted and refined through a consultative process that included not only legal and technical experts but average citizens as well, said Debora Albu, program coordinator at the Institute for Technology and Society of Rio, also known as ITS. 

The institute was involved in designing the platform for seeking public participation, Albu said. 

“From then onwards, we wanted to continue developing projects that incorporated this idea of collective intelligence built into the development of legislation or public policies,” Albu said….(More)”.

Dialogues about Data: Building trust and unlocking the value of citizens’ health and care data


Nesta Report by Sinead Mac Manus and Alice Clay: “The last decade has seen exponential growth in the amount of data generated, collected and analysed to provide insights across all aspects of industry. Healthcare is no exception. We are increasingly seeing the value of using health and care data to prevent ill health, improve health outcomes for people and provide new insights into disease and treatments.

Bringing together common themes across the existing research, this report sets out two interlinked challenges to building a data-driven health and care system. This is interspersed with best practice examples of the potential of data to improve health and care, as well as cautionary tales of what can happen when this is done badly.

The first challenge we explore is how to increase citizens’ trust and transparency in data sharing. The second challenge is how to unlock the value of health and care data.

We are excited about the role for participatory futures – a set of techniques that systematically engage people to imagine and create more sustainable, inclusive futures – in helping governments and other organisations work with citizens to engage them in debate about their health and care data to build a data-driven health and care system for the benefit of all….(More)”.

New York Temporarily Bans Facial Recognition Technology in Schools


Hunton’s Privacy Blog: “On December 22, 2020, New York Governor Andrew Cuomo signed into law legislation that temporarily bans the use or purchase of facial recognition and other biometric identifying technology in public and private schools until at least July 1, 2022. The legislation also directs the New York Commissioner of Education (the “Commissioner”) to conduct a study on whether this technology is appropriate for use in schools.

In his press statement, Governor Cuomo indicated that the legislation comes after concerns were raised about potential risks to students, including issues surrounding misidentification by the technology as well as safety, security and privacy concerns. “This legislation requires state education policymakers to take a step back, consult with experts and address privacy issues before determining whether any kind of biometric identifying technology can be brought into New York’s schools. The safety and security of our children is vital to every parent, and whether to use this technology is not a decision to be made lightly,” the Governor explained.

Key elements of the legislation include:

  • Defining “facial recognition” as “any tool using an automated or semi-automated process that assists in uniquely identifying or verifying a person by comparing and analyzing patterns based on the person’s face,” and “biometric identifying technology” as “any tool using an automated or semi-automated process that assists in verifying a person’s identity based on a person’s biometric information”;
  • Prohibiting the purchase and use of facial recognition and other biometric identifying technology in all public and private elementary and secondary schools until July 1, 2022, or until the Commissioner authorizes the purchase and use of such technology, whichever occurs later; and
  • Directing the Commissioner, in consultation with New York’s Office of Information Technology, Division of Criminal Justice Services, Education Department’s Chief Privacy Officer and other stakeholders, to conduct a study and make recommendations as to the circumstances in which facial recognition and other biometric identifying technology is appropriate for use in schools and what restrictions and guidelines should be enacted to protect privacy, civil rights and civil liberties interests….(More)”.

Digital Politics in Canada: Promises and Realities


Book edited by Tamara A. Small and Harold J. Jansen: “Digital Politics in Canada addresses a significant gap in the scholarly literature on both media in Canada and Canadian political science. Using a comprehensive, multidisciplinary, historical, and focused analysis of Canadian digital politics, this book covers the full scope of actors in the Canadian political system, including traditional political institutions of the government, elected officials, political parties, and the mass media. At a time when issues of inclusion are central to political debate, this book features timely chapters on Indigenous people, women, and young people, and takes an in-depth look at key issues of online surveillance and internet voting. Ideal for a wide-ranging course on the impact of digital technology on the Canadian political system, this book encourages students to critically engage in discussions about the future of Canadian politics and democracy….(More)”.