“If Everybody’s White, There Can’t Be Any Racial Bias”: The Disappearance of Hispanic Drivers From Traffic Records


Article by Richard A. Webster: “When sheriff’s deputies in Jefferson Parish, Louisiana, pulled over Octavio Lopez for an expired inspection tag in 2018, they wrote on his traffic ticket that he is white. Lopez, who is from Nicaragua, is Hispanic and speaks only Spanish, said his wife.

In fact, of the 167 tickets issued by deputies to drivers with the last name Lopez over a nearly six-year span, not one of the motorists was labeled as Hispanic, according to records provided by the Jefferson Parish clerk of court. The same was true of the 252 tickets issued to people with the last name of Rodriguez, 234 named Martinez, 223 with the last name Hernandez and 189 with the surname Garcia.
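
This kind of surname-based check is easy to reproduce wherever ticket records are machine-readable. As a rough sketch (the file name, column names, and surname list below are assumptions for illustration, not the reporters' actual methodology), a cross-tabulation in Python makes the anomaly visible:

```python
import pandas as pd

# Hypothetical schema: one row per ticket, with the driver's surname
# and the race the deputy recorded on the ticket.
tickets = pd.read_csv("traffic_tickets.csv")  # assumed columns: surname, recorded_race

# Surnames cited in the article as a proxy for likely Hispanic drivers.
surnames = ["LOPEZ", "RODRIGUEZ", "MARTINEZ", "HERNANDEZ", "GARCIA"]

subset = tickets[tickets["surname"].str.upper().isin(surnames)]

# Cross-tabulate how often each surname appears under each recorded race.
# A total absence of "Hispanic" across hundreds of rows is the red flag
# the article describes.
print(pd.crosstab(subset["surname"].str.upper(), subset["recorded_race"]))
```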

This kind of misidentification is widespread — and not without harm. Across America, law enforcement agencies have been accused of targeting Hispanic drivers, failing to collect data on those traffic stops, and covering up potential officer misconduct and aggressive immigration enforcement by identifying people as white on tickets.

“If everybody’s white, there can’t be any racial bias,” Frank Baumgartner, a political science professor at the University of North Carolina at Chapel Hill, told WWNO/WRKF and ProPublica.

Nationally, states have tried to patch this data loophole and tighten controls against racial profiling. In recent years, legislators have passed widely hailed traffic stop data-collection laws in California, Colorado, Illinois, Oregon, Virginia and Washington, D.C. This April, Alabama became the 22nd state to enact similar legislation.

Though Louisiana has had its own data-collection requirement for two decades, it contains a loophole unlike any other state: It exempts law enforcement agencies from collecting and delivering data to the state if they have an anti-racial-profiling policy in place. This has rendered the law essentially worthless, said Josh Parker, a senior staff attorney at the Policing Project, a public safety research nonprofit at the New York University School of Law.

Louisiana State Rep. Royce Duplessis, D-New Orleans, attempted to remove the exemption two years ago, but law enforcement agencies protested. Instead, he was forced to convene a task force to study the issue, which thus far hasn’t produced any results, he said.

“They don’t want the data because they know what it would reveal,” Duplessis said of law enforcement agencies….(More)”.

NativeDATA


About: “NativeDATA is a free online resource that offers practical guidance for Tribes and Native-serving organizations. For this resource, Native-serving organizations include Tribal and urban Indian organizations and Tribal Epidemiology Centers (TECs).

Tribal and urban Indian communities need correct health information (data), so that community leaders can:

  • Watch disease trends
  • Respond to health threats
  • Create useful health policies…

Throughout, this resource offers practical guidance for obtaining and sharing health data in ways that honor Tribal sovereignty, data sovereignty, and public health authority (the authority of a sovereign government to protect the health, safety, and welfare of its citizens). As sovereign nations, Tribes have the power to define how they will use this authority to protect and promote the health of their communities. The federal government recognizes Tribes and Tribal Epidemiology Centers (TECs) as public health authorities under federal law.

Inside you will find expert advice to help you…(More)”.

Evaluation Guidelines for Representative Deliberative Processes


OECD Report: “Evaluations of representative deliberative processes do not happen regularly, not least due to the lack of specific guidance for their evaluation. To respond to this need, together with an expert advisory group, the OECD has developed Evaluation Guidelines for Representative Deliberative Processes. They aim to encourage public authorities, organisers, and evaluators to conduct more comprehensive, objective, and comparable evaluations.

These evaluation guidelines establish minimum standards and criteria for the evaluation of representative deliberative processes as a foundation on which more comprehensive evaluations can be built by adding additional criteria according to specific contexts and needs.

The guidelines suggest that independent evaluations are the most comprehensive and reliable way of evaluating a deliberative process.

For smaller and shorter deliberative processes, evaluation in the form of self-reporting by members and/or organisers of a deliberative process can also contribute to the learning process…(More)”.

Adopting Agile in State and Local Governments


Report by Sukumar Ganapati: “Agile emerged initially as a set of values and principles for software development formalized in 2001 with the Agile Manifesto. For two decades, it helped revolutionize software development. Today, Agile approaches have been adapted to government services beyond software development, offering a new way of thinking and delivering in areas such as project management, policymaking, human resources, and procurement. 

The basics of Agile and associated methods have been covered in previous IBM Center for The Business of Government reports, which provide a good overview of Agile principles, the use of Lean, and the application of user-centered design, as well as insights into the evolution of Agile adoption in the public sector over the last two decades. This new report, Adopting Agile in State and Local Governments, by Sukumar Ganapati of Florida International University, examines Agile adoption among state and local governments, which have increasingly applied Agile methods across a range of applications over the last decade while varying widely in the maturity of their adoption and implementation.

Professor Ganapati identifies three broad phases in the lifecycle of Agile maturity among public agencies: infancy, adolescence, and adulthood. The phases are not clear-cut, with distinct breaks where one ends and the next begins; rather, they form a continuum along which public agencies evolve as they implement Agile. The report highlights the evolution of Agile methods in two states (Connecticut and California) and two cities (New York and Austin). The cases show the rich contextual evolution of Agile and how the methods are applied to streamline enterprise processes with technology and to address social policy problems. The four case studies show different trajectories of adopting Agile in state and local governments, and the strategies for adopting and implementing Agile methods differ broadly across the three lifecycle phases. The case studies offer lessons for enabling strategies to adopt Agile across these three phases…(More)”.

Institutionalizing deliberative mini-publics? Issues of legitimacy and power for randomly selected assemblies in political systems


Paper by Dimitri Courant: “Randomly selected deliberative mini-publics (DMPs) are on the rise globally. However, they remain ad hoc, opening the door to arbitrary manoeuvre and triggering a debate on their future institutionalization. What are the competing proposals aiming at institutionalizing DMPs within political systems? I suggest three ways of thinking about institutionalization: in terms of temporality, of legitimacy and support, and of power and role within a system. First, I analyze the dimension of time and how it affects DMP institutional designs. Second, I argue that because sortition produces ‘weak representatives’ with ‘humility-legitimacy’, mini-publics hardly ever make binding decisions and need to rely on external sources of legitimacy. Third, I identify four institutional models, relying on opposing views of legitimacy and politics: tamed consultation, radical democracy, representative klerocracy and hybrid polyarchy. They differ in whether mini-publics are interpreted as tools for legitimizing elected officials, for giving power to the people, or as a means to suppress voting…(More)”.

The “Onion Model”: A Layered Approach to Documenting How the Third Wave of Open Data Can Provide Societal Value


Blog post by Andrew Zahuranec, Andrew Young and Stefaan Verhulst: “There’s a lot that goes into data-driven decision-making. Behind the datasets, platforms, and analysts is a complex series of processes that inform what kinds of insight data can produce and what kinds of ends it can achieve. These individual processes can be hard to understand when viewed together, but by separating out the stages, we can not only track how data leads to decisions but also promote better and more impactful data management.

Earlier this year, The Open Data Policy Lab published the Third Wave of Open Data Toolkit to explore the elements of data re-use. At the center of this toolkit was an abstraction that we call the Open Data Framework. Divided into individual, onion-like layers, the framework shows all the processes that go into capitalizing on data in the third wave, starting with the creation of a dataset through data collaboration, creating insights, and using those insights to produce value.
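
For readers who want a concrete handle on the abstraction, the framework can be rendered as an ordered sequence of layers. The layer names below are paraphrased from the toolkit's description above; this is an illustrative sketch, not an artifact of the toolkit itself:

```python
# A minimal sketch of the "onion model": ordered layers from the innermost
# (creating a dataset) to the outermost (societal value). Layer names are
# paraphrased from the Open Data Framework described above.
OPEN_DATA_FRAMEWORK = [
    ("dataset",       "Data is created, collected, and stewarded"),
    ("collaboration", "Data is made available for re-use by other parties"),
    ("insight",       "Analysis turns shared data into insights"),
    ("value",         "Insights inform decisions that produce societal value"),
]

for depth, (layer, description) in enumerate(OPEN_DATA_FRAMEWORK, start=1):
    print(f"Layer {depth}: {layer} - {description}")
```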

This blog post reiterates what’s included in each layer of this data “onion model” and demonstrates how organizations can create societal value by making their data available for re-use by other parties….(More)”.

The Myth of the Laboratories of Democracy


Paper by Charles Tyler and Heather Gerken: “A classic constitutional parable teaches that our federal system of government allows the American states to function as “laboratories of democracy.” This tale has been passed down from generation to generation, often to justify constitutional protections for state autonomy from the federal government. But scholars have failed to explain how state governments manage to overcome numerous impediments to experimentation, including resource scarcity, free-rider problems, and misaligned incentives.

This Article maintains that the laboratories account is missing a proper appreciation for the coordinated networks of third-party organizations (such as interest groups, activists, and funders) that often fuel policy innovation. These groups are the real laboratories of democracy today, as they perform the lion’s share of tasks necessary to enact new policies; they create incentives that motivate elected officials to support their preferred policies; and they mobilize the power of the federal government to change the landscape against which state experimentation occurs. If our federal system of government seeks to encourage policy experimentation, this insight has several implications for legal doctrine. At a high level of generality, courts should endeavor to create ground rules for regulating competition between political networks, rather than continuing futile efforts to protect state autonomy. The Article concludes by sketching the outlines of this approach in several areas of legal doctrine, including federal preemption of state law, conditional spending, and the anti-commandeering principle….(More)”

Selected Readings on the Use of Artificial Intelligence in the Public Sector


By Kateryna Gazaryan and Uma Kalkar

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works focuses on algorithms and artificial intelligence in the public sector.

As Artificial Intelligence becomes more developed, governments have turned to it to improve the speed and quality of public sector service delivery, among other objectives. Below, we provide a selection of recent literature that examines how the public sector has adopted AI to serve constituents and solve public problems. While the use of AI in government can cut costs and administrative work, these technologies are often early in development and difficult for organizations to understand and control, with potentially harmful effects as a result. As such, this selected reading explores not only the use of artificial intelligence in governance but also its benefits and its consequences.

Readings are listed in alphabetical order.

Berryhill, Jamie, Kévin Kok Heang, Rob Clogher, and Keegan McBride. “Hello, World: Artificial intelligence and its use in the public sector.” OECD Working Papers on Public Governance no. 36 (2019). https://doi.org/10.1787/726fd39d-en.

This working paper emphasizes the importance of defining AI for the public sector and outlining use cases of AI within governments. It provides a map of 50 countries that have implemented or set in motion the development of AI strategies and highlights where and how these initiatives are cross-cutting, innovative, and dynamic. Additionally, the piece provides policy recommendations governments should consider when exploring public AI strategies to adopt holistic and humanistic approaches.

Kuziemski, Maciej, and Gianluca Misuraca. “AI Governance in the Public Sector: Three Tales from the Frontiers of Automated Decision-Making in Democratic Settings.” Telecommunications Policy 44, no. 6 (2020): 101976. 

Kuziemski and Misuraca explore how the use of artificial intelligence in the public sector can exacerbate existing power imbalances between the public and the government. They consider the European Union’s artificial intelligence “governance and regulatory frameworks” and compare these policies with those of Canada, Finland, and Poland. Drawing on previous scholarship, the authors outline the goals, drivers, barriers, and risks of incorporating artificial intelligence into public services and assess existing regulations against these factors. Ultimately, they find that the “current AI policy debate is heavily skewed towards voluntary standards and self-governance” while minimizing the influence of power dynamics between governments and constituents. 

Misuraca, Gianluca, and Colin van Noordt. “AI Watch, Artificial Intelligence in Public Services: Overview of the Use and Impact of AI in Public Services in the EU.” EUR 30255 (2020).

This study provides “evidence-based scientific support” for the European Commission as it navigates AI regulation via an overview of ways in which European Union member-states use AI to enhance their public sector operations. While AI has the potential to positively disrupt existing policies and functionalities, this report finds gaps in how AI gets applied by governments. It suggests the need for further research centered on the humanistic, ethical, and social ramification of AI use and a rigorous risk assessment from a “public-value perspective” when implementing AI technologies. Additionally, efforts must be made to empower all European countries to adopt responsible and coherent AI policies and techniques.

Saldanha, Douglas Morgan Fullin, and Marcela Barbosa da Silva. “Transparency and Accountability of Government Algorithms: The Case of the Brazilian Electronic Voting System.” Cadernos EBAPE.BR 18 (2020): 697–712.

Saldanha and da Silva note that open data and open government revolutions have increased citizen demand for algorithmic transparency. Algorithms are increasingly used by governments to speed up processes and reduce costs, but their black-box systems and lack of explainability allow implicit and explicit bias and discrimination to enter their calculations. The authors conduct a qualitative study of the “practices and characteristics of the transparency and accountability” in the Brazilian e-voting system across seven dimensions: consciousness; access and reparations; accountability; explanation; data origin, privacy and justice; auditing; and validation, precision and tests. They find the Brazilian e-voting system fulfilled the need to inform citizens about the benefits and consequences of data collection and algorithm use but severely lacked in demonstrating accountability and opening algorithm processes for citizen oversight. They put forth policy recommendations to increase the e-voting system’s accountability to Brazilians and strengthen auditing and oversight processes to reduce the current distrust in the system.

Sharma, Gagan Deep, Anshita Yadav, and Ritika Chopra. “Artificial intelligence and effective governance: A review, critique and research agenda.” Sustainable Futures 2 (2020): 100004.

This paper conducts a systematic review of the literature on how AI is used across different sectors of government: healthcare; information, communication, and technology (ICT); environment; transportation; policymaking; and economics. Across the 74 papers surveyed, the authors find a gap in the research on selecting and implementing AI technologies, as well as on their monitoring and evaluation. They call on future research to assess the impact of AI pre- and post-adoption in governance, along with the risks and challenges associated with the technology.

Tallerås, Kim, Terje Colbjørnsen, Knut Oterholm, and Håkon Larsen. “Cultural Policies, Social Missions, Algorithms and Discretion: What Should Public Service Institutions Recommend?” In Lecture Notes in Computer Science (2020).

Tallerås et al. examine how the use of algorithms by public services, such as public radio and libraries, influence broader society and culture. For instance, to modernize their offerings, Norway’s broadcasting corporation (NRK) has adopted online platforms similar to popular private streaming services. However, NRK’s filtering process has faced “exposure diversity” problems that narrow recommendations to already popular entertainment and move Norway’s cultural offerings towards a singularity. As a public institution, NRK is required to “fulfill […] some cultural policy goals,” raising the question of how public media services can remain relevant in the era of algorithms fed by “individualized digital culture.” Efforts are currently underway to employ recommendation systems that balance cultural diversity with personalized content relevance that engage individuals and uphold the socio-cultural mission of public media.
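
The balance Tallerås et al. describe is commonly implemented in recommender systems as a diversity-aware re-ranking step. Below is a minimal sketch of one standard technique, greedy maximal-marginal-relevance (MMR) re-ranking; the scoring inputs and the `lam` trade-off parameter are illustrative assumptions, not a description of NRK's actual system:

```python
from typing import Callable

def rerank_mmr(
    candidates: list[str],
    relevance: dict[str, float],              # personalized relevance scores
    similarity: Callable[[str, str], float],  # pairwise item similarity in [0, 1]
    k: int = 10,
    lam: float = 0.7,  # lam=1.0 -> pure relevance; lower values favor diversity
) -> list[str]:
    """Greedy MMR re-ranking: trade off personalized relevance against
    redundancy with items already recommended."""
    selected: list[str] = []
    pool = set(candidates)
    while pool and len(selected) < k:
        def mmr_score(item: str) -> float:
            # Penalize items too similar to what is already on the list.
            redundancy = max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance.get(item, 0.0) - (1 - lam) * redundancy
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return selected
```

Lowering `lam` widens exposure beyond already-popular items, which is one way a public broadcaster could encode its cultural mission directly into the ranking objective.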

Vogl, Thomas, Cathrine Seidelin, Bharath Ganesh, and Jonathan Bright. “Smart Technology and the Emergence of Algorithmic Bureaucracy: Artificial Intelligence in UK Local Authorities.” Public Administration Review 80, no. 6 (2020): 946–961.

Local governments are using “smart technologies” to create more efficient and effective public service delivery. These tools are twofold: not only do they help the public interact with local authorities, they also streamline the tasks of government officials. To better understand the digitization of local government, the authors conducted surveys, desk research, and in-depth interviews with stakeholders from local British governments to understand reasoning, processes, and experiences within a changing government framework. Vogl et al. found an increase in “algorithmic bureaucracy” at the local level to reduce administrative tasks for government employees, generate feedback loops, and use data to enhance services. While the shift toward digital local government demonstrates initiatives to utilize emerging technology for public good, further research is required to determine which demographics are not involved in the design and implementation of smart technology services and how to identify and include these audiences.

Wirtz, Bernd W., Jan C. Weyerer, and Carolin Geyer. “Artificial intelligence and the public sector—Applications and challenges.” International Journal of Public Administration 42, no. 7 (2019): 596–615.

The authors provide an extensive review of the existing literature on AI uses and challenges in the public sector to identify the gaps in current applications. The developing nature of AI in public service has led to differing definitions of what constitutes AI and of the risks and benefits it poses to the public. The authors also note the lack of focus on the downfalls of AI in governance, with studies tending to focus primarily on the technology’s positive aspects. From this qualitative analysis, the researchers highlight ten AI applications: knowledge management, process automation, virtual agents, predictive analytics and data visualization, identity analytics, autonomous systems, recommendation systems, digital assistants, speech analytics, and threat intelligence. They also note four challenge dimensions: technology implementation, laws and regulation, ethics, and society. From these applications and risks, Wirtz et al. provide a “checklist for public managers” to make informed decisions on how to integrate AI into their operations.

Wirtz, Bernd W., Jan C. Weyerer, and Benjamin J. Sturm. “The dark sides of artificial intelligence: An integrated AI governance framework for public administration.” International Journal of Public Administration 43, no. 9 (2020): 818–829.

As AI is increasingly popularized and picked up by governments, Wirtz et al. highlight the lack of research on the challenges and risks—specifically, privacy and security—associated with implementing AI systems in the public sector. After assessing existing literature and uncovering gaps in the main governance frameworks, the authors outline the three areas of challenges of public AI: law and regulations, society, and ethics. Last, they propose an “integrated AI governance framework” that takes into account the risks of AI for a more holistic “big picture” approach to AI in the public sector.

Zuiderwijk, Anneke, Yu-Che Chen, and Fadi Salem. “Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda.” Government Information Quarterly (2021): 101577.

Following a literature review on the risks and possibilities of AI in the public sector, Zuiderwijk, Chen, and Salem design a research agenda centered around the “implications of the use of AI for public governance.” The authors provide eight process recommendations, including: avoiding superficial buzzwords in research; conducting domain- and locality-specific research on AI in governance; shifting from qualitative analysis to diverse research methods; applying private sector “practice-driven research” to public sector study; furthering quantitative research on AI use by governments; creating “explanatory research designs”; sharing data for broader study; and adopting multidisciplinary reference theories. Further, they note the need for scholarship to delve into best practices, risk management, stakeholder communication, multisector use, and impact assessments of AI in the public sector to help decision-makers make informed decisions on the introduction, implementation, and oversight of AI in the public sector.

New York vs Big Tech: Lawmakers Float Data Tax in Privacy Push


GovTech article: “While New York is not the first state to propose data privacy legislation, it is the first to propose a data privacy bill that would implement a tax on big tech companies that benefit from the sale of New Yorkers’ consumer data.

Known as the Data Economy Labor Compensation and Accountability Act, the bill looks to enact a 2 percent tax on annual receipts earned off New York residents’ data. This tax and other rules and regulations aimed at safeguarding citizens’ data will be enforced by a newly created Office of Consumer Data Protection outlined in the bill.
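
The mechanics of the proposed levy are straightforward: 2 percent of the annual receipts a company earns from New York residents' data. A toy calculation (the dollar figure below is invented for illustration) shows the scale:

```python
DATA_TAX_RATE = 0.02  # 2 percent, per the proposed Act

def ny_data_tax(annual_receipts_from_ny_data: float) -> float:
    """Tax owed under the proposed 2% levy on receipts earned from
    New York residents' data (illustrative only)."""
    return DATA_TAX_RATE * annual_receipts_from_ny_data

# Example: a platform earning $500M annually off New Yorkers' data
# would owe $10M toward digital literacy and education programs.
print(f"${ny_data_tax(500_000_000):,.0f}")  # -> $10,000,000
```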

The office would require all data controllers and processors to register annually in order to meet state compliance requirements. Failure to do so, the bill states, would result in fines.

As for the tax, all funds will be put toward improving education and closing the digital divide.

“The revenue from the tax will be put towards digital literacy, workforce redevelopment, STEAM education (science, technology, engineering, arts and mathematics), K-12 education, workforce reskilling and retraining,” said Sen. Andrew Gounardes, D-22.

As for why the bill is being proposed now, Gounardes said, “Every day, big tech companies like Amazon, Apple, Facebook and Google capitalize on the unpaid labor of billions of people to create their products and services through targeted advertising and artificial intelligence.”…(More)”

Selecting the Most Effective Nudge: Evidence from a Large-Scale Experiment on Immunization


NBER Paper by Abhijit Banerjee et al: “We evaluate a large-scale set of interventions to increase demand for immunization in Haryana, India. The policies under consideration include the two most frequently discussed tools—reminders and incentives—as well as an intervention inspired by the networks literature. We cross-randomize whether (a) individuals receive SMS reminders about upcoming vaccination drives; (b) individuals receive incentives for vaccinating their children; (c) influential individuals (information hubs, trusted individuals, or both) are asked to act as “ambassadors” receiving regular reminders to spread the word about immunization in their community. By taking into account different versions (or “dosages”) of each intervention, we obtain 75 unique policy combinations.
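
The figure of 75 policy combinations follows from crossing the dosage levels of the three arms. One factorization consistent with the design described above is five incentive arms, three SMS arms, and five ambassador arms; the level labels in this sketch are assumptions for illustration, not the paper's exact terminology:

```python
from itertools import product

# Illustrative dosage levels for each cross-randomized arm (assumed labels).
incentives = ["none", "low-flat", "low-slope", "high-flat", "high-slope"]
sms_reminders = ["none", "33% coverage", "66% coverage"]
ambassadors = ["none", "random", "information hub", "trusted", "trusted hub"]

policies = list(product(incentives, sms_reminders, ambassadors))
print(len(policies))  # 5 * 3 * 5 = 75 unique policy combinations
```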

We develop a new statistical technique—a smart pooling and pruning procedure—for finding a best policy from a large set, which also determines which policies are effective and the effect of the best policy. We proceed in two steps. First, we use a LASSO technique to collapse the data: we pool dosages of the same treatment if the data cannot reject that they had the same impact, and prune policies deemed ineffective. Second, using the remaining (pooled) policies, we estimate the effect of the best policy, accounting for the winner’s curse. The key outcomes are (i) the number of measles immunizations and (ii) the number of immunizations per dollar spent. The policy that has the largest impact (information hubs, SMS reminders, incentives that increase with each immunization) increases the number of immunizations by 44% relative to the status quo. The most cost-effective policy (information hubs, SMS reminders, no incentives) increases the number of immunizations per dollar by 9.1%….(More)”.
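
The two-step procedure lends itself to a compact sketch. The version below is a heavily simplified illustration on simulated data, not the authors' estimator: it runs scikit-learn's LASSO on policy-cell dummies to prune ineffective cells, then naively reports the best surviving cell (the paper's actual method also pools indistinguishable dosages and corrects the final estimate for the winner's curse):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Simulated data: y = immunizations per village, cell = policy-cell index 0..74.
rng = np.random.default_rng(0)
n, n_cells = 3000, 75
cell = rng.integers(0, n_cells, size=n)
true_effects = rng.choice([0.0, 0.0, 0.3, 0.6], size=n_cells)  # many null cells
y = true_effects[cell] + rng.normal(size=n)

# Step 1 (pruning, simplified): LASSO on policy-cell dummies. Cells whose
# coefficients shrink to zero are deemed ineffective; the paper additionally
# pools dosages of the same treatment with indistinguishable effects.
X = np.eye(n_cells)[cell]
lasso = LassoCV(cv=5).fit(X, y)
kept = np.flatnonzero(lasso.coef_ != 0)

# Step 2 (simplified): estimate effects of surviving cells by cell means.
# NOTE: picking the max overstates the winner's effect (winner's curse);
# the paper's procedure explicitly corrects for this.
means = np.array([y[cell == c].mean() for c in kept])
best = kept[means.argmax()]
print(f"{len(kept)} policies survive pruning; best cell = {best}, "
      f"naive effect = {means.max():.2f}")
```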