How volunteer observers can help protect biodiversity


The Economist: “Ecology lends itself to being helped along by the keen layperson perhaps more than any other science. For decades, birdwatchers have recorded their sightings and sent them to organisations like Britain’s Royal Society for the Protection of Birds, or the Audubon Society in America, contributing precious data about population size, trends, behaviour and migration. These days, any smartphone connected to the internet can be pointed at a plant to identify a species and add a record to a regional data set.

Social-media platforms have further transformed things, adding big data to weekend ecology. In 2002, the Cornell Lab of Ornithology in New York created eBird, a free app available in more than 30 languages that lets twitchers upload and share pictures and recordings of birds, labelled by time, location and other criteria. More than 100m sightings are now uploaded annually, and the number is growing by 20% each year. In May the group marked its billionth observation. The Cornell group also runs an audio library with 1m bird calls, and the Merlin app, which uses eBird data to identify species from pictures and descriptions….(More)”.

Sandwich Strategy


Article by the Accountability Research Center: “The “sandwich strategy” describes an interactive process in which reformers in government encourage citizen action from below, driving virtuous circles of mutual empowerment between pro-accountability actors in both state and society.

The sandwich strategy relies on mutually-reinforcing interaction between pro-reform actors in both state and society, not just initiatives from one or the other arena. The hypothesis is that when reformers in government tangibly reduce the risks/costs of collective action, that process can bolster state-society pro-reform coalitions that collaborate for change. While this process makes intuitive sense, it can follow diverse pathways and encounter many roadblocks. The dynamics, strengths and limitations of sandwich strategies have not been documented and analyzed systematically. The figure below shows a possible pathway of convergence and conflict between actors for and against change in both state and society….(More)”.

Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade


Report by Pew Research Center: “Artificial intelligence systems “understand” and shape a lot of what happens in people’s lives. AI applications “speak” to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk.

They help people drive from point A to point B and update traffic information to shorten travel times. They are the operating system of driverless vehicles. They sift applications to make recommendations about job candidates. They determine the material that is offered up in people’s newsfeeds and video choices.

They recognize people’s faces, translate languages and suggest how to complete people’s sentences or search queries. They can “read” people’s emotions. They beat them at sophisticated games. They write news stories, paint in the style of Vincent van Gogh and create music that sounds quite like the Beatles and Bach.

Corporations and governments are charging ever more expansively into AI development. Increasingly, nonprogrammers can set up off-the-shelf, pre-built AI tools as they prefer.

As this unfolds, a number of experts and advocates around the world have become worried about the long-term impact and implications of AI applications. They have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will. Dozens of convenings and study groups have issued papers proposing what the tenets of ethical AI design should be, and government working teams have tried to address these issues. In light of this, Pew Research Center and Elon University’s Imagining the Internet Center asked experts where they thought efforts aimed at creating ethical artificial intelligence would stand in the year 2030….(More)”

Crisis Innovation Policy from World War II to COVID-19


Paper by Daniel P. Gross & Bhaven N. Sampat: “Innovation policy can be a crucial component of governments’ responses to crises. Because speed is a paramount objective, crisis innovation may also require different policy tools than innovation policy in non-crisis times, raising distinct questions and tradeoffs. In this paper, we survey the U.S. policy response to two crises where innovation was crucial to a resolution: World War II and the COVID-19 pandemic. After providing an overview of the main elements of each of these efforts, we discuss how they compare, and to what degree their differences reflect the nature of the central innovation policy problems and the maturity of the U.S. innovation system. We then explore four key tradeoffs for crisis innovation policy—top-down vs. bottom-up priority setting, concentrated vs. distributed funding, patent policy, and managing disruptions to the innovation system—and provide a logic for policy choices. Finally, we describe the longer-run impacts of the World War II effort and use these lessons to speculate on the potential long-run effects of the COVID-19 crisis on innovation policy and the innovation system….(More)”.

Citizens ‘on mute’ in digital public service delivery


Blog by Sarah Giest at Data and Policy: “Various countries are digitalizing their welfare system in the larger context of austerity considerations and fraud detection goals, but these changes are increasingly under scrutiny. In short, digitalization of the welfare system means that with the help of mathematical models, data and/or the combination of different administrative datasets, algorithms issue a decision on, for example, an application for social benefits (Dencik and Kaun 2020).

Several examples exist where such systems have led to unfair treatment of welfare recipients. In Europe, the Dutch SyRI system has been banned by a court due to human rights violations in its profiling of welfare recipients, and in the UK errors in automated processes have caused financial hardship among citizens. In the United States and Canada, automated systems falsely underpaid or denied benefits. A recent UN report (2019) even warns that countries are ‘stumbling zombie-like into a digital welfare dystopia’. Further, studies raise alarm that this process of digitalization not only creates excessive information asymmetry between government and citizens, but also disadvantages certain groups more than others.

A closer look at the Dutch Childcare Allowance case highlights this. In this example, low-income parents were regarded as fraudsters by the Tax Authorities if they had filled out any documents incorrectly. An automated, algorithm-based procedure then also singled out dual-nationality families. The victims lost their allowance without being given any reasons. Even worse, benefits already received were reclaimed. This led to individual hardship, where financial troubles and being categorized as fraudsters by the government set off a chain of events for citizens, from unpaid healthcare insurance and the inability to visit a doctor to job loss, potential home loss and mental health concerns (Volkskrant 2020)….(More)”.

Selected Readings on the Use of Artificial Intelligence in the Public Sector


By Kateryna Gazaryan and Uma Kalkar

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works focuses on algorithms and artificial intelligence in the public sector.

As artificial intelligence becomes more developed, governments have turned to it to improve the speed and quality of public sector service delivery, among other objectives. Below, we provide a selection of recent literature that examines how the public sector has adopted AI to serve constituents and solve public problems. While the use of AI in government can cut down costs and administrative work, these technologies are often early in their development and difficult for organizations to understand and control, with potentially harmful effects as a result. As such, this selected reading explores not only the use of artificial intelligence in governance but also its benefits and its consequences.

Readings are listed in alphabetical order.

Berryhill, Jamie, Kévin Kok Heang, Rob Clogher, and Keegan McBride. “Hello, World: Artificial Intelligence and Its Use in the Public Sector.” OECD Working Papers on Public Governance no. 36 (2019). https://doi.org/10.1787/726fd39d-en.

This working paper emphasizes the importance of defining AI for the public sector and outlining use cases of AI within governments. It provides a map of 50 countries that have implemented or set in motion the development of AI strategies and highlights where and how these initiatives are cross-cutting, innovative, and dynamic. Additionally, the piece provides policy recommendations governments should consider when exploring public AI strategies to adopt holistic and humanistic approaches.

Kuziemski, Maciej, and Gianluca Misuraca. “AI Governance in the Public Sector: Three Tales from the Frontiers of Automated Decision-Making in Democratic Settings.” Telecommunications Policy 44, no. 6 (2020): 101976. 

Kuziemski and Misuraca explore how the use of artificial intelligence in the public sector can exacerbate existing power imbalances between the public and the government. They consider the European Union’s artificial intelligence “governance and regulatory frameworks” and compare these policies with those of Canada, Finland, and Poland. Drawing on previous scholarship, the authors outline the goals, drivers, barriers, and risks of incorporating artificial intelligence into public services and assess existing regulations against these factors. Ultimately, they find that the “current AI policy debate is heavily skewed towards voluntary standards and self-governance” while minimizing the influence of power dynamics between governments and constituents. 

Misuraca, Gianluca, and Colin van Noordt. “AI Watch, Artificial Intelligence in Public Services: Overview of the Use and Impact of AI in Public Services in the EU.” JRC Science for Policy Report, EUR 30255 (2020).

This study provides “evidence-based scientific support” for the European Commission as it navigates AI regulation via an overview of ways in which European Union member states use AI to enhance their public sector operations. While AI has the potential to positively disrupt existing policies and functionalities, this report finds gaps in how AI gets applied by governments. It suggests the need for further research centered on the humanistic, ethical, and social ramifications of AI use and for a rigorous risk assessment from a “public-value perspective” when implementing AI technologies. Additionally, efforts must be made to empower all European countries to adopt responsible and coherent AI policies and techniques.

Saldanha, Douglas Morgan Fullin, and Marcela Barbosa da Silva. “Transparency and Accountability of Government Algorithms: The Case of the Brazilian Electronic Voting System.” Cadernos EBAPE.BR 18 (2020): 697–712.

Saldanha and da Silva note that open data and open government revolutions have increased citizen demand for algorithmic transparency. Algorithms are increasingly used by governments to speed up processes and reduce costs, but their black-box systems and lack of explainability allow implicit and explicit bias and discrimination to enter their calculations. The authors conduct a qualitative study of the “practices and characteristics of the transparency and accountability” in the Brazilian e-voting system across seven dimensions: consciousness; access and reparations; accountability; explanation; data origin, privacy and justice; auditing; and validation, precision and tests. They find the Brazilian e-voting system fulfilled the need to inform citizens about the benefits and consequences of data collection and algorithm use but severely lacked in demonstrating accountability and opening algorithm processes to citizen oversight. They put forth policy recommendations to increase the e-voting system’s accountability to Brazilians and strengthen auditing and oversight processes to reduce the current distrust in the system.

Sharma, Gagan Deep, Anshita Yadav, and Ritika Chopra. “Artificial Intelligence and Effective Governance: A Review, Critique and Research Agenda.” Sustainable Futures 2 (2020): 100004.

This paper conducts a systematic review of the literature on how AI is used across different branches of government, specifically the healthcare; information, communication, and technology; environment; transportation; policymaking; and economic sectors. Across the 74 papers surveyed, the authors find a gap in the research on selecting and implementing AI technologies, as well as on their monitoring and evaluation. They call for future research to assess the impact of AI pre- and post-adoption in governance, along with the risks and challenges associated with the technology.

Tallerås, Kim, Terje Colbjørnsen, Knut Oterholm, and Håkon Larsen. “Cultural Policies, Social Missions, Algorithms and Discretion: What Should Public Service Institutions Recommend?” In Lecture Notes in Computer Science (2020).

Tallerås et al. examine how the use of algorithms by public services, such as public radio and libraries, influences broader society and culture. For instance, to modernize its offerings, Norway’s broadcasting corporation (NRK) has adopted online platforms similar to popular private streaming services. However, NRK’s filtering process has faced “exposure diversity” problems that narrow recommendations to already popular entertainment and move Norway’s cultural offerings towards a singularity. As a public institution, NRK is required to “fulfill […] some cultural policy goals,” raising the question of how public media services can remain relevant in the era of algorithms fed by “individualized digital culture.” Efforts are currently underway to employ recommendation systems that balance cultural diversity with personalized content relevance, engaging individuals while upholding the socio-cultural mission of public media.

Vogl, Thomas, Cathrine Seidelin, Bharath Ganesh, and Jonathan Bright. “Smart Technology and the Emergence of Algorithmic Bureaucracy: Artificial Intelligence in UK Local Authorities.” Public Administration Review 80, no. 6 (2020): 946–961.

Local governments are using “smart technologies” to create more efficient and effective public service delivery. These tools are twofold: not only do they help the public interact with local authorities, they also streamline the tasks of government officials. To better understand the digitization of local government, the authors conducted surveys, desk research, and in-depth interviews with stakeholders from local British governments to understand reasoning, processes, and experiences within a changing government framework. Vogl et al. found an increase in “algorithmic bureaucracy” at the local level to reduce administrative tasks for government employees, generate feedback loops, and use data to enhance services. While the shift toward digital local government demonstrates initiatives to utilize emerging technology for public good, further research is required to determine which demographics are not involved in the design and implementation of smart technology services and how to identify and include these audiences.

Wirtz, Bernd W., Jan C. Weyerer, and Carolin Geyer. “Artificial Intelligence and the Public Sector—Applications and Challenges.” International Journal of Public Administration 42, no. 7 (2019): 596–615.

The authors provide an extensive review of the existing literature on AI uses and challenges in the public sector to identify gaps in current applications. The developing nature of AI in public service has led to differing definitions of what constitutes AI and of the risks and benefits it poses to the public. The authors also note the lack of focus on the downsides of AI in governance, with studies tending to focus primarily on the positive aspects of the technology. From this qualitative analysis, the researchers highlight ten AI applications: knowledge management, process automation, virtual agents, predictive analytics and data visualization, identity analytics, autonomous systems, recommendation systems, digital assistants, speech analytics, and threat intelligence. They also identify four challenge dimensions: technology implementation, laws and regulation, ethics, and society. From these applications and risks, Wirtz et al. provide a “checklist for public managers” to support informed decisions on how to integrate AI into their operations.

Wirtz, Bernd W., Jan C. Weyerer, and Benjamin J. Sturm. “The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration.” International Journal of Public Administration 43, no. 9 (2020): 818–829.

As AI is increasingly popularized and picked up by governments, Wirtz et al. highlight the lack of research on the challenges and risks—specifically, privacy and security—associated with implementing AI systems in the public sector. After assessing existing literature and uncovering gaps in the main governance frameworks, the authors outline the three areas of challenges of public AI: law and regulations, society, and ethics. Last, they propose an “integrated AI governance framework” that takes into account the risks of AI for a more holistic “big picture” approach to AI in the public sector.

Zuiderwijk, Anneke, Yu-Che Chen, and Fadi Salem. “Implications of the Use of Artificial Intelligence in Public Governance: A Systematic Literature Review and a Research Agenda.” Government Information Quarterly (2021): 101577.

Following a literature review on the risks and possibilities of AI in the public sector, Zuiderwijk, Chen, and Salem design a research agenda centered around the “implications of the use of AI for public governance.” The authors provide eight process recommendations, including: avoiding superficial buzzwords in research; conducting domain- and locality-specific research on AI in governance; shifting from qualitative analysis to diverse research methods; applying private sector “practice-driven research” to public sector study; furthering quantitative research on AI use by governments; creating “explanatory research designs”; sharing data for broader study; and adopting multidisciplinary reference theories. Further, they note the need for scholarship to delve into best practices, risk management, stakeholder communication, multisector use, and impact assessments of AI in the public sector to help decision-makers make informed decisions on the introduction, implementation, and oversight of AI in the public sector.

What Data About You Can the Government Get From Big Tech?


Jack Nicas at the New York Times: “The Justice Department, starting in the early days of the Trump administration, secretly sought data from some of the biggest tech companies about journalists, Democratic lawmakers and White House officials as part of wide-ranging investigations into leaks and other matters, The New York Times reported last week.

The revelations, which put the companies in the middle of a clash over the Trump administration’s efforts to find the sources of news coverage, raised questions about what sorts of data tech companies collect on their users, and how much of it is accessible to law enforcement authorities.

Here’s a rundown:

What kinds of data do the companies have?

All sorts. Beyond basic data like users’ names, addresses and contact information, tech companies like Google, Apple, Microsoft and Facebook also often have access to the contents of their users’ emails, text messages, call logs, photos, videos, documents, contact lists and calendars.

Is all of that data available to law enforcement?

Most of it is. But which data law enforcement can get depends on the sort of request it makes.

Perhaps the most common and basic request is a subpoena. U.S. government agencies and prosecutors can often issue subpoenas without approval from a judge, and lawyers can issue them as part of open court cases. Subpoenas are often used to cast a wide net for basic information that can help build a case and provide evidence needed to issue more powerful requests….(More)”.

Digitalization as a common good. Contribution to an inclusive recovery


Essay by Julia Pomares, Andrés Ortega & María Belén Abdala: “…The pandemic has accelerated the urgency of a new social contract for this era at national, regional, and global levels, and such a pact clearly requires a digital dimension. The Spanish government, for example, proposes that by 2025, 100 megabits per second should be achieved for 100% of the population. A company like Telefónica, for its part, proposes a “Digital Deal to build back better our societies and economies” to achieve a “fair and inclusive digital transition,” both for Spain and Latin America.

The pandemic, and the ways of coping with and overcoming it, has also underscored and aggravated digital and connectivity gaps and divides of many kinds: between countries and regions of the world, between rural and urban areas, between social groups (including income- and gender-related gaps), and between companies large and small. These divides need to be addressed and bridged in the new social digital contracts, for the combination of digital divides and the pandemic amplifies social disparities and inequalities in various spheres of life. Digitalization can contribute to enlarging those divides, but also to overcoming them.

Common good

In 2016, the UN, through its Human Rights Council and General Assembly, recognized access to the internet as a fundamental human right, from which all other human rights can also be defended. In 2021, the Italian presidency of the G20 set universal access to the internet as a goal of the group.

We use the concept of common good in a non-legal but economic sense, following Nobel laureate Elinor Ostrom, who refers to the nature of use and not of ownership. In line with Ostrom, digitalization and connectivity as a common good respond to three characteristics:

  • It is non-rivalrous: Its consumption by anyone does not reduce the amount available to others. (For digitalization and connectivity this holds only to a certain extent, since both rely on huge but finite storage and processing centers and on network capacity, in both the access and backbone networks. It is the definition of a service, where a distinction has to be made between the content transmitted and the medium used.)
  • It is non-excludable: It is almost impossible to prevent anyone from consuming it.
  • It is available, more or less, all over the world….(More)”.

How Low and Middle-Income Countries Are Innovating to Combat Covid


Article by Ben Ramalingam, Benjamin Kumpf, Rahul Malhotra and Merrick Schaefer: “Since the Covid-19 pandemic hit, innovators around the world have developed thousands of novel solutions and practical approaches to this unprecedented global health challenge. About one-fifth of those innovations have come from low- and middle-income countries across sub-Saharan Africa, South Asia, and Latin America, according to our analysis of WHO data, and they work to address the needs of poor, marginalized, or excluded communities at the so-called bottom of the pyramid.

Over the past year we’ve been able to learn from and support some of those inspiring innovators. Their approaches are diverse in scope and scale and cover a vast range of pandemic response needs — from infection prevention and control to community engagement, contact tracing, social protection, business continuity, and more.

Here we share seven lessons from those innovators that offer promising insights not only for the ongoing Covid response but also for how we think about, manage, and enable innovation.

1. Ensure that your solutions are sensitive to social and cultural dynamics. 

Successful innovations are relevant to the lived realities of the people they’re intended to help. Socially and culturally sensitive design approaches see greater uptake and use. This is true in both resource-constrained and resource-rich environments.

Take contact tracing in Kenya. In a context where more than half of all residents use public transportation every day, the provider of a ticketing app for Nairobi’s bus fleets adapted its software to collect real-time passenger data. The app has been used across one of the world’s most mobile populations to trace Covid-19 cases, identify future clusters, trigger automated warnings to exposed passengers, and monitor the maximum number of people that could safely be allowed in each vehicle….(More)”.

Be Skeptical of Thought Leaders


Book Review by Evan Selinger: “Corporations regularly advertise their commitment to “ethics.” They often profess to behave better than the law requires and sometimes may even claim to make the world a better place. Google, for example, trumpets its commitment to “responsibly” developing artificial intelligence and swears it follows lofty AI principles that include being “socially beneficial” and “accountable to people,” and that “avoid creating or reinforcing unfair bias.”

Google’s recent treatment of Timnit Gebru, the former co-leader of its ethical AI team, tells another story. After Gebru went through an antagonistic internal review process for a co-authored paper exploring social and environmental risks, and after she expressed concern over justice issues within Google, the company didn’t congratulate her for a job well done. Instead, she and her vocally supportive colleague Margaret Mitchell (the other co-leader) were “forced out.” Google’s behavior “perhaps irreversibly damaged” the company’s reputation. It was hard not to conclude that corporate values misalign with the public good.

Even as tech companies continue to display hypocrisy, there might still be good reasons to have high hopes for their behavior in the future. Suppose corporations can do better than ethics washing, virtue signaling, and making incremental improvements that don’t challenge aggressive plans for financial growth. If so, society desperately needs to know what it takes to bring about dramatic change. On paper, Susan Liautaud is the right person to turn to for help. She has impressive academic credentials (a PhD in Social Policy from the London School of Economics and a JD from Columbia University Law School), founded and manages an ethics consulting firm with an international reach, and teaches ethics courses at Stanford University.

In The Power of Ethics: How to Make Good Choices in a Complicated World, Liautaud pursues a laudable goal: democratize the essential practical steps for making responsible decisions in a confusing and complex world. While the book is pleasantly accessible, it has glaring faults. With so much high-quality critical journalistic coverage of technologies and tech companies, we should expect more from long-form analysis.

Although ethics is more widely associated with dour finger-waving than aspirational world-building, Liautaud mostly crafts an upbeat and hopeful narrative, albeit not so cheerful that she denies the obvious pervasiveness of shortsighted mistakes and blatant misconduct. The problem is that she insists ethical values and technological development pair nicely. Big Tech might be exerting increasing control over our lives, exhibiting an oversized influence on public welfare through incursions into politics, education, social communication, space travel, national defense, policing, and currency — but this doesn’t in the least quell her enthusiasm, which remains elevated enough throughout her book to affirm the power of the people. Hyperbolically, she declares, “No matter where you stand […] you have the opportunity to prevent the monopolization of ethics by rogue actors, corporate giants, and even well-intentioned scientists and innovators.”…(More)“.