Ethics and governance of artificial intelligence for health


The WHO guidance on Ethics and Governance of Artificial Intelligence for Health is the product of eighteen months of deliberation amongst leading experts in ethics, digital technology, law and human rights, as well as experts from Ministries of Health. While new technologies that use artificial intelligence hold great promise to improve diagnosis, treatment, health research and drug development, and to support governments in carrying out public health functions, including surveillance and outbreak response, such technologies, according to the report, must put ethics and human rights at the heart of their design, deployment, and use.

The report identifies the ethical challenges and risks associated with the use of artificial intelligence in health, and sets out six consensus principles to ensure AI works to the public benefit of all countries. It also contains a set of recommendations to ensure that the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders – in the public and private sector – accountable and responsive to the healthcare workers who will rely on these technologies and to the communities and individuals whose health will be affected by their use…(More)”

National strategies on Artificial Intelligence: A European perspective


Report by the European Commission’s Joint Research Centre (JRC) and the OECD’s Science, Technology and Innovation Directorate: “Artificial intelligence (AI) is transforming the world in many aspects. It is essential for Europe to consider how to make the most of the opportunities from this transformation and to address its challenges. In 2018 the European Commission adopted the Coordinated Plan on Artificial Intelligence that was developed together with the Member States to maximise the impact of investments at European Union (EU) and national levels, and to encourage synergies and cooperation across the EU.

One of the key actions towards these aims was an encouragement for the Member States to develop their national AI strategies. The review of national strategies is one of the tasks of AI Watch, launched by the European Commission to support the implementation of the Coordinated Plan on Artificial Intelligence.

Building on the 2020 AI Watch review of national strategies, this report presents an updated review of national AI strategies from the EU Member States, Norway and Switzerland. By June 2021, 20 Member States and Norway had published national AI strategies, while 7 Member States were in the final drafting phase. Since the 2020 release of the AI Watch report, additional Member States – i.e. Bulgaria, Hungary, Poland, Slovenia, and Spain – have published strategies, while Cyprus, Finland and Germany have revised their initial strategies.

This report provides an overview of national AI policies according to the following policy areas: Human capital, From the lab to the market, Networking, Regulation, and Infrastructure. These policy areas are consistent with the actions proposed in the Coordinated Plan on Artificial Intelligence and with the policy recommendations to governments contained in the OECD Recommendation on AI. The report also includes a section on AI policies to address societal challenges of the COVID-19 pandemic and climate change….(More)”.

To regulate AI, try playing in a sandbox


Article by Dan McCarthy: “For an increasing number of regulators, researchers, and tech developers, the word “sandbox” is just as likely to evoke rulemaking and compliance as it is to conjure images of children digging, playing, and building. Which is kinda the point.

That’s thanks to the rise of regulatory sandboxes, which allow organizations to develop and test new technologies in a low-stakes, monitored environment before rolling them out to the general public. 

Supporters, from both the regulatory and the business sides, say sandboxes can strike the right balance of reining in potentially harmful technologies without kneecapping technological progress. They can also help regulators build technological competency and clarify how they’ll enforce laws that apply to tech. And while regulatory sandboxes originated in financial services, there’s growing interest in using them to police artificial intelligence—an urgent task as AI is expanding its reach while remaining largely unregulated. 

Even for all of its promise, experts told us, the approach should be viewed not as a silver bullet for AI regulation, but instead as a potential step in the right direction. 

Rashida Richardson, an AI researcher and visiting scholar at Rutgers Law School, is generally critical of AI regulatory sandboxes, but still said “it’s worth testing out ideas like this, because there is not going to be any universal model to AI regulation, and to figure out the right configuration of policy, you need to see theoretical ideas in practice.” 

But waiting for the theoretical to become concrete will take time. For example, in April, the European Union proposed AI regulation that would establish regulatory sandboxes to help the EU achieve its aim of responsible AI innovation, mentioning the word “sandbox” 38 times, compared to related terms like “impact assessment” (13 mentions) and “audit” (four). But it will likely take years for the EU’s proposal to become law. 

In the US, some well-known AI experts are working on an AI sandbox prototype, but regulators are not yet in the picture. However, the world’s first and (so far) only AI-specific regulatory sandbox did roll out in Norway this March, as a way to help companies comply with AI-specific provisions of the EU’s General Data Protection Regulation (GDPR). The project provides an early window into how the approach can work in practice.

“It’s a place for mutual learning—if you can learn earlier in the [product development] process, that is not only good for your compliance risk, but it’s really great for building a great product,” according to Erlend Andreas Gjære, CEO and cofounder of Secure Practice, an information security (“infosec”) startup that is one of four participants in Norway’s new AI regulatory sandbox….(More)”

How Does Artificial Intelligence Work?


BuiltIn: “Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: “Can machines think?” 

Turing’s paper “Computing Machinery and Intelligence” (1950), and its subsequent Turing Test, established the fundamental goal and vision of artificial intelligence.   

At its core, AI is the branch of computer science that aims to answer Turing’s question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.

The expansive goal of artificial intelligence has given rise to many questions and debates, so much so that no singular definition of the field is universally accepted.

The major limitation in defining AI as simply “building machines that are intelligent” is that it doesn’t actually explain what artificial intelligence is. What makes a machine intelligent?

In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is “the study of agents that receive percepts from the environment and perform actions.” (Russell and Norvig viii)

Norvig and Russell go on to explore four different approaches that have historically defined the field of AI: 

  1. Thinking humanly
  2. Thinking rationally
  3. Acting humanly 
  4. Acting rationally

The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting “all the skills needed for the Turing Test also allow an agent to act rationally.” (Russell and Norvig 4)
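The percept-to-action definition above can be made concrete with a minimal sketch. The agent interface below, and the two-square vacuum world it acts in, are the textbook's classic illustrative example; the class and method names here are our own, not code from Russell and Norvig.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """An agent, in Russell and Norvig's sense: it receives percepts
    from the environment and performs actions."""
    @abstractmethod
    def act(self, percept):
        """Map a percept to an action."""
        ...

class ReflexVacuumAgent(Agent):
    """Simple reflex agent for the two-square vacuum world:
    percepts are (location, status) pairs."""
    def act(self, percept):
        location, status = percept
        if status == "dirty":
            return "suck"
        # Square is clean: move to the other square.
        return "right" if location == "A" else "left"

agent = ReflexVacuumAgent()
print(agent.act(("A", "dirty")))  # suck
print(agent.act(("A", "clean")))  # right
```

This agent "acts rationally" only in a very narrow sense: it achieves the best outcome for its simple world, which is exactly the distinction the four approaches above are drawn to capture.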

Patrick Winston, the Ford professor of artificial intelligence and computer science at MIT, defines AI as  “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”…(More)”.

Tasks, Automation, and the Rise in US Wage Inequality


Paper by Daron Acemoglu & Pascual Restrepo: “We document that between 50% and 70% of changes in the US wage structure over the last four decades are accounted for by the relative wage declines of worker groups specialized in routine tasks in industries experiencing rapid automation. We develop a conceptual framework where tasks across a number of industries are allocated to different types of labor and capital. Automation technologies expand the set of tasks performed by capital, displacing certain worker groups from employment opportunities for which they have comparative advantage. This framework yields a simple equation linking wage changes of a demographic group to the task displacement it experiences.

We report robust evidence in favor of this relationship and show that regression models incorporating task displacement explain much of the changes in education differentials between 1980 and 2016. Our task displacement variable captures the effects of automation technologies (and to a lesser degree offshoring) rather than those of rising market power, markups or deunionization, which themselves do not appear to play a major role in US wage inequality. We also propose a methodology for evaluating the full general equilibrium effects of task displacement (which include induced changes in industry composition and ripple effects as tasks are reallocated across different groups). Our quantitative evaluation based on this methodology explains how major changes in wage inequality can go hand-in-hand with modest productivity gains….(More)”.

NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems


NIST’s new publication proposes a list of nine factors that contribute to a human’s potential trust in an AI system. A person may weigh the nine factors differently depending on both the task itself and the risk involved in trusting the AI’s decision. As an example, two different AI programs — a music selection algorithm and an AI that assists with cancer diagnosis — may score the same on all nine criteria. Users, however, might be inclined to trust the music selection algorithm but not the medical assistant, which is performing a far riskier task. Credit: N. Hanacek/NIST

National Institute of Standards and Technology (NIST): “Every time you speak to a virtual assistant on your smartphone, you are talking to an artificial intelligence — an AI that can, for example, learn your taste in music and make song recommendations that improve based on your interactions. However, AI also assists us with more risk-fraught activities, such as helping doctors diagnose cancer. These are two very different scenarios, but the same issue permeates both: How do we humans decide whether or not to trust a machine’s recommendations? 
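The idea that identical factor scores can yield different trust depending on task risk can be sketched as a simple weighted aggregate. Note this is an illustration of the intuition only: the factor names, the linear weighting, and the numbers below are invented for the example and are not NIST's formula.

```python
def trust_score(factor_scores, weights):
    """Weighted average of per-factor trust scores (each in [0, 1]).
    Illustrative only; NIST does not prescribe this aggregation."""
    total_weight = sum(weights.values())
    return sum(factor_scores[f] * w for f, w in weights.items()) / total_weight

# The same system-level scores...
scores = {"accuracy": 0.9, "reliability": 0.9, "explainability": 0.6}

# ...weighed casually for a music recommender, and stringently for a
# clinical assistant, where every factor matters for a high-risk task.
casual = {"accuracy": 3, "reliability": 2, "explainability": 1}
clinical = {"accuracy": 3, "reliability": 3, "explainability": 3}

print(round(trust_score(scores, casual), 3))
print(round(trust_score(scores, clinical), 3))
```

With identical underlying scores, the clinical weighting produces a lower aggregate because the system's weakest factor (explainability) counts fully, mirroring the report's music-versus-diagnosis example.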

This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems. The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. 

The report contributes to the broader NIST effort to help advance trustworthy AI systems. The focus of this latest publication is to understand how humans experience trust as they use or are affected by AI systems….(More)”.

Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade


Report by Pew Research Center: “Artificial intelligence systems “understand” and shape a lot of what happens in people’s lives. AI applications “speak” to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk.

They help people drive from point A to point B and update traffic information to shorten travel times. They are the operating system of driverless vehicles. They sift applications to make recommendations about job candidates. They determine the material that is offered up in people’s newsfeeds and video choices.

They recognize people’s faces, translate languages and suggest how to complete people’s sentences or search queries. They can “read” people’s emotions. They beat them at sophisticated games. They write news stories, paint in the style of Vincent Van Gogh and create music that sounds quite like the Beatles and Bach.

Corporations and governments are charging ever more expansively into AI development. Increasingly, nonprogrammers can set up off-the-shelf, pre-built AI tools as they prefer.

As this unfolds, a number of experts and advocates around the world have become worried about the long-term impact and implications of AI applications. They have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will. Dozens of convenings and study groups have issued papers proposing what the tenets of ethical AI design should be, and government working teams have tried to address these issues. In light of this, Pew Research Center and Elon University’s Imagining the Internet Center asked experts where they thought efforts aimed at creating ethical artificial intelligence would stand in the year 2030….(More)”

Citizens ‘on mute’ in digital public service delivery


Blog by Sarah Giest at Data and Policy: “Various countries are digitalizing their welfare system in the larger context of austerity considerations and fraud detection goals, but these changes are increasingly under scrutiny. In short, digitalization of the welfare system means that with the help of mathematical models, data and/or the combination of different administrative datasets, algorithms issue a decision on, for example, an application for social benefits (Dencik and Kaun 2020).
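The decision mechanism Giest describes, linking administrative datasets and feeding them through a model to produce a benefits decision, can be sketched in miniature. Everything here is hypothetical: the dataset fields, thresholds, and decision labels are invented to illustrate the mechanism, not drawn from any real welfare system.

```python
def decide_application(applicant_id, tax_records, residency_records, risk_model):
    """Toy automated benefits decision: combine two administrative
    datasets with a model-produced risk score. Illustrative only."""
    income = tax_records.get(applicant_id, {}).get("income", 0)
    resident = residency_records.get(applicant_id, {}).get("resident", False)
    risk = risk_model(applicant_id)  # e.g. a fraud-risk score in [0, 1]

    if not resident:
        return "deny"
    if risk > 0.8:
        # Opaque thresholds like this one are exactly where citizens
        # end up "on mute": the cutoff is rarely visible or contestable.
        return "flag_for_review"
    return "approve" if income < 25_000 else "deny"

tax = {"a1": {"income": 18_000}}
res = {"a1": {"resident": True}}
print(decide_application("a1", tax, res, lambda _: 0.1))  # approve
```

Even this toy shows the information asymmetry at issue: the applicant sees only the final label, while the linked datasets and the risk threshold that produced it remain hidden.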

Several examples exist where such systems have led to unfair treatment of welfare recipients. In Europe, the Dutch SyRI system has been banned by court, due to human rights violations in the profiling of welfare recipients, and the UK has found errors in the automated processes leading to financial hardship among citizens. In the United States and Canada, automated systems led to false underpayment or denial of benefits. A recent UN report (2019) even warns that countries are ‘stumbling zombie-like into a digital welfare dystopia’. Further, studies raise alarm that this process of digitalization is done in a way that not only creates excessive information asymmetry between government and citizens, but also disadvantages certain groups more than others.

A closer look at the Dutch Childcare Allowance case highlights this. In this example, low-income parents were regarded as fraudsters by the Tax Authorities if they had incorrectly filled out any documents. An automated and algorithm-based procedure then also singled out dual-nationality families. The victims lost their allowance without having been given any reasons. Even worse, benefits already received were reclaimed. This led to individual hardship, where financial troubles and the categorization as a fraudster by government pushed citizens into a chain of events ranging from unpaid healthcare insurance and the inability to visit a doctor to job loss, potential home loss and mental health concerns (Volkskrant 2020)….(More)”.

Selected Readings on the Use of Artificial Intelligence in the Public Sector


By Kateryna Gazaryan and Uma Kalkar

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works focuses on algorithms and artificial intelligence in the public sector.

As Artificial Intelligence becomes more developed, governments have turned to it to improve the speed and quality of public sector service delivery, among other objectives. Below, we provide a selection of recent literature that examines how the public sector has adopted AI to serve constituents and solve public problems. While the use of AI in governments can cut down costs and administrative work, these technologies are often early in development and difficult for organizations to understand and control, with potentially harmful effects as a result. As such, this selected reading explores not only the use of artificial intelligence in governance but also its benefits and consequences.

Readings are listed in alphabetical order.

Berryhill, Jamie, Kévin Kok Heang, Rob Clogher, and Keegan McBride. “Hello, World: Artificial intelligence and its use in the public sector.” OECD Working Papers on Public Governance no. 36 (2019): https://doi.org/10.1787/726fd39d-en.

This working paper emphasizes the importance of defining AI for the public sector and outlining use cases of AI within governments. It provides a map of 50 countries that have implemented or set in motion the development of AI strategies and highlights where and how these initiatives are cross-cutting, innovative, and dynamic. Additionally, the piece provides policy recommendations governments should consider when exploring public AI strategies to adopt holistic and humanistic approaches.

Kuziemski, Maciej, and Gianluca Misuraca. “AI Governance in the Public Sector: Three Tales from the Frontiers of Automated Decision-Making in Democratic Settings.” Telecommunications Policy 44, no. 6 (2020): 101976. 

Kuziemski and Misuraca explore how the use of artificial intelligence in the public sector can exacerbate existing power imbalances between the public and the government. They consider the European Union’s artificial intelligence “governance and regulatory frameworks” and compare these policies with those of Canada, Finland, and Poland. Drawing on previous scholarship, the authors outline the goals, drivers, barriers, and risks of incorporating artificial intelligence into public services and assess existing regulations against these factors. Ultimately, they find that the “current AI policy debate is heavily skewed towards voluntary standards and self-governance” while minimizing the influence of power dynamics between governments and constituents. 

Misuraca, Gianluca, and Colin van Noordt. “AI Watch, Artificial Intelligence in Public Services: Overview of the Use and Impact of AI in Public Services in the EU.” 30255 (2020).

This study provides “evidence-based scientific support” for the European Commission as it navigates AI regulation via an overview of ways in which European Union member-states use AI to enhance their public sector operations. While AI has the potential to positively disrupt existing policies and functionalities, this report finds gaps in how AI gets applied by governments. It suggests the need for further research centered on the humanistic, ethical, and social ramification of AI use and a rigorous risk assessment from a “public-value perspective” when implementing AI technologies. Additionally, efforts must be made to empower all European countries to adopt responsible and coherent AI policies and techniques.

Saldanha, Douglas Morgan Fullin, and Marcela Barbosa da Silva. “Transparency and Accountability of Government Algorithms: The Case of the Brazilian Electronic Voting System.” Cadernos EBAPE.BR 18 (2020): 697–712.

Saldanha and da Silva note that open data and open government revolutions have increased citizen demand for algorithmic transparency. Algorithms are increasingly used by governments to speed up processes and reduce costs, but their black-box systems and lack of explainability allow implicit and explicit bias and discrimination to enter their calculations. The authors conduct a qualitative study of the “practices and characteristics of the transparency and accountability” in the Brazilian e-voting system across seven dimensions: consciousness; access and reparations; accountability; explanation; data origin, privacy and justice; auditing; and validation, precision and tests. They find the Brazilian e-voting system fulfilled the need to inform citizens about the benefits and consequences of data collection and algorithm use but severely lacked in demonstrating accountability and opening algorithm processes for citizen oversight. They put forth policy recommendations to increase the e-voting system’s accountability to Brazilians and strengthen auditing and oversight processes to reduce the current distrust in the system.

Sharma, Gagan Deep, Anshita Yadav, and Ritika Chopra. “Artificial intelligence and effective governance: A review, critique and research agenda.” Sustainable Futures 2 (2020): 100004.

This paper conducts a systematic review of the literature of how AI is used across different branches of government, specifically, healthcare, information, communication, and technology, environment, transportation, policy making, and economic sectors. Across the 74 papers surveyed, the authors find a gap in the research on selecting and implementing AI technologies, as well as their monitoring and evaluation. They call on future research to assess the impact of AI pre- and post-adoption in governance, along with the risks and challenges associated with the technology.

Tallerås, Kim, Terje Colbjørnsen, Knut Oterholm, and Håkon Larsen. “Cultural Policies, Social Missions, Algorithms and Discretion: What Should Public Service Institutions Recommend?” Part of the Lecture Notes in Computer Science book series (2020).

Tallerås et al. examine how the use of algorithms by public services, such as public radio and libraries, influence broader society and culture. For instance, to modernize their offerings, Norway’s broadcasting corporation (NRK) has adopted online platforms similar to popular private streaming services. However, NRK’s filtering process has faced “exposure diversity” problems that narrow recommendations to already popular entertainment and move Norway’s cultural offerings towards a singularity. As a public institution, NRK is required to “fulfill […] some cultural policy goals,” raising the question of how public media services can remain relevant in the era of algorithms fed by “individualized digital culture.” Efforts are currently underway to employ recommendation systems that balance cultural diversity with personalized content relevance that engage individuals and uphold the socio-cultural mission of public media.

Vogl, Thomas, Cathrine Seidelin, Bharath Ganesh, and Jonathan Bright. “Smart Technology and the Emergence of Algorithmic Bureaucracy: Artificial Intelligence in UK Local Authorities.” Public Administration Review 80, no. 6 (2020): 946–961.

Local governments are using “smart technologies” to create more efficient and effective public service delivery. These tools are twofold: not only do they help the public interact with local authorities, they also streamline the tasks of government officials. To better understand the digitization of local government, the authors conducted surveys, desk research, and in-depth interviews with stakeholders from local British governments to understand reasoning, processes, and experiences within a changing government framework. Vogl et al. found an increase in “algorithmic bureaucracy” at the local level to reduce administrative tasks for government employees, generate feedback loops, and use data to enhance services. While the shift toward digital local government demonstrates initiatives to utilize emerging technology for public good, further research is required to determine which demographics are not involved in the design and implementation of smart technology services and how to identify and include these audiences.

Wirtz, Bernd W., Jan C. Weyerer, and Carolin Geyer. “Artificial intelligence and the public sector—Applications and challenges.” International Journal of Public Administration 42, no. 7 (2019): 596-615.

The authors provide an extensive review of the existing literature on AI uses and challenges in the public sector to identify the gaps in current applications. The developing nature of AI in public service has led to differing definitions of what constitutes AI and what are the risks and benefits it poses to the public. As well, the authors note the lack of focus on the downfalls of AI in governance, with studies tending to primarily focus on the positive aspects of the technology. From this qualitative analysis, the researchers highlight ten AI applications: knowledge management, process automation, virtual agents, predictive analytics and data visualization, identity analytics, autonomous systems, recommendation systems, digital assistants, speech analytics, and threat intelligence. As well, they note four challenge dimensions—technology implementation, laws and regulation, ethics, and society. From these applications and risks, Wirtz et al. provide a “checklist for public managers” to make informed decisions on how to integrate AI into their operations. 

Wirtz, Bernd W., Jan C. Weyerer, and Benjamin J. Sturm. “The dark sides of artificial intelligence: An integrated AI governance framework for public administration.” International Journal of Public Administration 43, no. 9 (2020): 818-829.

As AI is increasingly popularized and picked up by governments, Wirtz et al. highlight the lack of research on the challenges and risks—specifically, privacy and security—associated with implementing AI systems in the public sector. After assessing existing literature and uncovering gaps in the main governance frameworks, the authors outline the three areas of challenges of public AI: law and regulations, society, and ethics. Last, they propose an “integrated AI governance framework” that takes into account the risks of AI for a more holistic “big picture” approach to AI in the public sector.

Zuiderwijk, Anneke, Yu-Che Chen, and Fadi Salem. “Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda.” Government Information Quarterly (2021): 101577.

Following a literature review on the risks and possibilities of AI in the public sector, Zuiderwijk, Chen, and Salem design a research agenda centered around the “implications of the use of AI for public governance.” The authors provide eight process recommendations, including: avoiding superficial buzzwords in research; conducting domain- and locality-specific research on AI in governance; shifting from qualitative analysis to diverse research methods; applying private sector “practice-driven research” to public sector study; furthering quantitative research on AI use by governments; creating “explanatory research designs”; sharing data for broader study; and adopting multidisciplinary reference theories. Further, they note the need for scholarship to delve into best practices, risk management, stakeholder communication, multisector use, and impact assessments of AI in the public sector to help decision-makers make informed decisions on the introduction, implementation, and oversight of AI in the public sector.

Introducing the AI Localism Repository


The GovLab: “Artificial intelligence is here to stay. As this technology advances—both in its complexity and ubiquity across our societies—decision-makers must address the growing nuances of AI regulation and oversight. Early last year, The GovLab’s Stefaan Verhulst and Mona Sloane coined the term “AI localism” to describe how local governments have stepped up to regulate AI policies, design governance frameworks, and monitor AI use in the public sector. 

While top-level regulation remains scant, many municipalities have taken to addressing AI use in their communities. Today, The GovLab is proud to announce the soft launch of the AI Localism Repository. This living platform is a curated collection of AI localism initiatives across the globe categorized by geographic regions, types of technological and governmental innovation in AI regulation, mechanisms of governance, and sector focus. 

We invite visitors to explore this repository and learn more about the inventive measures cities are taking to control how, when, and why AI is being used by public authorities. We also welcome additional case study submissions, which can be sent to us via Google Form….(More)”