Ethical Governance of Artificial Intelligence in the Public Sector


Book by Liza Ireni-Saban and Maya Sherman: “This book argues that ethical evaluation of AI should be an integral part of public service ethics and that an effective normative framework is needed to provide ethical principles and evaluation for decision-making in the public sphere, at both local and international levels.

It introduces how the tenets of prudential rationality ethics, through critical engagement with intersectionality, can contribute to a more successful negotiation of the challenges created by technological innovations in AI and afford a relational, interactive, flexible and fluid framework that meets the features of AI research projects, so that core public and individual values are still honoured in the face of technological development….(More)”.

Hundreds of AI tools have been built to catch covid. None of them helped.


Article by Will Douglas Heaven: “When covid-19 struck Europe in March 2020, hospitals were plunged into a health crisis that was still badly understood. “Doctors really didn’t have a clue how to manage these patients,” says Laure Wynants, an epidemiologist at Maastricht University in the Netherlands, who studies predictive tools.

But there was data coming out of China, which had a four-month head start in the race to beat the pandemic. If machine-learning algorithms could be trained on that data to help doctors understand what they were seeing and make decisions, it just might save lives. “I thought, ‘If there’s any time that AI could prove its usefulness, it’s now,’” says Wynants. “I had my hopes up.”

It never happened—but not for lack of effort. Research teams around the world stepped up to help. The AI community, in particular, rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines—in theory.

In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.

That’s the damning conclusion of multiple studies published in the last few months. In June, the Alan Turing Institute, the UK’s national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid.

Not fit for clinical use

This echoes the results of two major studies that assessed hundreds of predictive tools developed last year. Wynants is lead author of one of them, a review in the British Medical Journal that is still being updated as new tools are released and existing ones tested. She and her colleagues have looked at 232 algorithms for diagnosing patients or predicting how sick those with the disease might get. They found that none of them were fit for clinical use. Just two have been singled out as being promising enough for future testing.

“It’s shocking,” says Wynants. “I went into it with some worries, but this exceeded my fears.”

Wynants’s study is backed up by another large review carried out by Derek Driggs, a machine-learning researcher at the University of Cambridge, and his colleagues, and published in Nature Machine Intelligence. This team zoomed in on deep-learning models for diagnosing covid and predicting patient risk from medical images, such as chest x-rays and chest computed tomography (CT) scans. They looked at 415 published tools and, like Wynants and her colleagues, concluded that none were fit for clinical use….(More)”.

A comprehensive study of technological change


Article by Scott Murray: “The societal impacts of technological change can be seen in many domains, from messenger RNA vaccines and automation to drones and climate change. The pace of that technological change can affect its impact, and how quickly a technology improves in performance can be an indicator of its future importance. For decision-makers like investors, entrepreneurs, and policymakers, predicting which technologies are fast improving (and which are overhyped) can mean the difference between success and failure.

New research from MIT aims to assist in the prediction of technology performance improvement using U.S. patents as a dataset. The study describes 97 percent of the U.S. patent system as a set of 1,757 discrete technology domains, and quantitatively assesses each domain for its improvement potential.

“The rate of improvement can only be empirically estimated when substantial performance measurements are made over long time periods,” says Anuraag Singh SM ’20, lead author of the paper. “In some large technological fields, including software and clinical medicine, such measures have rarely, if ever, been made.”

A previous MIT study provided empirical measures for 30 technological domains, but the patent sets identified for those technologies cover less than 15 percent of the patents in the U.S. patent system. The major purpose of this new study is to provide predictions of the performance improvement rates for the thousands of domains not accessed by empirical measurement. To accomplish this, the researchers developed a method using a new probability-based algorithm, machine learning, natural language processing, and patent network analytics….(More)”.
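The study's full pipeline combines patent network analytics with probability models, but one ingredient it names, natural language processing over patent text, can be sketched in toy form: assigning a patent abstract to the closest technology domain by word-vector similarity. The domain keyword profiles and abstract below are invented for illustration and are not the study's actual data or algorithm.

```python
# Toy sketch: assign a patent abstract to a technology domain by
# cosine similarity between word-count vectors. All domain profiles
# and text are hypothetical; the MIT study's actual pipeline differs.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(abstract: str, domains: dict) -> str:
    """Return the domain whose keyword profile is most similar to the abstract."""
    words = Counter(abstract.lower().split())
    return max(domains, key=lambda d: cosine(words, domains[d]))

domains = {
    "batteries": Counter("lithium cell electrode charge battery".split()),
    "software": Counter("data algorithm memory software code".split()),
}
print(classify("a lithium electrode for a rechargeable cell", domains))
# batteries
```

The real study draws on far richer signals (citation-network centrality, probability models), so this only gestures at the text-matching component.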

Machine Learning and Mobile Phone Data Can Improve the Targeting of Humanitarian Assistance


Paper by Emily Aiken et al.: “The COVID-19 pandemic has devastated many low- and middle-income countries (LMICs), causing widespread food insecurity and a sharp decline in living standards. In response to this crisis, governments and humanitarian organizations worldwide have mobilized targeted social assistance programs. Targeting is a central challenge in the administration of these programs: given available data, how does one rapidly identify the individuals and families with the greatest need? This challenge is particularly acute in the large number of LMICs that lack recent and comprehensive data on household income and wealth.

Here we show that non-traditional “big” data from satellites and mobile phone networks can improve the targeting of anti-poverty programs. Our approach uses traditional survey-based measures of consumption and wealth to train machine learning algorithms that recognize patterns of poverty in non-traditional data; the trained algorithms are then used to prioritize aid to the poorest regions and mobile subscribers. We evaluate this approach by studying Novissi, Togo’s flagship emergency cash transfer program, which used these algorithms to determine eligibility for a rural assistance program that disbursed millions of dollars in COVID-19 relief aid. Our analysis compares outcomes – including exclusion errors, total social welfare, and measures of fairness – under different targeting regimes. Relative to the geographic targeting options considered by the Government of Togo at the time, the machine learning approach reduces errors of exclusion by 4-21%. Relative to methods that require a comprehensive social registry (a hypothetical exercise; no such registry exists in Togo), the machine learning approach increases exclusion errors by 9-35%. These results highlight the potential for new data sources to contribute to humanitarian response efforts, particularly in crisis settings when traditional data are missing or out of date….(More)”.
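The targeting step described above, ranking people by a predicted poverty measure and then measuring errors of exclusion against ground truth, can be illustrated with a minimal sketch. The households, scores, and budget below are invented; the actual study trains its predictions on survey, satellite, and mobile phone data.

```python
# Minimal sketch of proxy-based aid targeting and the exclusion-error
# metric. All data here is hypothetical, for illustration only.

def target_poorest(predicted_wealth, budget):
    """Select the `budget` households with the lowest predicted wealth."""
    ranked = sorted(predicted_wealth, key=predicted_wealth.get)
    return ranked[:budget]

def exclusion_error(truly_poor, selected):
    """Fraction of truly poor households NOT selected for aid."""
    missed = set(truly_poor) - set(selected)
    return len(missed) / len(truly_poor)

# Hypothetical predicted wealth index per household (lower = poorer),
# as a machine-learning model might produce from non-traditional data.
predicted = {"A": 0.2, "B": 0.9, "C": 0.1, "D": 0.5, "E": 0.7}
truly_poor = ["A", "C", "D"]  # ground truth from a follow-up survey

selected = target_poorest(predicted, budget=3)
print(selected)                               # ['C', 'A', 'D']
print(exclusion_error(truly_poor, selected))  # 0.0
```

Shrinking the budget shows the trade-off the paper quantifies: with `budget=2`, household D is missed and the exclusion error rises to 1/3.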

Governing smart cities: policy benchmarks for ethical and responsible smart city development


Report by the World Economic Forum: “… provides a benchmark for cities looking to establish policies for ethical and responsible governance of their smart city programmes. It explores current practices relating to five foundational policies: ICT accessibility, privacy impact assessment, cyber accountability, digital infrastructure and open data. The findings are based on surveys and interviews with policy experts and city government officials from the Alliance’s 36 “Pioneer Cities”. The data and insights presented in the report come from an assessment of detailed policy elements rather than the high-level indicators often used in maturity frameworks….(More)”.

When Machines Can Be Judge, Jury, and Executioner


Book by Katherine B. Forrest on “Justice in the Age of Artificial Intelligence”: “This book explores justice in the age of artificial intelligence. It argues that current AI tools used in connection with liberty decisions are based on utilitarian frameworks of justice and inconsistent with individual fairness reflected in the US Constitution and Declaration of Independence. It uses AI risk assessment tools and lethal autonomous weapons as examples of how AI influences liberty decisions. The algorithmic design of AI risk assessment tools can and does embed human biases. Designers and users of these AI tools have allowed some degree of compromise to exist between accuracy and individual fairness.

Written by a former federal judge who lectures widely and frequently on AI and the justice system, this book is the first comprehensive presentation of the theoretical framework underlying AI tools in the criminal justice system and lethal autonomous weapons used in decision-making. It then explains why this is so, tracing the evolution of the debate regarding racial and other biases embedded in such tools. No other book delves as comprehensively into the theory and practice of AI risk assessment tools….(More)”.

Analytical modelling and UK Government policy


Paper by Marie Oldfield & Ella Haig: “In the last decade, the UK Government has attempted to implement improved processes and procedures in modelling and analysis in response to the Laidlaw report of 2012 and the Macpherson review of 2013. The Laidlaw report was commissioned after failings during the Intercity West Coast Rail (ICWC) franchise procurement exercise by the Department for Transport (DfT) that led to a legal challenge of the analytical models used within the exercise. The Macpherson review looked into the quality assurance of Government analytical models in the context of the experience with the Intercity West Coast franchise competition. This paper examines what progress has been made in model building and best practice in government in the 8 years since the Laidlaw report. It also discusses the Lords Science and Technology Committee inquiry of June 2020, which analysed the failings in the modelling of COVID-19. Despite those models going on to influence policy, many of the same issues raised in the Laidlaw and Macpherson reports were also present in the Committee’s inquiry. We examine the technical and organisational challenges to progress in this area and make recommendations for a way forward….(More)”.

Government algorithms are out of control and ruin lives



Nani Jansen Reventlow at Open Democracy: “Government services are increasingly being automated and technology is relied on more and more to make crucial decisions about our lives and livelihoods. This includes decisions about what type of support we can access in times of need: welfare, benefits, and other government services.

Technology has the potential to not only reproduce but also amplify structural inequalities in our societies. If you combine this drive for automation with a broader context of criminalising poverty and systemic racism, this can have disastrous effects.

A recent example is the ‘child benefits scandal’ that brought down the Dutch government at the start of 2021. In the Netherlands, working parents are eligible for a government contribution toward the costs of daycare. This can run up to 90% of the actual costs for those with a low income. While contributions are often directly paid to childcare providers, parents are responsible for them. This means that, if the tax authorities determine that any allowance was wrongfully paid out, parents are liable for repaying them.

To detect cases of fraud, the Dutch tax authorities used a system that was outright discriminatory. An investigation by the Dutch Data Protection Authority last year showed that parents were singled out for special scrutiny because of their ethnic origin or dual nationality.  “The whole system was organised in a discriminatory manner and was also used as such,” it stated.

The fallout of these ‘fraud detection’ efforts was enormous. It is currently estimated that 46,000 parents were wrongly accused of having fraudulently claimed child care allowances. Families were forced to repay tens of thousands of euros, leading to financial hardship, loss of livelihood, homes, and in one case, even loss of life – one parent died by suicide. While we can still hope that justice for these families won’t be denied, it will certainly be delayed: this weekend, it became clear that it could take up to ten years to handle all claims. An unacceptable timeline, given how precarious the situation will be for many of those affected….(More)”.

Transparency’s AI Problem


Paper by Hannah Bloch-Wehba: “A consensus seems to be emerging that algorithmic governance is too opaque and ought to be made more accountable and transparent. But algorithmic governance underscores the limited capacity of transparency law—the Freedom of Information Act and its state equivalents—to promote accountability. Drawing on the critical literature on “open government,” this Essay shows that algorithmic governance reflects and amplifies systemic weaknesses in the transparency regime, including privatization, secrecy, private sector cooptation, and reactive disclosure. These deficiencies highlight the urgent need to reorient transparency and accountability law toward meaningful public engagement in ongoing oversight. This shift requires rethinking FOIA’s core commitment to public disclosure of agency records, exploring instead alternative ways to empower the public and to shed light on decisionmaking. The Essay argues that new approaches to transparency and accountability for algorithmic governance should be independent of private vendors, and ought to adequately represent the interests of affected individuals and communities. These considerations, of vital importance for the oversight of automated systems, also hold broader lessons for efforts to recraft open government obligations in the public interest….(More)”

Facial Recognition Technology: Federal Law Enforcement Agencies Should Better Assess Privacy and Other Risks


Report by the U.S. Government Accountability Office: “GAO surveyed 42 federal agencies that employ law enforcement officers about their use of facial recognition technology. Twenty reported owning systems with facial recognition technology or using systems owned by other entities, such as other federal, state, local, and non-government entities (see figure).

Ownership and Use of Facial Recognition Technology Reported by Federal Agencies that Employ Law Enforcement Officers


Note: For more details, see figure 2 in GAO-21-518.

Agencies reported using the technology to support several activities (e.g., criminal investigations) and in response to COVID-19 (e.g., verify an individual’s identity remotely). Six agencies reported using the technology on images of the unrest, riots, or protests following the death of George Floyd in May 2020. Three agencies reported using it on images of the events at the U.S. Capitol on January 6, 2021. Agencies said the searches used images of suspected criminal activity.

All fourteen agencies that reported using the technology to support criminal investigations also reported using systems owned by non-federal entities. However, only one is aware of which non-federal systems its employees use. With a mechanism to track which non-federal systems employees use, and an assessment of the related risks (e.g., privacy and accuracy-related risks), agencies can better mitigate risks to themselves and the public….GAO is making two recommendations to each of 13 federal agencies: implement a mechanism to track which non-federal systems employees use, and assess the risks of using these systems. Twelve agencies concurred with both recommendations. The U.S. Postal Service concurred with one and partially concurred with the other. GAO continues to believe the recommendation is valid, as described in the report….(More)”.