Looking Under the Hood of AI’s Dubious Models


Essay by Ethan Edwards: “In 2018, McKinsey Global Institute released “Notes from the AI Frontier,” a report that seeks to predict the economic impact of artificial intelligence. Looming over the report is the question of how the changing nature of work might transform society and pose challenges for policymakers. The good news is that the experts at McKinsey think that automation will create more jobs than it eliminates, but obviously it’s not a simple question. And the answer they give rests on sophisticated econometric models that include a variety of qualifications and estimates. Such models are necessarily simplified, and even reductionistic, but are they useful? And for whom?

Without a doubt, when it comes to predictive modeling, the center of the action in our society—and the industry through which intense entrepreneurial energy and venture capital flows—is artificial intelligence itself. AI, of course, is nothing new. A subdiscipline dedicated to mimicking human capacities in sensing, language, and thought, it’s nearly as old as computer science itself. But for the last ten years or so the promise and the hype of AI have only accelerated. The most impressive results have come from something called “neural nets,” which use linear algebra to mimic some of the biological structures of our brain cells and have been combined with far better hardware developed for video games. In only a few years, neural nets have revolutionized image processing, language processing, audio analysis, and media recommendation. The hype is that they can do far more.
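To make the essay’s point about linear algebra concrete, here is a minimal sketch of a neural net’s forward pass — not anything from the essay itself; every name and size below is invented for illustration. Each artificial “neuron” is just a weighted sum of inputs passed through a simple nonlinearity, and it is exactly these matrix operations that video-game hardware happens to execute very fast.

```python
import numpy as np

# Illustrative two-layer "neural net" forward pass (all names and sizes
# invented): each neuron is a weighted sum of inputs plus a nonlinearity.
rng = np.random.default_rng(0)

def dense_layer(x, W, b):
    """One layer: linear algebra (W @ x + b), then a ReLU 'activation'."""
    return np.maximum(0.0, W @ x + b)

x = rng.normal(size=4)                          # 4 input features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # layer 1: 4 -> 8 units
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)   # layer 2: 8 -> 2 outputs

hidden = dense_layer(x, W1, b1)
output = W2 @ hidden + b2                       # final linear readout
print(output)
```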

If we are—as many promoters assert—close to AIs that can do everything a human knowledge worker can and more, that is obviously a disruptive, even revolutionary, prospect. It’s also a claim that has turned on the spigot of investment capital. And that’s one reason it’s difficult to know the true potential of the industry. Talking about AI is a winning formula for startups, researchers, and anyone who wants funding, enough that the term AI gets used for more than just neural nets and is now a label for computer-based automation in general. Older methods that have nothing to do with the new boom have been rebranded under AI. Think tanks and universities are hosting seminars on the impact of AI on fields on which it has so far had no impact. Some startups that have built their company’s future profitability on the promise of their AI systems have actually had to hire low-wage humans to act like the hoped-for intelligences for customers and investors while they wait for the technology to catch up. Such hype produces a funhouse-mirror effect that distorts the potential and therefore the value of firms and all but guarantees that some startups will squander valuable resources with broken (or empty) promises. But as long as some companies do keep their promises, it’s a gamble that many investors are still willing to take….(More)”.

Co-design and Ethical Artificial Intelligence for Health: Myths and Misconceptions


Paper by Joseph Donia and Jay Shaw: “Applications of artificial intelligence / machine learning (AI/ML) are dynamic and rapidly growing, and, although multi-purpose, are particularly consequential in health care. One strategy for anticipating and addressing ethical challenges related to AI/ML for health care is co-design – or involvement of end users in design. Co-design has a diverse intellectual and practical history, however, and has been conceptualized in many different ways. Moreover, the unique features of AI/ML introduce challenges to co-design that are often underappreciated. This review summarizes the research literature on involvement in health care and design, and, informed by critical data studies, examines the extent to which co-design as commonly conceptualized is capable of addressing the range of normative issues raised by AI/ML for health. We suggest that AI/ML technologies have amplified existing challenges related to co-design, and created entirely new ones. We outline five co-design ‘myths and misconceptions’ related to AI/ML for health that form the basis for future research and practice. We conclude by suggesting that the normative strength of a co-design approach to AI/ML for health can be considered at three levels: technological, health care system, and societal. We also suggest research directions for a ‘new era’ of co-design capable of addressing these challenges….(More)”.

Philanthropy Can Help Communities Weed Out Inequity in Automated Decision Making Tools


Article by Chris Kingsley and Stephen Plank: “Two very different stories illustrate the impact of sophisticated decision-making tools on individuals and communities. In one, the Los Angeles Police Department publicly abandoned a program that used data to target violent offenders after residents in some neighborhoods were stopped by police as many as 30 times per week. In the other, New York City deployed data to root out landlords who discriminated against tenants using housing vouchers.

The second story shows the potential of automated data tools to promote social good — even as the first illustrates their potential for great harm.

Tools like these — typically described broadly as artificial intelligence or somewhat more narrowly as predictive analytics, which incorporates more human decision making in the data collection process — increasingly influence and automate decisions that affect people’s lives. This includes which families are investigated by child protective services, where police deploy, whether loan officers extend credit, and which job applications a hiring manager receives.

How these tools are built, used, and governed will help shape the opportunities of everyday citizens, for good or ill.

Civil-rights advocates are right to worry about the harm such technology can do by hardwiring bias into decision making. At the Annie E. Casey Foundation, where we fund and support data-focused efforts, we consulted with civil-rights groups, data scientists, government leaders, and family advocates to learn more about what needs to be done to weed out bias and inequities in automated decision-making tools — and recently produced a report about how to harness their potential to promote equity and social good.

Foundations and nonprofit organizations can play vital roles in ensuring equitable use of A.I. and other data technology. Here are four areas in which philanthropy can make a difference:

Support the development and use of transparent data tools. The public has a right to know how A.I. is being used to influence policy decisions, including whether those tools were independently validated and who is responsible for addressing concerns about how they work. Grant makers should avoid supporting private algorithms whose design and performance are shielded by trade-secrecy claims. Despite calls from advocates, some companies have declined to disclose details that would allow the public to assess their fairness….(More)”

The Society of Algorithms


Paper by Jenna Burrell and Marion Fourcade: “The pairing of massive data sets with processes—or algorithms—written in computer code to sort through, organize, extract, or mine them has made inroads in almost every major social institution. This article proposes a reading of the scholarly literature concerned with the social implications of this transformation. First, we discuss the rise of a new occupational class, which we call the coding elite. This group has consolidated power through their technical control over the digital means of production and by extracting labor from a newly marginalized or unpaid workforce, the cybertariat. Second, we show that the implementation of techniques of mathematical optimization across domains as varied as education, medicine, credit and finance, and criminal justice has intensified the dominance of actuarial logics of decision-making, potentially transforming pathways to social reproduction and mobility but also generating a pushback by those so governed. Third, we explore how the same pervasive algorithmic intermediation in digital communication is transforming the way people interact, associate, and think. We conclude by cautioning against the wildest promises of artificial intelligence but acknowledging the increasingly tight coupling between algorithmic processes, social structures, and subjectivities….(More)”.

Ethical Governance of Artificial Intelligence in the Public Sector


Book by Liza Ireni-Saban and Maya Sherman: “This book argues that ethical evaluation of AI should be an integral part of public service ethics and that an effective normative framework is needed to provide ethical principles and evaluation for decision-making in the public sphere, at both local and international levels.

It shows how the tenets of prudential rationality ethics, through critical engagement with intersectionality, can help negotiate the challenges created by technological innovation in AI, affording a relational, interactive, flexible, and fluid framework suited to the features of AI research projects, so that core public and individual values are still honoured in the face of technological development….(More)”.

Hundreds of AI tools have been built to catch covid. None of them helped.


Article by Will Douglas Heaven: “When covid-19 struck Europe in March 2020, hospitals were plunged into a health crisis that was still badly understood. “Doctors really didn’t have a clue how to manage these patients,” says Laure Wynants, an epidemiologist at Maastricht University in the Netherlands, who studies predictive tools.

But there was data coming out of China, which had a four-month head start in the race to beat the pandemic. If machine-learning algorithms could be trained on that data to help doctors understand what they were seeing and make decisions, it just might save lives. “I thought, ‘If there’s any time that AI could prove its usefulness, it’s now,’” says Wynants. “I had my hopes up.”

It never happened—but not for lack of effort. Research teams around the world stepped up to help. The AI community, in particular, rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines—in theory.

In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.

That’s the damning conclusion of multiple studies published in the last few months. In June, the Turing Institute, the UK’s national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid.

Not fit for clinical use

This echoes the results of two major studies that assessed hundreds of predictive tools developed last year. Wynants is lead author of one of them, a review in the British Medical Journal that is still being updated as new tools are released and existing ones tested. She and her colleagues have looked at 232 algorithms for diagnosing patients or predicting how sick those with the disease might get. They found that none of them were fit for clinical use. Just two have been singled out as being promising enough for future testing.

“It’s shocking,” says Wynants. “I went into it with some worries, but this exceeded my fears.”

Wynants’s study is backed up by another large review carried out by Derek Driggs, a machine-learning researcher at the University of Cambridge, and his colleagues, and published in Nature Machine Intelligence. This team zoomed in on deep-learning models for diagnosing covid and predicting patient risk from medical images, such as chest x-rays and chest computed tomography (CT) scans. They looked at 415 published tools and, like Wynants and her colleagues, concluded that none were fit for clinical use…..(More)”.
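For a sense of what these reviews were assessing, the tools were typically deep-learning image classifiers along the following schematic lines. This is a hedged sketch, not any specific reviewed model — the architecture, layer sizes, and names are invented for illustration.

```python
import torch
import torch.nn as nn

# Schematic stand-in for the kind of diagnostic model the reviews assessed:
# a small CNN mapping a single-channel chest x-ray to a covid/non-covid
# score. Architecture and sizes are illustrative, not any reviewed model.
class ChestXrayNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 224x224 -> 112x112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # global average pool
        )
        self.classifier = nn.Linear(32, 1)   # single diagnostic logit

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ChestXrayNet()
scans = torch.randn(4, 1, 224, 224)          # a fake batch of 4 scans
probs = torch.sigmoid(model(scans))          # per-scan "P(covid)"
```

As both reviews stress, a model of this general shape is easy to build — which is partly why hundreds appeared — yet none proved fit for clinical use.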

A comprehensive study of technological change


Article by Scott Murray: “The societal impacts of technological change can be seen in many domains, from messenger RNA vaccines and automation to drones and climate change. The pace of that technological change can affect its impact, and how quickly a technology improves in performance can be an indicator of its future importance. For decision-makers like investors, entrepreneurs, and policymakers, predicting which technologies are fast improving (and which are overhyped) can mean the difference between success and failure.

New research from MIT aims to assist in the prediction of technology performance improvement using U.S. patents as a dataset. The study describes 97 percent of the U.S. patent system as a set of 1,757 discrete technology domains, and quantitatively assesses each domain for its improvement potential.

“The rate of improvement can only be empirically estimated when substantial performance measurements are made over long time periods,” says Anuraag Singh SM ’20, lead author of the paper. “In some large technological fields, including software and clinical medicine, such measures have rarely, if ever, been made.”

A previous MIT study provided empirical measures for 30 technological domains, but the patent sets identified for those technologies cover less than 15 percent of the patents in the U.S. patent system. The major purpose of this new study is to provide predictions of the performance improvement rates for the thousands of domains not assessed by empirical measurement. To accomplish this, the researchers developed a method using a new probability-based algorithm, machine learning, natural language processing, and patent network analytics….(More)”.
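The “empirical estimation” Singh describes can be illustrated simply: an improvement rate is typically the slope of a log-linear (i.e., exponential) fit to performance measured over time. A minimal sketch — the data points below are invented, not from the study:

```python
import numpy as np

# Illustrative only: estimate a technology's yearly improvement rate by
# fitting an exponential trend (linear in log space) to performance
# measurements over time. These data points are made up.
years = np.array([2000, 2004, 2008, 2012, 2016, 2020])
performance = np.array([1.0, 2.1, 4.3, 8.8, 17.5, 36.0])  # arbitrary units

slope, _ = np.polyfit(years, np.log(performance), 1)
annual_rate = np.exp(slope) - 1
print(f"Estimated improvement: {annual_rate:.1%} per year")  # ~20%/year
```

The new study’s contribution is to predict such rates from patent data alone, for the thousands of domains where no one has gathered performance series like this.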

Machine Learning and Mobile Phone Data Can Improve the Targeting of Humanitarian Assistance


Paper by Emily Aiken et al: “The COVID-19 pandemic has devastated many low- and middle-income countries (LMICs), causing widespread food insecurity and a sharp decline in living standards. In response to this crisis, governments and humanitarian organizations worldwide have mobilized targeted social assistance programs. Targeting is a central challenge in the administration of these programs: given available data, how does one rapidly identify the individuals and families with the greatest need? This challenge is particularly acute in the large number of LMICs that lack recent and comprehensive data on household income and wealth.

Here we show that non-traditional “big” data from satellites and mobile phone networks can improve the targeting of anti-poverty programs. Our approach uses traditional survey-based measures of consumption and wealth to train machine learning algorithms that recognize patterns of poverty in non-traditional data; the trained algorithms are then used to prioritize aid to the poorest regions and mobile subscribers. We evaluate this approach by studying Novissi, Togo’s flagship emergency cash transfer program, which used these algorithms to determine eligibility for a rural assistance program that disbursed millions of dollars in COVID-19 relief aid. Our analysis compares outcomes – including exclusion errors, total social welfare, and measures of fairness – under different targeting regimes. Relative to the geographic targeting options considered by the Government of Togo at the time, the machine learning approach reduces errors of exclusion by 4-21%. Relative to methods that require a comprehensive social registry (a hypothetical exercise; no such registry exists in Togo), the machine learning approach increases exclusion errors by 9-35%. These results highlight the potential for new data sources to contribute to humanitarian response efforts, particularly in crisis settings when traditional data are missing or out of date….(More)”.
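Schematically, the targeting pipeline the paper describes looks like the sketch below. The feature names, model choice, and eligibility cutoff are all placeholders — the actual study uses far richer phone-metadata features and its own models.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Schematic version of the described pipeline, on synthetic data:
# (1) train on survey-measured consumption, (2) predict poverty from
# phone-derived features, (3) prioritize aid to the predicted-poorest.
rng = np.random.default_rng(42)

# Stand-ins for features derived from mobile phone metadata
# (e.g. call volume, top-up amounts, mobility) for surveyed households.
phone_features = rng.normal(size=(1000, 3))
survey_consumption = phone_features @ np.array([0.5, 0.3, 0.2]) \
    + rng.normal(scale=0.5, size=1000)

model = GradientBoostingRegressor().fit(phone_features, survey_consumption)

# Score subscribers who were never surveyed; pay the predicted-poorest
# fraction (30% here is a placeholder for a budget-driven cutoff).
all_subscribers = rng.normal(size=(5000, 3))
predicted = model.predict(all_subscribers)
eligible = predicted <= np.quantile(predicted, 0.30)
print(f"{eligible.sum()} of {len(eligible)} subscribers targeted")
```

The exclusion errors the paper compares across targeting regimes are then simply the share of truly poor individuals whom a given rule fails to reach.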

Governing smart cities: policy benchmarks for ethical and responsible smart city development


Report by the World Economic Forum: “… provides a benchmark for cities looking to establish policies for ethical and responsible governance of their smart city programmes. It explores current practices relating to five foundational policies: ICT accessibility, privacy impact assessment, cyber accountability, digital infrastructure and open data. The findings are based on surveys and interviews with policy experts and city government officials from the Alliance’s 36 “Pioneer Cities”. The data and insights presented in the report come from an assessment of detailed policy elements rather than the high-level indicators often used in maturity frameworks….(More)”.

When Machines Can Be Judge, Jury, and Executioner


Book by Katherine B. Forrest on “Justice in the Age of Artificial Intelligence”: “This book explores justice in the age of artificial intelligence. It argues that current AI tools used in connection with liberty decisions are based on utilitarian frameworks of justice and are inconsistent with the individual fairness reflected in the US Constitution and Declaration of Independence. It uses AI risk assessment tools and lethal autonomous weapons as examples of how AI influences liberty decisions. The algorithmic design of AI risk assessment tools can and does embed human biases. Designers and users of these AI tools have allowed some degree of compromise to exist between accuracy and individual fairness.

Written by a former federal judge who lectures widely and frequently on AI and the justice system, this book is the first comprehensive presentation of the theoretical framework of AI tools in the criminal justice system and lethal autonomous weapons utilized in decision-making. The book then provides a comprehensive explanation of why these biases arise, tracing the evolution of the debate regarding racial and other biases embedded in such tools. No other book delves as comprehensively into the theory and practice of AI risk assessment tools….(More)”.
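The accuracy-versus-individual-fairness compromise Forrest describes can be made concrete with a simple audit: compute error rates separately per group, since a risk tool can be accurate overall while erroneously flagging one group as “high risk” far more often. A minimal sketch on invented data — not any real tool or dataset:

```python
import numpy as np

# Illustrative audit of a hypothetical risk-assessment tool: overall
# accuracy can mask very different false-positive rates across groups.
# All data below is synthetic and invented for demonstration.
rng = np.random.default_rng(7)
n = 10_000
group = rng.integers(0, 2, size=n)        # two demographic groups
reoffended = rng.random(n) < 0.3          # true outcomes
# A biased score: inflated for group 1 regardless of the true outcome.
score = rng.random(n) + 0.15 * group + 0.3 * reoffended
flagged = score > 0.75                    # the tool's "high risk" call

for g in (0, 1):
    innocent = (group == g) & ~reoffended
    fpr = flagged[innocent].mean()        # flagged despite not reoffending
    print(f"group {g}: false-positive rate = {fpr:.1%}")
```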