Could AI Drive Transformative Social Progress? What Would This Require?


Paper by Edward (Ted) A. Parson et al: “In contrast to popular dystopian speculation about the societal impacts of widespread AI deployment, we consider AI’s potential to drive a social transformation toward greater human liberty, agency, and equality. The impact of AI, like all technology, will depend on both properties of the technology and the economic, social, and political conditions of its deployment and use. We identify conditions of each type – technical characteristics and socio-political context – likely to be conducive to such large-scale beneficial impacts.

Promising technical characteristics include decision-making structures that are tentative and pluralistic, rather than optimizing a single-valued objective function under a single characterization of world conditions; and configuring the decision-making of AI-enabled products and services exclusively to advance the interests of their users, subject to relevant social values, not those of their developers or vendors. We explore various strategies and business models for developing and deploying AI-enabled products that incorporate these characteristics, including philanthropic seed capital, crowd-sourcing, and open-source development, and sketch various possible ways to scale deployment thereafter….(More)”.
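
To make the contrast with single-objective optimization concrete, the toy sketch below (our illustration, not the authors' design) scores candidate actions under several value functions and several plausible world states, and keeps only those actions that no perspective rules out; all names, scores and thresholds are hypothetical.

```python
# A toy sketch of "tentative and pluralistic" decision-making: instead of
# maximizing one objective under one model of the world, score candidate actions
# under several value functions and several plausible world states, and retain
# only the actions that every perspective finds acceptable.
# All names, scores, and thresholds below are hypothetical.

from itertools import product

actions = ["recommend_a", "recommend_b", "defer_to_user"]
world_states = ["optimistic", "pessimistic"]

# Stakeholder value functions, each mapping (action, world_state) -> score in [0, 1].
value_functions = {
    "user_interest": lambda a, w: {"recommend_a": 0.6, "recommend_b": 0.8, "defer_to_user": 0.7}[a],
    "privacy":       lambda a, w: 0.9 if a == "defer_to_user" else 0.4,
    "fairness":      lambda a, w: 0.5 if (w == "pessimistic" and a == "recommend_a") else 0.8,
}

ACCEPTABLE = 0.5  # the minimum score any perspective will tolerate

def acceptable_actions():
    """Return the actions no value function rules out under any considered world state."""
    keep = []
    for a in actions:
        worst = min(v(a, w) for v, w in product(value_functions.values(), world_states))
        if worst >= ACCEPTABLE:
            keep.append((a, worst))
    return keep

# With these toy numbers only "defer_to_user" survives every perspective.
print(acceptable_actions())
```

The point is purely structural: rather than returning the argmax of one objective, such a system surfaces the set of actions that remain defensible across plural values and uncertain world models.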

Overbooked and Overlooked: Machine Learning and Racial Bias in Medical Appointment Scheduling


Paper by Michele Samorani et al: “Machine learning is often employed in appointment scheduling to identify the patients with the greatest no-show risk, so as to schedule them into overbooked slots, and thereby maximize the clinic performance, as measured by a weighted sum of all patients’ waiting time and the provider’s overtime and idle time. However, if the patients with the greatest no-show risk belong to the same demographic group, then that demographic group will be scheduled in overbooked slots disproportionately to the general population. This is problematic because patients scheduled in those slots tend to have a worse service experience than the other patients, as measured by the time they spend in the waiting room. Such negative experience may decrease patients’ engagement and, in turn, further increase no-shows. Motivated by the real-world case of a large specialty clinic whose black patients have a higher no-show probability than non-black patients, we demonstrate that combining machine learning with scheduling optimization causes racial disparity in terms of patient waiting time. Our solution to eliminate this disparity while maintaining the benefits derived from machine learning consists of explicitly including the objective of minimizing racial disparity in the scheduling optimization. We validate our solution method both on simulated data and real-world data, and find that racial disparity can be completely eliminated with no significant increase in scheduling cost when compared to the traditional predictive overbooking framework….(More)”.
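
To see roughly what such an augmented objective could look like, here is a minimal sketch, assuming a heavily simplified setting; it is our illustration of adding a disparity term to the usual weighted-sum cost, not the authors' model, and all weights and field names are hypothetical.

```python
# A rough sketch of augmenting the usual overbooking objective (a weighted sum of
# patient waiting time, provider overtime, and idle time) with a racial-disparity
# term: the gap in mean waiting time between demographic groups.
# Our illustration, not the authors' formulation; weights and field names are hypothetical.

from statistics import mean

def schedule_cost(patients, overtime, idle_time,
                  w_wait=1.0, w_overtime=1.5, w_idle=1.0, w_disparity=10.0):
    """patients: list of dicts with 'wait' (minutes) and 'group' ('black' / 'non_black')."""
    base = (w_wait * sum(p["wait"] for p in patients)
            + w_overtime * overtime
            + w_idle * idle_time)

    by_group = {}
    for p in patients:
        by_group.setdefault(p["group"], []).append(p["wait"])
    group_means = [mean(waits) for waits in by_group.values()]
    # Disparity: the gap between the best- and worst-served group's mean waiting time.
    disparity = max(group_means) - min(group_means) if len(group_means) > 1 else 0.0

    return base + w_disparity * disparity

patients = [
    {"wait": 25, "group": "black"},
    {"wait": 10, "group": "non_black"},
    {"wait": 15, "group": "non_black"},
]
print(schedule_cost(patients, overtime=20, idle_time=5))  # 85 + 10 * 12.5 = 210.0
```

In the paper itself the disparity objective is embedded in the scheduling optimization rather than computed after the fact; the sketch only shows the shape of the trade-off that the weight on disparity controls.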

Artificial Discretion as a Tool of Governance: A Framework for Understanding the Impact of Artificial Intelligence on Public Administration


Paper by Matthew M Young, Justin B Bullock, and Jesse D Lecy in Perspectives on Public Management and Governance: “Public administration research has documented a shift in the locus of discretion away from street-level bureaucrats to “systems-level bureaucracies” as a result of new information and communication technologies that automate bureaucratic processes, and thus shape access to resources and decisions around enforcement and punishment. Advances in artificial intelligence (AI) are accelerating these trends, potentially altering discretion in public management in exciting and challenging ways. We introduce the concept of “artificial discretion” as a theoretical framework to help public managers consider the impact of AI as they face decisions about whether and how to implement it. We operationalize discretion as the execution of tasks that require nontrivial decisions. Using Salamon’s tools of governance framework, we compare artificial discretion to human discretion as task specificity and environmental complexity vary. We evaluate artificial discretion with the criteria of effectiveness, efficiency, equity, manageability, and political feasibility. Our analysis suggests three principal ways that artificial discretion can improve administrative discretion at the task level: (1) increasing scalability, (2) decreasing cost, and (3) improving quality. At the same time, artificial discretion raises serious concerns with respect to equity, manageability, and political feasibility….(More)”.

Dissecting racial bias in an algorithm used to manage the health of populations


Paper by Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan in Science: “Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts….(More)”.
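
The mechanism is easy to reproduce on synthetic data. The following is our toy simulation of label-choice bias, not the paper's analysis: cost is generated as illness scaled down by unequal access to care, so even a perfect cost predictor flags fewer, and sicker, Black patients at a given cutoff.

```python
# A toy simulation (ours, not the paper's) of how choosing health-care cost as the
# prediction label can produce racial bias even when the predictor is accurate:
# if one group incurs lower cost at the same level of illness (e.g., because of
# unequal access to care), ranking patients by predicted cost understates that
# group's need. All numbers below are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["black", "white"], size=n)
illness = rng.gamma(shape=2.0, scale=1.0, size=n)          # true health need
access = np.where(group == "black", 0.7, 1.0)              # unequal access to care
cost = illness * access + rng.normal(0, 0.1, size=n)       # observed spending

# "Predict" cost perfectly (the best case for the algorithm) and flag the top decile.
threshold = np.quantile(cost, 0.9)
flagged = cost >= threshold

for g in ("black", "white"):
    mask = flagged & (group == g)
    print(g, "mean illness among flagged:", round(illness[mask].mean(), 2),
          "share flagged:", round(mask.sum() / (group == g).sum(), 3))
# At the same cost cutoff, flagged black patients are sicker than flagged white
# patients, and a smaller share of black patients is flagged: the pattern the
# paper documents with real data.
```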

New Zealand launches draft algorithm charter for government agencies


Mia Hunt at Global Government Forum: “The New Zealand government has launched a draft ‘algorithm charter’ that sets out how agencies should analyse data in a way that is fair, ethical and transparent.

The charter, which is open for public consultation, sets out 10 points that agencies would have to adhere to. These include pledging to explain how significant decisions are informed by algorithms or, where they cannot – for national security reasons, for example – to explain why; taking into account the perspectives of communities, such as LGBTQI+, Pacific islanders and people with disabilities; and identifying and consulting with groups or stakeholders with an interest in algorithm development.

Agencies would also have to publish information about how data is collected and stored; use tools and processes to ensure that privacy, ethics, and human rights considerations are integrated as part of algorithm development and procurement; and periodically assess decisions made by algorithms for unintended bias.

They would commit to implementing a “robust” peer-review process, and have to explain clearly who is responsible for automated decisions and what methods exist for challenge or appeal “via a human”….

The charter – which fits on a single page, and is designed to be simple and easily understood – explains that algorithms are a “fundamental element” of data analytics, which supports public services and delivers “new, innovative and well-targeted” policy aims.

The charter begins: “In a world where technology is moving rapidly, and artificial intelligence is on the rise, it’s essential that government has the right safeguards in place when it uses public data for decision-making. The government must ensure that data ethics are embedded in its work, and always keep in mind the people and communities being served by these tools.”

It says Stats NZ, the country’s official data agency, is “committed to transparent and accountable use of operational algorithms and other advanced data analytics techniques that inform decisions significantly impacting on individuals or groups”….(More)”.

The Economics of Artificial Intelligence


Book edited by Ajay Agrawal, Joshua Gans and Avi Goldfarb: “Advances in artificial intelligence (AI) highlight the potential of this technology to affect productivity, growth, inequality, market power, innovation, and employment. This volume seeks to set the agenda for economic research on the impact of AI.

It covers four broad themes: AI as a general purpose technology; the relationships between AI, growth, jobs, and inequality; regulatory responses to changes brought on by AI; and the effects of AI on the way economic research is conducted. It explores the economic influence of machine learning, the branch of computational statistics that has driven much of the recent excitement around AI, as well as the economic impact of robotics and automation and the potential economic consequences of a still-hypothetical artificial general intelligence. The volume provides frameworks for understanding the economic impact of AI and identifies a number of open research questions…. (More)”

Ethical guidelines issued by engineers’ organization fail to gain traction


Blogpost by Nicolas Kayser-Bril: “In early 2016, the Institute of Electrical and Electronics Engineers, a professional association known as IEEE, launched a “global initiative to advance ethics in technology.” After almost three years of work and multiple rounds of exchange with experts on the topic, it released last April the first edition of Ethically Aligned Design, a 300-page treatise on the ethics of automated systems.

The general principles issued in the report focus on transparency, human rights and accountability, among other topics. As such, they are not very different from the 83 other ethical guidelines that researchers from the Health Ethics and Policy Lab of the Swiss Federal Institute of Technology in Zurich reviewed in an article published in Nature Machine Intelligence in September. However, one key aspect makes IEEE different from other think-tanks. With over 420,000 members, it is the world’s largest engineers’ association with roots reaching deep into Silicon Valley. Vint Cerf, one of Google’s Vice Presidents, is an IEEE “life fellow.”

Because the purpose of the IEEE principles is to serve as a “key reference for the work of technologists”, and because many technologists contributed to their conception, we wanted to know how three technology companies, Facebook, Google and Twitter, were planning to implement them.

Transparency and accountability

Principle number 5, for instance, requires that the basis of a particular automated decision be “discoverable”. On Facebook and Instagram, the reasons why a particular item is shown on a user’s feed are anything but discoverable. Facebook’s “Why You’re Seeing This Post” feature explains that “many factors” are involved in the decision to show a specific item. The help page designed to clarify the matter fails to do so: many sentences there use opaque wording (users are told that “some things influence ranking”, for instance) and the basis of the decisions governing their newsfeeds is impossible to find.

Principle number 6 states that any autonomous system shall “provide an unambiguous rationale for all decisions made.” Google’s advertising systems do not provide an unambiguous rationale when explaining why a particular advert was shown to a user. A click on “Why This Ad” states that an “ad may be based on general factors … [and] information collected by the publisher” (our emphasis). Such vagueness is antithetical to the requirement for explicitness.

AlgorithmWatch sent detailed letters (which you can read below this article) with these examples and more, asking Google, Facebook and Twitter how they planned to implement the IEEE guidelines. This was in June. After a great many emails, phone calls and personal meetings, only Twitter answered. Google gave a vague comment and Facebook promised an answer which never came…(More)”

Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations


Paper by Margot E. Kaminski and Gianclaudio Malgieri: “Policy-makers, scholars, and commentators are increasingly concerned with the risks of using profiling algorithms and automated decision-making. The EU’s General Data Protection Regulation (GDPR) has tried to address these concerns through an array of regulatory tools. As one of us has argued, the GDPR combines individual rights with systemic governance, towards algorithmic accountability. The individual tools are largely geared towards individual “legibility”: making the decision-making system understandable to an individual invoking her rights. The systemic governance tools, instead, focus on bringing expertise and oversight into the system as a whole, and rely on the tactics of “collaborative governance,” that is, the use of public-private partnerships towards these goals. How these two approaches to transparency and accountability interact remains a largely unexplored question, with much of the legal literature focusing instead on whether there is an individual right to explanation.

The GDPR contains an array of systemic accountability tools. Of these tools, impact assessments (Art. 35) have recently received particular attention on both sides of the Atlantic, as a means of implementing algorithmic accountability at early stages of design, development, and training. The aim of this paper is to address how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR’s approach to algorithmic accountability: individual rights and systemic collaborative governance. We address the relationship between DPIAs and individual transparency rights. We propose, too, that impact assessments link the GDPR’s two methods of governing algorithmic decision-making by both providing systemic governance and serving as an important “suitable safeguard” (Art. 22) of individual rights….(More)”.

A fairer way forward for AI in health care


Linda Nordling at Nature: “When data scientists in Chicago, Illinois, set out to test whether a machine-learning algorithm could predict how long people would stay in hospital, they thought that they were doing everyone a favour. Keeping people in hospital is expensive, and if managers knew which patients were most likely to be eligible for discharge, they could move them to the top of doctors’ priority lists to avoid unnecessary delays. It would be a win–win situation: the hospital would save money and people could leave as soon as possible.

Starting their work at the end of 2017, the scientists trained their algorithm on patient data from the University of Chicago academic hospital system. Taking data from the previous three years, they crunched the numbers to see what combination of factors best predicted length of stay. At first they only looked at clinical data. But when they expanded their analysis to other patient information, they discovered that one of the best predictors for length of stay was the person’s postal code. This was puzzling. What did the duration of a person’s stay in hospital have to do with where they lived?

As the researchers dug deeper, they became increasingly concerned. The postal codes that correlated with longer hospital stays were in poor and predominantly African American neighbourhoods. People from these areas stayed in hospitals longer than did those from more affluent, predominantly white areas. The reason for this disparity evaded the team. Perhaps people from the poorer areas were admitted with more severe conditions. Or perhaps they were less likely to be prescribed the drugs they needed.
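
One way teams can surface this kind of proxy before deployment is a simple audit: for each candidate feature, measure both how strongly it predicts the outcome and how strongly it tracks a protected attribute. The sketch below runs that check on synthetic data; it is our illustration, not the Chicago team's code, and all feature names and thresholds are hypothetical.

```python
# A simple proxy-feature audit on synthetic data (our illustration, not the
# Chicago team's code): for each candidate predictor, measure (a) how strongly it
# correlates with the outcome (length of stay) and (b) how strongly it tracks a
# protected attribute. Features scoring high on both deserve scrutiny.

import numpy as np

rng = np.random.default_rng(1)
n = 5_000
protected = rng.integers(0, 2, size=n)                        # e.g., neighbourhood demographics
postcode_deprivation = 0.8 * protected + rng.normal(0, 0.3, size=n)
severity = rng.normal(0, 1, size=n)                           # clinical severity at admission
length_of_stay = 3 + severity + 1.2 * postcode_deprivation + rng.normal(0, 0.5, size=n)

features = {"severity": severity, "postcode_deprivation": postcode_deprivation}

for name, values in features.items():
    r_outcome = np.corrcoef(values, length_of_stay)[0, 1]
    r_protected = np.corrcoef(values, protected)[0, 1]
    flag = "POTENTIAL PROXY" if abs(r_outcome) > 0.3 and abs(r_protected) > 0.3 else "ok"
    print(f"{name:22s} corr(outcome)={r_outcome:+.2f}  corr(protected)={r_protected:+.2f}  {flag}")
```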

The finding threw up an ethical conundrum. If optimizing hospital resources was the sole aim of their programme, people’s postal codes would clearly be a powerful predictor for length of hospital stay. But using them would, in practice, divert hospital resources away from poor, black people towards wealthy white people, exacerbating existing biases in the system.

“The initial goal was efficiency, which in isolation is a worthy goal,” says Marshall Chin, who studies health-care ethics at University of Chicago Medicine and was one of the scientists who worked on the project. But fairness is also important, he says, and this was not explicitly considered in the algorithm’s design….(More)”.

The Algorithmic Divide and Equality in the Age of Artificial Intelligence


Paper by Peter Yu: “In the age of artificial intelligence, highly sophisticated algorithms have been deployed to detect patterns, optimize solutions, facilitate self-learning, and foster improvements in technological products and services. Notwithstanding these tremendous benefits, algorithms and intelligent machines do not provide equal benefits to all. Just as the digital divide has separated those with access to the Internet, information technology, and digital content from those without, an emerging and ever-widening algorithmic divide now threatens to take away the many political, social, economic, cultural, educational, and career opportunities provided by machine learning and artificial intelligence.

Although policymakers, commentators, and the mass media have paid growing attention to algorithmic bias and the shortcomings of machine learning and artificial intelligence, the algorithmic divide has yet to attract much policy and scholarly attention. To fill this lacuna, this article draws on the digital divide literature to systematically analyze this new inequitable gap between the technology haves and have-nots. Utilizing the analytical framework that the Author developed in the early 2000s, the article discusses the five attributes of the algorithmic divide: awareness, access, affordability, availability, and adaptability.

This article then turns to three major problems precipitated by an emerging and fast-expanding algorithmic divide: (1) algorithmic deprivation; (2) algorithmic discrimination; and (3) algorithmic distortion. While the first two problems affect primarily those on the unfortunate side of the algorithmic divide, the third affects individuals on both sides of the divide. This article concludes by proposing seven nonexhaustive clusters of remedial actions to help bridge this emerging and ever-widening algorithmic divide. Combining law, communications policy, ethical principles, institutional mechanisms, and business practices, the article fashions a holistic response to help foster equality in the age of artificial intelligence….(More)”.