The Reasonable Robot
Book by Ryan Abbott: “AI and people do not compete on a level playing field. Self-driving vehicles may be safer than human drivers, but laws often penalize such technology. People may provide superior customer service, but businesses are automating to reduce their taxes. AI may innovate more effectively, but an antiquated legal framework constrains inventive AI. In The Reasonable Robot, Ryan Abbott argues that the law should not discriminate between AI and human behavior and proposes a new legal principle that will ultimately improve human well-being. This work should be read by anyone interested in the rapidly evolving relationship between AI and the law….(More)”.
Challenging the Use of Algorithm-driven Decision-making in Benefits Determinations Affecting People with Disabilities
Paper by Lydia X. Z. Brown, Michelle Richardson, Ridhi Shetty, and Andrew Crawford: “Governments are increasingly turning to algorithms to determine whether and to what extent people should receive crucial benefits for programs like Medicaid, Medicare, unemployment, and Social Security Disability. Billed as a way to increase efficiency and root out fraud, these algorithm-driven decision-making tools are often implemented without much public debate and are incredibly difficult to understand once underway. Reports from people on the ground confirm that the tools are frequently reducing and denying benefits, often with unfair and inhumane results.
Benefits recipients are challenging these tools in court, arguing that flaws in the programs’ design or execution violate their due process rights, among other claims. These cases are some of the few active courtroom challenges to algorithm-driven decision-making, producing important precedent about people’s right to notice, explanation, and other procedural due process safeguards when algorithm-driven decisions are made about them. As the legal and policy world continues to recognize the outsized impact of algorithm-driven decision-making in various aspects of our lives, public benefits cases provide important insights into how such tools can operate; the risks of errors in design and execution; and the devastating human toll when tools are adopted without effective notice, input, oversight, and accountability.
This report analyzes lawsuits that have been filed within the past 10 years arising from the use of algorithm-driven systems to assess people’s eligibility for, or the distribution of, public benefits. It identifies key insights from the various cases into what went wrong and analyzes the legal arguments that plaintiffs have used to challenge those systems in court. It draws on direct interviews with attorneys who have litigated these cases and plaintiffs who sought to vindicate their rights in court – in some instances suing not only for themselves, but on behalf of similarly situated people. The attorneys work in legal aid offices, civil rights litigation shops, law school clinics, and disability protection and advocacy offices. The cases cover a range of benefits issues and have netted mixed results.
People with disabilities experience disproportionate and particular harm because of unjust algorithm-driven decision-making, and we have attempted to center disabled people’s stories and cases in this paper. As disabled people fight for rights inside and outside the courtroom on a wide range of issues, we focus on litigation and highlight the major legal theories for challenging improper algorithm-driven benefit denials in the U.S.
The good news is that in some cases, plaintiffs are successfully challenging improper adverse benefits decisions with constitutional, statutory, and administrative claims. The bad news is that, as with other forms of civil rights and impact litigation, relief can be temporary and is almost always delayed. Litigation must therefore work in tandem with the development of new processes driven by people who require access to public assistance and whose needs are centered in these processes. We hope this contribution informs not only the development of effective litigation, but a broader public conversation about the thoughtful design, use, and oversight of algorithm-driven decision-making systems….(More)”.
Artificial intelligence, transparency, and public decision-making
Paper by Karl de Fine Licht & Jenny de Fine Licht: “The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively made decisions to fears of the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency in how the general public comes to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers, and develops a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms full transparency would bring….(More)”.
Automating Society Report 2020
Bertelsmann Stiftung: “When launching the first edition of this report, we decided to call it “Automating Society”, as ADM systems in Europe were mostly new, experimental, and unmapped – and, above all, the exception rather than the norm.
This situation has changed rapidly, as clearly shown by the more than 100 use cases of automated decision-making systems in 16 European countries compiled by a research network for the 2020 edition of the Automating Society report by Bertelsmann Stiftung and AlgorithmWatch. The report shows that even though algorithmic systems are increasingly being used by public administration and private companies, there is still a lack of transparency, oversight, and competence.
The stubborn opacity surrounding the ever-increasing use of ADM systems has made it all the more urgent that we continue to increase our efforts. Therefore, we have added four countries (Estonia, Greece, Portugal, and Switzerland) to the 12 we already analyzed in the previous edition of this report, bringing the total to 16 countries. While far from exhaustive, this allows us to provide a broader picture of the ADM scenario in Europe. Considering the impact these systems may have on everyday life, and how profoundly they challenge our intuitions – if not our norms and rules – about the relationship between democratic governance and automation, we believe this is an essential endeavor….(More)”.
Algorithm Tips
About: “Algorithm Tips is here to help you start investigating algorithmic decision-making power in society.
This site offers a database of leads that you can search and filter. It’s a curated set of algorithms being used across the US government at the federal, state, and local levels. You can subscribe to alerts for when new algorithms matching your interests are found. For details on our curation methodology, see here.
We also provide resources such as example investigations, methodological tips, and guidelines for public records requests related to algorithms.
Finally, we blog about some of the more interesting examples of algorithms we’ve uncovered in our research….(More)”.
Understanding Bias in Facial Recognition Technologies
Paper by David Leslie: “Over the past couple of years, the growing debate around automated facial recognition has reached a boiling point. As developers have continued to swiftly expand the scope of these kinds of technologies into an almost unbounded range of applications, an increasingly strident chorus of critical voices has sounded concerns about the injurious effects of the proliferation of such systems on impacted individuals and communities.
Opponents argue that the irresponsible design and use of facial detection and recognition technologies (FDRTs) threatens to violate civil liberties, infringe on basic human rights and further entrench structural racism and systemic marginalisation. They also caution that the gradual creep of face surveillance infrastructures into every domain of lived experience may eventually eradicate the modern democratic forms of life that have long provided cherished means to individual flourishing, social solidarity and human self-creation. Defenders, by contrast, emphasise the gains in public safety, security and efficiency that digitally streamlined capacities for facial identification, identity verification and trait characterisation may bring.
In this explainer, I focus on one central aspect of this debate: the role that dynamics of bias and discrimination play in the development and deployment of FDRTs. I examine how historical patterns of discrimination have made inroads into the design and implementation of FDRTs from their very earliest moments, and I explain the ways in which the use of biased FDRTs can lead to distributional and recognitional injustices. I also describe how certain complacent attitudes of innovators and users toward redressing these harms raise serious concerns about expanding future adoption. The explainer concludes with an exploration of broader ethical questions around the potential proliferation of pervasive face-based surveillance infrastructures and makes some recommendations for cultivating more responsible approaches to the development and governance of these technologies….(More)”.
High-Stakes AI Decisions Need to Be Automatically Audited
Oren Etzioni and Michael Li in Wired: “…To achieve increased transparency, we advocate for auditable AI, an AI system that is queried externally with hypothetical cases. Those hypothetical cases can be either synthetic or real—allowing automated, instantaneous, fine-grained interrogation of the model. It’s a straightforward way to monitor AI systems for signs of bias or brittleness: What happens if we change the gender of a defendant? What happens if the loan applicants reside in a historically minority neighborhood?
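A minimal sketch of what such a query-based audit could look like in practice is below; the decision model, field names, and zip codes are hypothetical stand-ins, not any real system's interface.

```python
# Illustrative counterfactual audit of a black-box decision system.
# The model, field names, and zip codes below are hypothetical stand-ins.
FLAGGED_ZIPS = {"60620", "60628"}  # hypothetical historically minority neighborhoods

def predict(applicant: dict) -> bool:
    """Stand-in for the opaque system under audit: approve or deny a loan."""
    threshold = 60_000 if applicant["zip"] in FLAGGED_ZIPS else 40_000
    return applicant["income"] >= threshold

def audit(cases, attribute, alt_value):
    """Re-query the system with one attribute changed and report how often the decision flips."""
    flips = sum(
        predict(case) != predict({**case, attribute: alt_value})
        for case in cases
    )
    return flips / len(cases)

# Synthetic applicants who all live outside the flagged neighborhoods.
cases = [{"income": 50_000 + 1_000 * i, "zip": "60601"} for i in range(30)]
print(f"Decisions that flip when only the neighborhood changes: "
      f"{audit(cases, 'zip', '60620'):.0%}")
```

Because the audit needs only query access, a neutral third party can run it without ever seeing the model's internals.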
Auditable AI has several advantages over explainable AI. First, having a neutral third party investigate these questions is a far better check on bias than explanations controlled by the algorithm’s creator. Second, auditing does not require the producers of the software to expose the trade secrets of their proprietary systems and data sets. Thus, AI audits will likely face less resistance.
Auditing is complementary to explanations. In fact, auditing can help to investigate and validate (or invalidate) AI explanations. Say Netflix recommends The Twilight Zone because I watched Stranger Things. Will it also recommend other science fiction horror shows? Does it recommend The Twilight Zone to everyone who’s watched Stranger Things?
Early examples of auditable AI are already having a positive impact. The ACLU recently revealed that Amazon’s auditable facial-recognition algorithms were nearly twice as likely to misidentify people of color. There is growing evidence that public audits can improve model accuracy for under-represented groups.
In the future, we can envision a robust ecosystem of auditing systems that provide insights into AI. We can even imagine “AI guardians” that build external models of AI systems based on audits. Instead of requiring AI systems to provide low-fidelity explanations, regulators can insist that AI systems used for high-stakes decisions provide auditing interfaces.
Auditable AI is not a panacea. If an AI system is performing a cancer diagnostic, the patient will still want an accurate and understandable explanation, not just an audit. Such explanations are the subject of ongoing research and will hopefully be ready for commercial use in the near future. But in the meantime, auditable AI can increase transparency and combat bias….(More)”.
When Do We Trust AI’s Recommendations More Than People’s?
Chiara Longoni and Luca Cian in Harvard Business Review: “More and more companies are leveraging technological advances in machine learning, natural language processing, and other forms of artificial intelligence to provide relevant and instant recommendations to consumers. From Amazon to Netflix to REX Real Estate, firms are using AI recommenders to enhance the customer experience. AI recommenders are also increasingly used in the public sector to guide people to essential services. For example, the New York City Department of Social Services uses AI to give citizens recommendations on disability benefits, food assistance, and health insurance.
However, simply offering AI assistance won’t necessarily lead to more successful transactions. In fact, there are cases when AI’s suggestions and recommendations are helpful and cases when they might be detrimental. When do consumers trust the word of a machine, and when do they resist it? Our research suggests that the key factor is whether consumers are focused on the functional and practical aspects of a product (its utilitarian value) or focused on the experiential and sensory aspects of a product (its hedonic value).
In an article in the Journal of Marketing, based on data from over 3,000 people who took part in 10 experiments, we provide evidence for what we call the word-of-machine effect: the circumstances in which people prefer AI recommenders to human ones.
The word-of-machine effect.
The word-of-machine effect stems from a widespread belief that AI systems are more competent than humans at dispensing advice when utilitarian qualities are desired, and less competent when hedonic qualities are desired. Importantly, the word-of-machine effect is based on a lay belief that does not necessarily correspond to reality. The fact of the matter is that humans are not necessarily less competent than AI at assessing and evaluating utilitarian attributes, and AI is not necessarily less competent than humans at assessing and evaluating hedonic attributes….(More)”.
UK passport photo checker shows bias against dark-skinned women
Maryam Ahmed at BBC News: “Women with darker skin are more than twice as likely as lighter-skinned men to be told their photos fail UK passport rules when they submit them online, according to a BBC investigation.
One black student said she was wrongly told her mouth looked open each time she uploaded five different photos to the government website.
This shows how “systemic racism” can spread, Elaine Owusu said.
The Home Office said the tool helped users get their passports more quickly.
“The indicative check [helps] our customers to submit a photo that is right the first time,” said a spokeswoman.
“Over nine million people have used this service and our systems are improving.
“We will continue to develop and evaluate our systems with the objective of making applying for a passport as simple as possible for all.”
Skin colour
The passport application website uses an automated check to detect poor-quality photos that do not meet Home Office rules. These include having a neutral expression, a closed mouth, and looking straight at the camera.
BBC research found this check to be less accurate on darker-skinned people.
More than 1,000 photographs of politicians from across the world were fed into the online checker.
The results indicated:
- Dark-skinned women are told their photos are poor quality 22% of the time, while the figure for light-skinned women is 14%
- Dark-skinned men are told their photos are poor quality 15% of the time, while the figure for light-skinned men is 9%
Photos of women with the darkest skin were four times more likely to be graded poor quality than those of women with the lightest skin….(More)”.
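As a rough illustration of the kind of aggregation behind figures like these (not the BBC's actual code or data), per-group failure rates can be computed from labelled checker results:

```python
# Illustrative aggregation of automated photo-checker outcomes by group.
# The records below are made-up placeholders, not the BBC's data.
from collections import defaultdict

results = [  # (skin_tone, gender, flagged_as_poor_quality)
    ("dark", "female", True), ("dark", "female", False),
    ("light", "female", False), ("light", "female", False),
    ("dark", "male", True), ("dark", "male", False),
    ("light", "male", False), ("light", "male", False),
]

totals, flagged = defaultdict(int), defaultdict(int)
for skin_tone, gender, poor_quality in results:
    totals[(skin_tone, gender)] += 1
    flagged[(skin_tone, gender)] += poor_quality  # bools count as 0 or 1

for (skin_tone, gender), total in sorted(totals.items()):
    rate = flagged[(skin_tone, gender)] / total
    print(f"{skin_tone}-skinned {gender}: {rate:.0%} flagged as poor quality")
```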
Lessons learned from AI ethics principles for future actions
Paper by Merve Hickok: “As the use of artificial intelligence (AI) systems became significantly more prevalent in recent years, concerns about how these systems collect, use, and process big data also increased. To address these concerns and advocate for the ethical and responsible development and implementation of AI, non-governmental organizations (NGOs), research centers, private companies, and governmental agencies published more than 100 AI ethics principles and guidelines. This first wave was followed by a series of suggested frameworks, tools, and checklists that attempt a technical fix to issues raised in the high-level principles. Principles are important for creating a common understanding of priorities and are the groundwork for future governance and opportunities for innovation. However, a review of these documents based on their country of origin and funding entities shows that private companies from the US-West axis dominate the conversation. Several cases have surfaced in the meantime that demonstrate biased algorithms and their impact on individuals and society. The field of AI ethics is urgently calling for tangible action to move from high-level abstractions and conceptual arguments toward applying ethics in practice and creating accountability mechanisms. However, lessons must be learned from the shortcomings of AI ethics principles to ensure that future investments, collaborations, standards, codes, or legislation reflect the diversity of voices and incorporate the experiences of those who are already impacted by biased algorithms….(More)”.