Why Are We Failing at AI Ethics?


Article by Anja Kaspersen and Wendell Wallach: “…Extremely troubling is the fact that the people who are most vulnerable to negative impacts from such rapid expansions of AI systems are often the least likely to be able to join the conversation about these systems, either because they have no or restricted digital access or their lack of digital literacy makes them ripe for exploitation.

Such vulnerable groups are often theoretically included in discussions, but not empowered to take a meaningful part in making decisions. This engineered inequity, alongside human biases, risks amplifying otherness through neglect, exclusion, misinformation, and disinformation.

Society should be deeply concerned that nowhere near enough substantive progress is being made to develop and scale actionable legal and ethical oversight while simultaneously addressing existing inequalities.

So, why hasn’t more been done? There are three main issues at play: 

First, many of the existing dialogues around the ethics of AI and governance are too narrow and fail to understand the subtleties and life cycles of AI systems and their impacts.

Often, these efforts focus only on the development and deployment stages of the technology life cycle, when many of the problems occur during the earlier stages of conceptualization, research, and design. Or they fail to comprehend when and if an AI system operates at the level of maturity required to avoid failure in complex adaptive systems.

Or they focus on some aspects of ethics, while ignoring other aspects that are more fundamental and challenging. This is the problem known as “ethics washing” – creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns.

Let’s be clear: every choice entails tradeoffs. “Ethics talk” is often about underscoring the various tradeoffs entailed in differing courses of action. Once a course has been selected, comprehensive ethical oversight is also about addressing the considerations not accommodated by the options selected, which is essential to any future verification effort. This vital part of the process is often a stumbling block for those trying to address the ethics of AI.

The second major issue is that to date all the talk about ethics is simply that: talk. 

We’ve yet to see these discussions translate into meaningful change in managing the ways in which AI systems are being embedded into various aspects of our lives….

A third issue at play is that discussions on AI and ethics are still largely confined to the ivory tower.

There is an urgent need for more informed public discourse and serious investment in civic education around the societal impact of the bio-digital revolution. This could help address the first two problems, but most of what the general public currently perceives about AI comes from sci-fi tropes and blockbuster movies.

A few examples of algorithmic bias have penetrated the public discourse. But the most headline-grabbing research on AI and ethics tends to focus on far-horizon existential risks. More effort needs to be invested in communicating to the public that, beyond the hypothetical risks of future AI, there are real and imminent risks posed by why and how we embed AI systems that currently shape everyone’s daily lives….(More)”.

Can data die?


Article by Jennifer Ding: “…To me, the crux of the Lenna story is how little power we have over our data and how it is used and abused. This threat seems disproportionately higher for women who are often overrepresented in internet content, but underrepresented in internet company leadership and decision making. Given this reality, engineering and product decisions will continue to consciously (and unconsciously) exclude our needs and concerns.

While social norms around non-consensual data collection and data exploitation are changing, digital norms seem to be moving in the opposite direction. Advancements in machine learning algorithms and data storage capabilities are only making data misuse easier. Whether the outcome is revenge porn or targeted ads, surveillance or discriminatory AI, if we want a world where our data can retire when it’s outlived its time, or when it’s directly harming our lives, we must create the tools and policies that empower data subjects to have a say in what happens to their data… including allowing their data to die…(More)”

International Network on Digital Self Determination


About: “Data is changing how we live and engage with and within our societies and our economies. As our digital footprints grow, how do we re-imagine ourselves in the digital world? How will we be able to determine the data-driven decisions that impact us?

Digital self-determination offers a unique way of understanding where we (can) live in the digital space – how we manage our social media environments, our interaction with Artificial Intelligence (AI) and other technologies, how we access and operate our personal data, and the ways in which we can have a say about mass data sharing.

Through this network, we aim to study and design ways to engage in trustworthy data spaces and ensure human-centric approaches. We recognize an urgent need to ensure people’s digital self-determination so that ‘humans in the loop’ is not just a catch-phrase but a lived experience both at the individual and societal level….(More)”.

Nonprofit Websites Are Riddled With Ad Trackers


Article by Alfred Ng and Maddy Varner: “Last year, nearly 200 million people visited the website of Planned Parenthood, a nonprofit that many people turn to for very private matters like sex education, access to contraceptives, and access to abortions. What those visitors may not have known is that as soon as they opened plannedparenthood.org, some two dozen ad trackers embedded in the site alerted a slew of companies whose business is not reproductive freedom but gathering, selling, and using browsing data.

The Markup ran Planned Parenthood’s website through our Blacklight tool and found 28 ad trackers and 40 third-party cookies tracking visitors, in addition to so-called “session recorders” that could be capturing the mouse movements and keystrokes of people visiting the homepage in search of things like information on contraceptives and abortions. The site also contained trackers that tell Facebook and Google if users visited the site.

The Markup’s scan found Planned Parenthood’s site communicating with companies like Oracle, Verizon, LiveRamp, TowerData, and Quantcast—some of which have made a business of assembling and selling access to masses of digital data about people’s habits.

Katie Skibinski, vice president for digital products at Planned Parenthood, said the data collected on its website is “used only for internal purposes by Planned Parenthood and our affiliates,” and the company doesn’t “sell” data to third parties.

“While we aim to use data to learn how we can be most impactful, at Planned Parenthood, data-driven learning is always thoughtfully executed with respect for patient and user privacy,” Skibinski said. “This means using analytics platforms to collect aggregate data to gather insights and identify trends that help us improve our digital programs.”

Skibinski did not dispute that the organization shares data with third parties, including data brokers.

A Blacklight scan of Planned Parenthood Gulf Coast—a localized website specifically for people in the Gulf region, including Texas, where abortion has been essentially outlawed—churned up similar results.

Planned Parenthood is not alone among nonprofits, some of them operating in sensitive areas like mental health and addiction, in gathering and sharing data on website visitors.

Using our Blacklight tool, The Markup scanned more than 23,000 websites of nonprofit organizations, including those belonging to abortion providers and nonprofit addiction treatment centers. The Markup used the IRS’s nonprofit master file to identify nonprofits that have filed a tax return since 2019 and that the agency categorizes as focusing on areas like mental health and crisis intervention, civil rights, and medical research. We then examined each nonprofit’s website as publicly listed in GuideStar. We found that about 86 percent of them had third-party cookies or tracking network requests. By comparison, when The Markup did a survey of the top 80,000 websites in 2020, we found 87 percent used some type of third-party tracking.

About 11 percent of the 23,856 nonprofit websites we scanned had a Facebook pixel embedded, while 18 percent used the Google Analytics “Remarketing Audiences” feature.

The Markup found that 439 of the nonprofit websites loaded scripts called session recorders, which can monitor visitors’ clicks and keystrokes. Eighty-nine of those were for websites that belonged to nonprofits that the IRS categorizes as primarily focusing on mental health and crisis intervention issues…(More)”.
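To make concrete the kind of check a scan like this performs, below is a minimal, illustrative sketch in Python (using the requests and BeautifulSoup libraries). It is not Blacklight's actual implementation, which drives a headless browser and inspects live network traffic, cookies, and keystroke capture; the tracker host list and heuristics here are assumptions for illustration only.

```python
# Minimal, illustrative sketch of a single-page tracker check.
# Blacklight's real scans run a headless browser and inspect network
# requests, cookies, key logging, and fingerprinting; this only parses static HTML.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

# Hypothetical list of known tracking hosts (an assumption, not Blacklight's list).
KNOWN_TRACKER_HOSTS = {
    "doubleclick.net", "google-analytics.com", "connect.facebook.net",
    "quantserve.com", "rlcdn.com",
}

def scan_page(url: str) -> dict:
    """Fetch a page and report third-party script hosts and a likely Facebook pixel."""
    first_party = urlparse(url).hostname or ""
    html = requests.get(url, timeout=15).text
    soup = BeautifulSoup(html, "html.parser")

    third_party_hosts = set()
    for tag in soup.find_all("script", src=True):
        host = urlparse(tag["src"]).hostname
        # Crude heuristic: any script host that does not end in the
        # first-party hostname is treated as third party.
        if host and not host.endswith(first_party):
            third_party_hosts.add(host)

    return {
        "third_party_script_hosts": sorted(third_party_hosts),
        "known_trackers": sorted(h for h in third_party_hosts
                                 if any(h.endswith(t) for t in KNOWN_TRACKER_HOSTS)),
        "facebook_pixel_hint": "fbq(" in html or "connect.facebook.net" in html,
    }

if __name__ == "__main__":
    print(scan_page("https://example.org"))
```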

Data Science for Social Good: Philanthropy and Social Impact in a Complex World


Book edited by Ciro Cattuto and Massimo Lapucci: “This book is a collection of insights by thought leaders at first-mover organizations in the emerging field of “Data Science for Social Good”. It examines the application of knowledge from computer science, complex systems, and computational social science to challenges such as humanitarian response, public health, and sustainable development. The book provides an overview of scientific approaches to social impact – identifying a social need, targeting an intervention, measuring impact – and the complementary perspective of funders and philanthropies pushing forward this new sector.

TABLE OF CONTENTS


Introduction; By Massimo Lapucci

The Value of Data and Data Collaboratives for Good: A Roadmap for Philanthropies to Facilitate Systems Change Through Data; By Stefaan G. Verhulst

UN Global Pulse: A UN Innovation Initiative with a Multiplier Effect; By Dr. Paula Hidalgo-Sanchis

Building the Field of Data for Good; By Claudia Juech

When Philanthropy Meets Data Science: A Framework for Governance to Achieve Data-Driven Decision-Making for Public Good; By Nuria Oliver

Data for Good: Unlocking Privately-Held Data to the Benefit of the Many; By Alberto Alemanno

Building a Funding Data Ecosystem: Grantmaking in the UK; By Rachel Rank

A Reflection on the Role of Data for Health: COVID-19 and Beyond; By Stefan E. Germann and Ursula Jasper….(More)”

Listening to Users and Other Ideas for Building Trust in Digital Trade


Paper by Susan Ariel Aaronson: “This paper argues that if trade policymakers truly want to achieve data free flow with trust, they must address user concerns beyond privacy. Survey data reveals that users are also anxious about online harassment, malware, censorship and disinformation. The paper focuses on three such problems, specifically, internet shutdowns, censorship and ransomware (a form of malware), each of which can distort trade and make users feel less secure online. Finally, the author concludes that trade policymakers will need to rethink how they involve the broad public in digital trade policymaking if they want digital trade agreements to facilitate trust….(More)”.

UN urges moratorium on use of AI that imperils human rights


Jamey Keaten and Matt O’Brien at the Washington Post: “The U.N. human rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.

Michelle Bachelet, the U.N. High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications which don’t comply with international human rights law.

Applications that should be prohibited include government “social scoring” systems that judge people based on their behavior and certain AI-based tools that categorize people into clusters such as by ethnicity or gender.

AI-based technologies can be a force for good but they can also “have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said in a statement.

Her comments came along with a new U.N. report that examines how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.

“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told journalists as she presented the report in Geneva. “It’s about recognizing that if AI is going to be used in these human rights — very critical — function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”

Bachelet didn’t call for an outright ban of facial recognition technology, but said governments should halt the scanning of people’s features in real time until they can show the technology is accurate, won’t discriminate and meets certain privacy and data protection standards….(More)” (Report).

The Secret Bias Hidden in Mortgage-Approval Algorithms


An investigation by The Markup: “…has found that lenders in 2019 were more likely to deny home loans to people of color than to White people with similar financial characteristics—even when we controlled for newly available financial factors that the mortgage industry for years has said would explain racial disparities in lending.

Holding 17 different factors steady in a complex statistical analysis of more than two million conventional mortgage applications for home purchases, we found that lenders were 40 percent more likely to turn down Latino applicants for loans, 50 percent more likely to deny Asian/Pacific Islander applicants, and 70 percent more likely to deny Native American applicants than similar White applicants. Lenders were 80 percent more likely to reject Black applicants than similar White applicants. These are national rates.

In every case, the prospective borrowers of color looked almost exactly the same on paper as the White applicants, except for their race.

The industry had criticized previous similar analyses for not including financial factors they said would explain disparities in lending rates but were not public at the time: debts as a percentage of income, how much of the property’s assessed worth the person is asking to borrow, and the applicant’s credit score.

The first two are now public in the Home Mortgage Disclosure Act data. Including these financial data points in our analysis not only failed to eliminate racial disparities in loan denials, it highlighted new, devastating ones.

We found that lenders gave fewer loans to Black applicants than White applicants even when their incomes were high—$100,000 a year or more—and they had the same debt ratios. In fact, high-earning Black applicants with less debt were rejected more often than high-earning White applicants with more debt….(More)”
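As a rough illustration of the kind of analysis described above, the sketch below fits a logistic regression of loan denial on race indicators plus financial controls using Python's statsmodels, then reads off denial odds ratios relative to White applicants. The file name, column names, and control set are assumptions for illustration; they are not The Markup's actual data or model specification.

```python
# Illustrative sketch: estimating racial disparities in mortgage denials
# while holding financial controls constant. Not The Markup's actual model;
# the input file and column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical HMDA-style extract, one row per application:
# denied = 1 if denied, 0 if originated; race is categorical with "White"
# as the reference group; dti, ltv, income, loan_amount stand in for the
# financial factors the article says were held constant.
df = pd.read_csv("hmda_applications.csv")

model = smf.logit(
    "denied ~ C(race, Treatment(reference='White')) + dti + ltv + income + loan_amount",
    data=df,
).fit()

# Exponentiated race coefficients give denial odds ratios relative to
# otherwise-similar White applicants; a ratio around 1.8 for Black applicants
# would correspond roughly to the "80 percent more likely" figure quoted above.
print(np.exp(model.params).round(2))
```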

Perspectives on Digital Humanism


Open Access Book edited by Hannes Werthner, Erich Prem, Edward A. Lee, and Carlo Ghezzi: “Digital Humanism is young; it has evolved from an unease about the consequences of a digitized world for human beings, into an internationally connected community that aims at developing concepts to provide a positive and constructive response. Following up on several successful workshops and a lecture series that bring together authorities of the various disciplines, this book is our latest contribution to the ongoing international discussions and developments. We have compiled a collection of 46 articles from experts with different disciplinary and institutional backgrounds, who provide their view on the interplay of human and machine.

Please note our open access publishing strategy for this book to enable widespread circulation and accessibility. This means that you can make use of the content freely, as long as you ensure appropriate referencing. At the same time, the book is also published in printed and online versions by Springer….(More)”.

Privacy Tradeoffs: Who Should Make Them, and How?


Paper by Jane R. Bambauer: “Privacy debates are contentious in part because we have not reached a broadly recognized cultural consensus about whether interests in privacy are like most other interests that can be traded off in utilitarian, cost-benefit terms, or if instead privacy is different—fundamental to conceptions of dignity and personal liberty. Thus, at the heart of privacy debates is an unresolved question: is privacy just another interest that can and should be bartered, mined, and used in the economy, or is it different?

This question identifies and isolates a wedge between those who hold essentially utilitarian views of ethics (and who would see many data practices as acceptable) and those who hold views of natural and fundamental rights (for whom common data mining practices are either never acceptable or, at the very least, never acceptable without significant participation and consent of the subject).

This essay provides an intervention of a purely descriptive sort. First, I lay out several candidates for ethical guidelines that might legitimately undergird privacy law and policy. Only one of the ethical models (the natural right to sanctuary) can track the full scope and implications of fundamental rights-based privacy laws like the GDPR.

Second, the project contributes to the field of descriptive ethics by using a vignette experiment to discover which of the various ethical models people actually do seem to hold and abide by. The vignette study uses a factorial design to help isolate the roles of various factors that may contribute to the respondents’ gauge of what an ethical firm should or should not do in the context of personal data use as well as two other non-privacy-related contexts. The results can shed light on whether privacy-related ethics are different and distinct from business ethics more generally. They also illuminate which version(s) of “good” and “bad” share broad support and deserve to be reflected in privacy law or business practice.

The results of the vignette experiment show that on balance, Americans subscribe to some form of utilitarianism, although a substantial minority subscribe to a natural right to sanctuary approach. Thus, consent and prohibitions of data practices are appropriate where the likely risks to some groups (most importantly, data subjects, but also firms and third parties) outweigh the benefits….(More)”
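For readers unfamiliar with factorial vignette experiments, the following is a minimal sketch of how such a design isolates each factor's contribution: cross the factor levels, collect ratings for randomly assigned vignettes, and regress judgments on the factor dummies. The factors, scale, and simulated responses below are invented for illustration and do not reflect Bambauer's actual instrument or data.

```python
# Illustrative sketch of a factorial vignette design and its analysis.
# Factors, levels, and responses are invented; they do not reflect the paper.
from itertools import product

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical vignette factors: crossing every level of every factor
# yields the full set of vignette conditions.
contexts = ["personal data use", "product safety", "pricing"]
consent = ["with consent", "without consent"]
benefit = ["benefits the user", "benefits only the firm"]

vignettes = pd.DataFrame(list(product(contexts, consent, benefit)),
                         columns=["context", "consent", "benefit"])

# Simulated 1-7 ratings of how ethical the firm's conduct is, standing in
# for responses from participants randomly assigned to conditions.
rng = np.random.default_rng(0)
data = vignettes.loc[vignettes.index.repeat(50)].reset_index(drop=True)
data["rating"] = rng.integers(1, 8, size=len(data))

# Main-effects model: each coefficient estimates how one factor level shifts
# ethical judgments while the design keeps the other factors balanced.
print(smf.ols("rating ~ C(context) + C(consent) + C(benefit)", data=data).fit().summary())
```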