‘There is no standard’: investigation finds AI algorithms objectify women’s bodies


Article by Hilke Schellmann: “Images posted on social media are analyzed by artificial intelligence (AI) algorithms that decide what to amplify and what to suppress. Many of these algorithms, a Guardian investigation has found, have a gender bias, and may have been censoring and suppressing the reach of countless photos featuring women’s bodies.

These AI tools, developed by large technology companies, including Google and Microsoft, are meant to protect users by identifying violent or pornographic visuals so that social media companies can block them before anyone sees them. The companies claim that their AI tools can also detect “raciness”, or how sexually suggestive an image is. With this classification, platforms – including Instagram and LinkedIn – may suppress contentious imagery.

Two Guardian journalists used the AI tools to analyze hundreds of photos of men and women in underwear, working out, or undergoing medical tests involving partial nudity, and found evidence that the tools tag photos of women in everyday situations as sexually suggestive and rate pictures of women as more “racy” or sexually suggestive than comparable pictures of men. As a result, the social media companies that leverage these or similar algorithms have suppressed the reach of countless images featuring women’s bodies and hurt female-led businesses – further amplifying societal disparities.

Even medical pictures are affected by the issue. The AI algorithms were tested on images released by the US National Cancer Institute demonstrating how to do a clinical breast examination. Google’s AI gave one such image the highest score for raciness, Microsoft’s AI was 82% confident that the image was “explicitly sexual in nature”, and Amazon classified it as representing “explicit nudity”…(More)”.
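The raciness and moderation scores cited above come from commercial image-analysis APIs. As a minimal illustrative sketch – not a reproduction of the Guardian’s actual methodology – the snippet below queries Google Cloud Vision’s SafeSearch annotator for the “racy” likelihood of a local image; it assumes the google-cloud-vision Python client library is installed and application credentials are configured.

```python
# Minimal sketch: query Google Cloud Vision's SafeSearch annotator for the
# "racy" likelihood of a local image. Illustrative only – not the Guardian's code.
# Assumes `pip install google-cloud-vision` and configured application credentials.
from google.cloud import vision


def racy_likelihood(path: str) -> str:
    """Return the SafeSearch 'racy' likelihood label (VERY_UNLIKELY ... VERY_LIKELY)."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    return vision.Likelihood(annotation.racy).name


if __name__ == "__main__":
    # "photo.jpg" is a placeholder path for whatever image is being tested.
    print(racy_likelihood("photo.jpg"))
```

Running the same query over otherwise comparable photos of men and women and comparing the returned likelihoods is the kind of side-by-side test the investigation describes; Microsoft’s and Amazon’s services expose analogous content-moderation scores.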

Privacy


Book edited by Carissa Veliz and Steven M. Cahn: “Companies collect and share much of your daily life, from your location and search history, to your likes, habits, and relationships. As more and more of our personal data is collected, analyzed, and distributed, we need to think carefully about what we might be losing when we give up our privacy.

Privacy is a thought-provoking collection of philosophical essays on privacy, offering deep insights into the nature of privacy, its value, and the consequences of its loss. Bringing together both classic and contemporary work, this timely volume explores the theories, issues, debates, and applications of the philosophical study of privacy. The essays address concealment and exposure, the liberal value of privacy, privacy in social media, privacy rights and public information, privacy and the limits of law, and more…(More)”.

AI governance and human rights: Resetting the relationship


Paper by Kate Jones: “Governments and companies are already deploying AI to assist in making decisions that can have major consequences for the lives of individual citizens and societies. AI offers far-reaching benefits for human development but also presents risks. These include, among others, further division between the privileged and the unprivileged; erosion of individual freedoms through surveillance; and the replacement of independent thought and judgement with automated control.

Human rights are central to what it means to be human. They were drafted and agreed, with worldwide popular support, to define freedoms and entitlements that would allow every human being to live a life of liberty and dignity. AI, its systems and its processes have the potential to alter the human experience fundamentally. But many sets of AI governance principles produced by companies, governments, civil society and international organizations do not mention human rights at all. This is an error that requires urgent correction.

This research paper aims to dispel myths about human rights; outline the principal importance of human rights for AI governance; and recommend actions that governments, organizations, companies and individuals can take to ensure that human rights are the foundation for AI governance in future…(More)”.

The Signal App and the Danger of Privacy at All Costs


Article by Reid Blackman: “…One should always worry when a person or an organization places one value above all. The moral fabric of our world is complex. It’s nuanced. Sensitivity to moral nuance is difficult, but unwavering support of one principle to rule them all is morally dangerous.

The way Signal wields the word “surveillance” reflects its coarse-grained understanding of morality. To the company, surveillance covers everything from a server holding encrypted data that no one looks at to a law enforcement agent reading data after obtaining a warrant to East Germany randomly tapping citizens’ phones. One cannot think carefully about the value of privacy — including its relative importance to other values in particular contexts — with such a broad brush.

What’s more, the company’s proposition that if anyone has access to data, then many unauthorized people probably will have access to that data is false. This response reflects a lack of faith in good governance, which is essential to any well-functioning organization or community seeking to keep its members and society at large safe from bad actors. There are some people who have access to the nuclear launch codes, but “Mission Impossible” movies aside, we’re not particularly worried about a slippery slope leading to lots of unauthorized people having access to those codes.

I am drawing attention to Signal, but there’s a bigger issue here: Small groups of technologists are developing and deploying applications of their technologies for explicitly ideological reasons, with those ideologies baked into the technologies. To use those technologies is to use a tool that comes with an ethical or political bent.

Signal is pushing against businesses like Meta that turn users of their social media platforms into the product by selling user data. But Signal embeds within itself a rather extreme conception of privacy, and scaling its technology is scaling its ideology. Signal’s users may not be the product, but they are the witting or unwitting advocates of the moral views of the 40 or so people who operate Signal.

There’s something somewhat sneaky in all this (though I don’t think the owners of Signal intend to be sneaky). Usually advocates know that they’re advocates. They engage in some level of deliberation and reach the conclusion that a set of beliefs is for them…(More)”.

The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective


Paper by Dorine Eva van Norren: “This paper aims to demonstrate the relevance of worldviews of the global south to debates on artificial intelligence (AI), enhancing the human rights debate on AI and critically reviewing the paper of the UNESCO Commission on the Ethics of Scientific Knowledge and Technology (COMEST) that preceded the drafting of the UNESCO guidelines on AI. Different value systems may lead to different choices in the programming and application of AI. Programming languages may exacerbate existing biases, as a people’s worldview is captured in its language. What are the implications for AI when seen from a collective ontology? Ubuntu (I am a person through other persons) starts from collective morals rather than individual ethics…

Metaphysically, Ubuntu and its conception of social personhood (attained during one’s life) largely reject transhumanism. When confronted with economic choices, Ubuntu favors sharing above competition and thus an anticapitalist logic of equitable distribution of AI benefits, humaneness and nonexploitation. When confronted with issues of privacy, Ubuntu emphasizes transparency to group members rather than individual privacy, yet it calls for stronger (group) privacy protection. In democratic terms, it promotes consensus decision-making over representative democracy. Certain applications of AI may be more controversial in Africa than in other parts of the world, such as care for the elderly, who deserve the utmost respect and attention and whose care builds moral personhood. At the same time, AI may be helpful, as care from the home and community is encouraged from an Ubuntu perspective. The report on AI and ethics of UNESCO’s COMEST formulated principles as input, which are analyzed here from an African ontological point of view. COMEST starts from “universal” concepts of individual human rights, sustainability and good governance, which are not necessarily fully compatible with relatedness, including relatedness to future and past generations. In addition to rules-based approaches, which may hamper diversity, bottom-up approaches are needed, with intercultural deep learning algorithms…(More)”.

Filling Public Data Gaps


Report by Judah Axelrod, Karolina Ramos, and Rebecca Bullied: “Data are central to understanding the lived experiences of different people and communities and can serve as a powerful force for promoting racial equity. Although public data, including foundational sources for policymaking such as the US Census Bureau’s American Community Survey (ACS), offer accessible information on a range of topics, challenges of timeliness, granularity, representativeness, and degrees of disaggregation can limit those data’s utility for real-time analysis. Private data—data produced by private-sector organizations either in the course of standard business operations or to be marketed as an asset for purchase—can serve as a richer, more granular, and higher-frequency supplement or alternative to public data sources. This raises questions about how well private data assets can offer race-disaggregated insights that can inform policymaking.

In this report, we explore the current landscape of public-private data sharing partnerships that address topic areas where racial equity research faces data gaps: wealth and assets, financial well-being and income, and employment and job quality. We held 20 semistructured interviews with current producers and users of private-sector data and subject matter experts in the areas of data-sharing models and ethical data usage. Our findings are divided into five key themes:

  • Incentives and disincentives, benefits, and risks to public-private data sharing
    Agreements with prestigious public partners can bolster credibility for private firms and broaden their customer base, while public partners benefit from access to real-time, granular, rich data sources. But data sharing is often time- and labor-intensive, and firms can be concerned about conflicting business interests or diluting the value of proprietary data assets.
  • Availability of race-disaggregated data sources
    We found no examples in our interviews of externally available race-disaggregated data sources related to our thematic focus areas. However, there are promising methods for data imputation, linkage, and augmentation through internal surveys (one such imputation approach is sketched after this summary).
  • Data collaboratives in practice
    Most public-private data sharing agreements we learned about are between two parties and entail free or “freemium” access. However, we found promising examples of multilateral agreements that diversify the data-sharing landscape.
  • From data champions to data stewards
    We found many examples of informal data champions who bear responsibility for relationship-building and securing data partnerships. This role has yet to mature into an institutionalized data steward position within the private firms we interviewed, which can make data sharing a fickle process.
  • Considerations for ethical data usage
    Data privacy and transparency about how data are accessed and used are prominent concerns among prospective data users. Interviewees also stressed that existing quantitative data should not be privileged above qualitative insights in cases where communities have offered long-standing feedback and narratives about their own experiences of racial inequity, and that policymakers should not use the need to collect more data as an excuse for delaying policy action.

Our research yielded several recommendations for data producers and users that engage in data sharing, and for funders seeking to advance data-sharing efforts and promote racial equity…(More)”
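The report stays at the level of findings, but to make the imputation methods mentioned under “Availability of race-disaggregated data sources” concrete, below is a deliberately simplified, hypothetical sketch of one widely used approach, Bayesian Improved Surname Geocoding (BISG), which combines surname-based and geography-based race/ethnicity probabilities via Bayes’ rule. BISG is named here only as an illustration, not as the report’s recommendation, and the probability tables are invented; real applications draw on Census surname lists and tract-level population counts.

```python
# Hypothetical, simplified BISG-style imputation: combine P(group | surname) and
# P(group | census tract) via Bayes' rule. All probability tables are invented
# for illustration only.
GROUPS = ["white", "black", "hispanic", "asian", "other"]

# P(group | surname) – invented example values
P_GROUP_GIVEN_SURNAME = {
    "GARCIA": [0.05, 0.01, 0.90, 0.01, 0.03],
    "SMITH":  [0.70, 0.23, 0.02, 0.01, 0.04],
}

# P(group | census tract) – invented example values
P_GROUP_GIVEN_TRACT = {
    "tract_A": [0.60, 0.10, 0.20, 0.05, 0.05],
    "tract_B": [0.20, 0.50, 0.20, 0.05, 0.05],
}

# Overall population shares, divided out so the base rate is not double-counted
P_GROUP = [0.58, 0.13, 0.19, 0.06, 0.04]


def bisg_posterior(surname: str, tract: str) -> dict:
    """P(group | surname, tract) is proportional to
    P(group | surname) * P(group | tract) / P(group),
    assuming surname and geography are conditionally independent given the group."""
    unnormalized = [
        s * t / base
        for s, t, base in zip(
            P_GROUP_GIVEN_SURNAME[surname], P_GROUP_GIVEN_TRACT[tract], P_GROUP
        )
    ]
    total = sum(unnormalized)
    return {g: round(p / total, 3) for g, p in zip(GROUPS, unnormalized)}


if __name__ == "__main__":
    print(bisg_posterior("GARCIA", "tract_B"))
```

The combination step’s conditional-independence assumption is itself a modeling choice, which is one reason imputed estimates should supplement rather than replace the qualitative insights the report highlights.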

Operationalizing Digital Self Determination


Paper by Stefaan G. Verhulst: “We live in an era of datafication, one in which life is increasingly quantified and transformed into intelligence for private or public benefit. When used responsibly, this offers new opportunities for public good. However, three key forms of asymmetry currently limit this potential, especially for already vulnerable and marginalized groups: data asymmetries, information asymmetries, and agency asymmetries. These asymmetries limit human potential, both in a practical and a psychological sense, leading to feelings of disempowerment and eroding public trust in technology. Existing methods of limiting asymmetries (e.g., consent), as well as some alternatives under consideration (data ownership, collective ownership, personal information management systems), fall short of adequately addressing the challenges at hand. A new principle and practice of digital self-determination (DSD) is therefore required.
DSD is based on existing concepts of self-determination, as articulated in sources as varied as Kantian philosophy and the 1966 International Covenant on Economic, Social and Cultural Rights. Updated for the digital age, DSD has several key characteristics, including the fact that it has both an individual and a collective dimension; is designed especially to benefit vulnerable and marginalized groups; and is context-specific (yet also enforceable). Operationalizing DSD in this and other contexts so as to maximize the potential of data while limiting its harms requires a number of steps. In particular, a responsible operationalization of DSD would consider four key prongs or categories of action: processes, people and organizations, policies, and products and technologies…(More)”.

Digital rights and principles: a digital transformation for EU citizens


Press Release: “The Commission welcomes the agreement reached yesterday with the Parliament and the Council on the European declaration on digital rights and principles. The declaration, proposed in January, establishes a clear reference point for the kind of human-centred digital transformation that the EU promotes and defends, at home and abroad.

[Graphic: “Your Digital Principles” – at the heart of Europe’s digital transformation]

It builds on key EU values and freedoms and will benefit all individuals and businesses. The declaration will also provide a guide for policymakers and companies when dealing with new technologies. The declaration focuses on six key areas: putting people at the centre of the digital transformation; solidarity and inclusion; freedom of choice; participation in digital life; safety and security; and sustainability…(More)” See also: European Digital Rights and Principles

Vulnerable People and Data Protection Law


Book by Gianclaudio Malgieri: “Human vulnerability has traditionally been viewed through the lens of specific groups of people, such as ethnic minorities, children, the elderly, or people with disabilities. With the rise of digital media, our perceptions of vulnerable groups and individuals have been reshaped as new vulnerabilities and different vulnerable sub-groups of users, consumers, citizens, and data subjects emerge.

Vulnerable People and Data Protection Law not only depicts these problems but offers the reader a detailed investigation of the concept of data subjects and a reconceptualisation of the notion of vulnerability within the General Data Protection Regulation. The regulation offers a forward-facing set of tools that – though largely underexplored – are essential in rebalancing power asymmetries and mitigating induced vulnerabilities in the age of artificial intelligence.

This book proposes a layered approach to data subject definition. Considering the new potentialities of the digital market, the new awareness about cognitive weaknesses, and the new philosophical sensitivity about vulnerability conditions, the author looks for a more general definition of vulnerability that goes beyond traditional labels. In doing so, he seeks to promote a ‘vulnerability-aware’ interpretation of the GDPR.

A heuristic analysis that re-interprets the whole GDPR, this work is a must-read both for scholars of data protection law and for policymakers looking to strengthen regulations and protect the data of vulnerable individuals…(More)”.

Digitization, Surveillance, Colonialism


Essay by Carissa Veliz: “As I write these words, articles are mushrooming in newspapers and magazines about how privacy is more important than ever after the Supreme Court ruling that overturned the constitutional right to abortion in the United States. In anti-abortion states, browsing histories, text messages, location data, payment data, and information from period-tracking apps can all be used to prosecute both women seeking an abortion and anyone aiding them. The National Right to Life Committee recently published policy recommendations for anti-abortion states that include criminal penalties for people who provide information about self-managed abortions, whether over the phone or online. Women considering an abortion are often in distress, and now they cannot even reach out to friends or family without endangering themselves and others.

So far, Texas, Oklahoma, and Idaho have passed citizen-enforced abortion bans, according to which anyone can file a civil lawsuit to report an abortion and have the chance of winning at least ten thousand dollars. This is an incredible incentive to use personal data towards for-profit witch-hunting. Anyone can buy personal data from data brokers and fish for suspicious behavior. The surveillance machinery that we have built in the past two decades can now be put to use by authorities and vigilantes to criminalize pregnant women and their doctors, nurses, pharmacists, friends, and family. How productive.

It is not true, however, that the overturning of Roe v. Wade has made privacy more important than ever. Rather, it has provided yet another illustration of why privacy has always been and always will be important. That it is happening in the United States is helpful, because human beings are prone to thinking that whatever happens “over there” — say, in China now, or in East Germany during the Cold War — to those “other people,” doesn’t happen to us — until it does.

Privacy is important because it protects us from possible abuses of power. As long as human beings are human beings and organizations are organizations, abuses of power will be a constant temptation and threat. That is why it is supremely reckless to build a surveillance architecture. You never know when that data might be used against you — but you can be fairly confident that sooner or later it will be used against you. Collecting personal data might be convenient, but it is also a ticking bomb; it amounts to sensitive material waiting for the chance to turn into an instance of public shaming, extortion, persecution, discrimination, or identity theft. Do you think you have nothing to hide? So did many American women on June 24, only to realize that week that their period was late. You have plenty to hide — you just don’t know what it is yet and whom you should hide it from.

In the digital age, the challenge of protecting privacy is more formidable than most people imagine — but it is nowhere near impossible, and every bit worth putting up a fight for, if you care about democracy or freedom. The challenge is this: the dogma of our time is to turn analog into digital, and as things stand today, digitization is tantamount to surveillance…(More)”.