Paper by Steven Bird: “How do we roll out language technologies across a world with 7,000 languages? In one story, we scale the successes of NLP further into ‘low-resource’ languages, doing ever more with less. However, this approach does not recognise the fact that – beyond the 500 institutional languages – the remaining languages are oral vernaculars. These speech communities interact with the outside world using a ‘contact language’. I argue that contact languages are the appropriate target for technologies like speech recognition and machine translation, and that the 6,500 oral vernaculars should be approached differently. I share stories from an Indigenous community where local people reshaped an extractive agenda to align with their relational agenda. I describe the emerging paradigm of Relational NLP and explain how it opens the way to non-extractive methods and to solutions that enhance human agency…(More)”
My Voice, Your Voice, Our Voice: Attitudes Towards Collective Governance of a Choral AI Dataset
Paper by Jennifer Ding, Eva Jäger, Victoria Ivanova, and Mercedes Bunz: “Data grows in value when joined and combined; likewise the power of voice grows in ensemble. With 15 UK choirs, we explore opportunities for bottom-up data governance of a jointly created Choral AI Dataset. Guided by a survey of chorister attitudes towards generative AI models trained using their data, we explore opportunities to create empowering governance structures that go beyond opt-in and opt-out. We test the development of novel mechanisms such as a Trusted Data Intermediary (TDI) to enable governance of the dataset amongst the choirs and AI developers. We hope our findings can contribute to growing efforts to advance collective data governance practices and shape a more creative, empowering future for arts communities in the generative AI ecosystem…(More)”.
Revealed: bias found in AI system used to detect UK benefits fraud
Article by Robert Booth: “An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality, the Guardian can reveal.
An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.
The admission was made in documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP). The “statistically significant outcome disparity” emerged in a “fairness analysis” of the automated system for universal credit advances carried out in February this year.
The emergence of the bias comes after the DWP this summer claimed the AI system “does not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers”.
This assurance came in part because the final decision on whether a person gets a welfare payment is still made by a human, and officials believe the continued use of the system – which is attempting to help cut an estimated £8bn a year lost in fraud and error – is “reasonable and proportionate”.
But no fairness analysis has yet been undertaken in respect of potential bias centring on race, sex, sexual orientation and religion, or pregnancy, maternity and gender reassignment status, the disclosures reveal.
Campaigners responded by accusing the government of a “hurt first, fix later” policy and called on ministers to be more open about which groups were likely to be wrongly suspected by the algorithm of trying to cheat the system…(More)”.
Online consent: how much do we need to know?
Paper by Bartlomiej Chomanski & Lode Lauwaert: “When you visit a website and click a button that says, ‘I agree to these terms’—do you really agree? Many scholars who consider this question (Solove 2013; Barocas & Nissenbaum 2014; Hull 2015; Pascalev 2017; Yeung 2017; Becker 2019; Zuboff 2019; Andreotta et al. 2022; Wolmarans and Vorhoeve 2022) would tend to answer ‘no’—or, at the very least, they would deem your agreement normatively deficient. The reasoning behind that conclusion is in large part driven by the claim that when most people click ‘I agree’ when visiting online websites and platforms, they do not really know what they are agreeing to. Their lack of knowledge about the privacy policy and other terms of the online agreements thus makes their consent problematic in morally salient ways.
We argue that this prevailing view is wrong. Uninformed consent to online terms and conditions (what we will call, for short, ‘online consent’) is less ethically problematic than many scholars suppose. Indeed, we argue that uninformed online consent preceded by the legitimate exercise of the right not to know (RNTK, to be explained below) is prima facie valid and does not appear normatively deficient in other ways, despite being uninformed.
The paper proceeds as follows. In Sect. 2, we make more precise the concept of online consent and summarize the case against it, as presented in the literature. In Sect. 3 we explain the arguments for the RNTK in bioethics and show that analogous reasoning leads to endorsing the RNTK in online contexts. In Sect. 4, we demonstrate that the appeal to the RNTK helps defuse the critics’ arguments against online consent. Section 5 concludes: online consent is valid (with caveats, to be explored in what follows)…(More)”
No Escape: The Weaponization of Gender for the Purposes of Digital Transnational Repression
Report by Citizen Lab: “…we examine the rising trend of gender-based digital transnational repression (DTR), which specifically targets women human rights defenders in exile or in the diaspora, using gender-specific digital tactics aimed at silencing and disabling their voices. Our research draws on the lived experiences of 85 women human rights defenders, originating from 24 home countries and residing in 23 host countries, to help us understand how gender and sexuality play a central role in digital transnational repression…(More)”.
Congress should designate an entity to oversee data security, GAO says
Article by Matt Bracken: “Federal agencies may need to rethink how they handle individuals’ personal data to protect their civil rights and civil liberties, a congressional watchdog said in a new report Tuesday.
Without federal guidance governing the protection of the public’s civil rights and liberties, agencies have pursued a patchwork system of policies tied to the collection, sharing and use of data, the Government Accountability Office said.
To address that problem head-on, the GAO is recommending that Congress select “an appropriate federal entity” to produce guidance or regulations regarding data protection that would apply to all agencies, giving that entity “the explicit authority to make needed technical and policy choices or explicitly stating Congress’s own choices.”
That recommendation was formed after the GAO sent a questionnaire to all 24 Chief Financial Officers Act agencies asking for information about their use of emerging technologies and data capabilities and how they’re guaranteeing that personally identifiable information is safeguarded.
The GAO found that 16 of those CFO Act agencies have policies or procedures in place to protect civil rights and civil liberties with regard to data use, while the other eight have not taken steps to do the same.
The most commonly cited issues for agencies in their efforts to protect the civil rights and civil liberties of the public were “complexities in handling protections associated with new and emerging technologies” and “a lack of qualified staff possessing needed skills in civil rights, civil liberties, and emerging technologies.”
“Further, eight of the 24 agencies believed that additional government-wide law or guidance would strengthen consistency in addressing civil rights and civil liberties protections,” the GAO wrote. “One agency noted that such guidance could eliminate the hodge-podge approach to the governance of data and technology.”
All 24 CFO Act agencies have internal offices to “handle the protection of the public’s civil rights as identified in federal laws,” with much of that work centered on the handling of civil rights violations and related complaints. Four agencies — the departments of Defense, Homeland Security, Justice and Education — have offices to specifically manage civil liberty protections across their entire agencies. The other 20 agencies have mostly adopted a “decentralized approach to protecting civil liberties, including when collecting, sharing, and using data,” the GAO noted…(More)”.
The New Artificial Intelligentsia
Essay by Ruha Benjamin: “In the Fall of 2016, I gave a talk at the Institute for Advanced Study in Princeton titled “Are Robots Racist?” Headlines such as “Can Computers Be Racist? The Human-Like Bias of Algorithms,” “Artificial Intelligence’s White Guy Problem,” and “Is an Algorithm Any Less Racist Than a Human?” had captured my attention in the months before. What better venue to discuss the growing concerns about emerging technologies, I thought, than an institution established during the early rise of fascism in Europe, which once housed intellectual giants like J. Robert Oppenheimer and Albert Einstein, and prides itself on “protecting and promoting independent inquiry.”
My initial remarks focused on how emerging technologies reflect and reproduce social inequities, using specific examples of what some termed “algorithmic discrimination” and “machine bias.” A lively discussion ensued. The most memorable exchange was with a mathematician who politely acknowledged the importance of the issues I raised but then assured me that “as AI advances, it will eventually show us how to address these problems.” Struck by his earnest faith in technology as a force for good, I wanted to sputter, “But what about those already being harmed by the deployment of experimental AI in healthcare, education, criminal justice, and more—are they expected to wait for a mythical future where sentient systems act as sage stewards of humanity?”
Fast-forward almost 10 years, and we are living in the imagination of AI evangelists racing to build artificial general intelligence (AGI), even as they warn of its potential to destroy us. This gospel of love and fear insists on “aligning” AI with human values to rein in these digital deities. OpenAI, the company behind ChatGPT, echoed the sentiment of my IAS colleague: “We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems.” They envision a time when, eventually, “our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study, and develop better alignment techniques than we have now. They will work together with humans to ensure that their own successors are more aligned with humans.” For many, this is not reassuring…(More)”.
Advancing Data Equity: An Action-Oriented Framework
WEF Report: “Automated decision-making systems based on algorithms and data are increasingly common today, with profound implications for individuals, communities and society. More than ever before, data equity is a shared responsibility that requires collective action to create data practices and systems that promote fair and just outcomes for all.
This paper, produced by members of the Global Future Council on Data Equity, proposes a data equity definition and framework for inquiry that spurs ongoing dialogue and continuous action towards implementing data equity in organizations. This framework serves as a dynamic tool for stakeholders committed to operationalizing data equity, across various sectors and regions, given the rapidly evolving data and technology landscapes…(More)”.
Geographies of missing data: Spatializing counterdata production against feminicide
Paper by Catherine D’Ignazio et al: “Feminicide is the gender-related killing of cisgender and transgender women and girls. It reflects patriarchal and racialized systems of oppression and reveals how territories and socio-economic landscapes configure everyday gender-related violence. In recent decades, many grassroots data production initiatives have emerged with the aim of monitoring this extreme but invisibilized phenomenon. We bridge scholarship in feminist and information geographies with data feminism to examine the ways in which space, broadly defined, shapes the counterdata production strategies of feminicide data activists. Drawing on a qualitative study of 33 monitoring efforts led by civil society organizations across 15 countries, primarily in Latin America, we provide a conceptual framework for examining the spatial dimensions of data activism. We show how there are striking transnational patterns related to where feminicide goes unrecorded, resulting in geographies of missing data. In response to these omissions, activists deploy multiple spatialized strategies to make these geographies visible, to situate and contextualize each case of feminicide, to reclaim databases as spaces for memory and witnessing, and to build transnational networks of solidarity. In this sense, we argue that data activism about feminicide constitutes a space of resistance and resignification of everyday forms of gender-related violence…(More)”.
Generative Discrimination: What Happens When Generative AI Exhibits Bias, and What Can Be Done About It
Paper by Philipp Hacker, Frederik Zuiderveen Borgesius, Brent Mittelstadt and Sandra Wachter: “Generative AI (genAI) technologies, while beneficial, risk increasing discrimination by producing demeaning content and subtle biases through inadequate representation of protected groups. This chapter examines these issues, categorizing problematic outputs into three legal categories: discriminatory content; harassment; and legally hard cases like harmful stereotypes. It argues for holding genAI providers and deployers liable for discriminatory outputs and highlights the inadequacy of traditional legal frameworks to address genAI-specific issues. The chapter suggests updating EU laws to mitigate biases in training and input data, mandating testing and auditing, and evolving legislation to enforce standards for bias mitigation and inclusivity as technology advances…(More)”.