UK passport photo checker shows bias against dark-skinned women


Maryam Ahmed at BBC News: “Women with darker skin are more than twice as likely as lighter-skinned men to be told their photos fail UK passport rules when they submit them online, according to a BBC investigation.

One black student said she was wrongly told her mouth looked open for each of the five different photos she uploaded to the government website.

This shows how “systemic racism” can spread, Elaine Owusu said.

The Home Office said the tool helped users get their passports more quickly.

“The indicative check [helps] our customers to submit a photo that is right the first time,” said a spokeswoman.

“Over nine million people have used this service and our systems are improving.

“We will continue to develop and evaluate our systems with the objective of making applying for a passport as simple as possible for all.”

Skin colour

The passport application website uses an automated check to detect poor quality photos which do not meet Home Office rules. These include having a neutral expression, a closed mouth and looking straight at the camera.

BBC research found this check to be less accurate on darker-skinned people.

More than 1,000 photographs of politicians from across the world were fed into the online checker.

The results indicated:

  • Dark-skinned women are told their photos are poor quality 22% of the time, while the figure for light-skinned women is 14%
  • Dark-skinned men are told their photos are poor quality 15% of the time, while the figure for light-skinned men is 9%

Photos of women with the darkest skin were four times more likely to be graded poor quality than those of women with the lightest skin….(More)”.
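
As a rough illustration only (this is not the BBC's code or data), the short Python sketch below shows how per-group rejection rates and a disparity ratio like those quoted above could be computed from a list of photo-check outcomes; the sample inputs are hypothetical and chosen merely to mirror the reported percentages:

```python
# Illustrative sketch (not the BBC's methodology or data): given photo-check results
# labelled by group, compute each group's rejection rate and the disparity ratio.
from collections import defaultdict

def rejection_rates(results):
    """results: iterable of (group, rejected) tuples, e.g. ("dark-skinned women", True)."""
    totals = defaultdict(int)
    rejections = defaultdict(int)
    for group, rejected in results:
        totals[group] += 1
        if rejected:
            rejections[group] += 1
    return {g: rejections[g] / totals[g] for g in totals}

# Hypothetical inputs, chosen only to mirror the reported percentages.
sample = (
    [("dark-skinned women", True)] * 22 + [("dark-skinned women", False)] * 78 +
    [("light-skinned women", True)] * 14 + [("light-skinned women", False)] * 86
)
rates = rejection_rates(sample)
print(rates)  # {'dark-skinned women': 0.22, 'light-skinned women': 0.14}
print(rates["dark-skinned women"] / rates["light-skinned women"])  # ~1.57x disparity
```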

The secret to building a smart city that’s antiracist


Article by Eliza McCullough: “….Instead of a smart city model that extracts from, surveils, and displaces poor people of color, we need a democratic model that allows community members to decide how technological infrastructure operates and to ensure the equitable distribution of benefits. Doing so will allow us to create cities defined by inclusion, shared ownership, and shared prosperity.

In 2016, Barcelona, for example, launched its Digital City Plan, which aims to empower residents with control of technology used in their communities. The document incorporates over 8,000 proposals from residents and includes plans for open source software, government ownership of all ICT infrastructure, and a pilot platform to help citizens maintain control over their personal data. As a result, the city now has free applications that allow residents to easily propose city development ideas, actively participate in city council meetings, and choose how their data is shared.

In the U.S., we need a framework for tech sovereignty that incorporates a racial equity approach: In a racist society, race neutrality facilitates continued exclusion and exploitation of people of color. Digital Justice Lab in Toronto illustrates one critical element of this kind of approach: access to information. In 2018, the organization gave community groups a series of grants to hold public events that shared resources and information about digital rights. Their collaborative approach intentionally focuses on the specific needs of people of color and other marginalized groups.

The turn toward intensified surveillance infrastructure in the midst of the coronavirus outbreak makes the need to adopt such practices all the more crucial. Democratic tech models that uplift marginalized populations provide us the chance to build a city that is just and open to everyone….(More)”.

Indigenous Data Sovereignty and Policy


Book edited by Maggie Walter, Tahu Kukutai, Stephanie Russo Carroll and Desi Rodriguez-Lonebear: “This book examines how Indigenous Peoples around the world are demanding greater data sovereignty, and challenging the ways in which governments have historically used Indigenous data to develop policies and programs.

In the digital age, governments are increasingly dependent on data and data analytics to inform their policies and decision-making. However, Indigenous Peoples have often been the unwilling targets of policy interventions and have had little say over the collection, use and application of data about them, their lands and cultures. At the heart of Indigenous Peoples’ demands for change are the enduring aspirations of self-determination over their institutions, resources, knowledge and information systems.

With contributors from Australia, Aotearoa New Zealand, North and South America and Europe, this book offers a rich account of the potential for Indigenous data sovereignty to support human flourishing and to protect against the ever-growing threats of data-related risks and harms….(More)”.

AI technologies — like police facial recognition — discriminate against people of colour


Jane Bailey et al. at The Conversation: “…In his game-changing 1993 book, The Panoptic Sort, scholar Oscar Gandy warned that “complex technology [that] involves the collection, processing and sharing of information about individuals and groups that is generated through their daily lives … is used to coordinate and control their access to the goods and services that define life in the modern capitalist economy.” Law enforcement uses it to pluck suspects from the general public, and private organizations use it to determine whether we have access to things like banking and employment.

Gandy prophetically warned that, if left unchecked, this form of “cybernetic triage” would exponentially disadvantage members of equality-seeking communities — for example, groups that are racialized or socio-economically disadvantaged — both in terms of what would be allocated to them and how they might come to understand themselves.

Some 25 years later, we’re now living with the panoptic sort on steroids. And examples of its negative effects on equality-seeking communities abound, such as the false identification of Williams.

Pre-existing bias

This algorithmic sorting infiltrates the most fundamental aspects of everyday life, occasioning both direct and structural violence in its wake.

The direct violence experienced by Williams is immediately evident in the events surrounding his arrest and detention, and the individual harms he experienced are obvious and can be traced to the actions of police who chose to rely on the technology’s “match” to make an arrest. More insidious is the structural violence perpetrated through facial recognition technology and other digital technologies that rate, match, categorize and sort individuals in ways that magnify pre-existing discriminatory patterns.

Structural violence harms are less obvious and less direct; they injure equality-seeking groups through the systematic denial of power, resources and opportunity, while simultaneously increasing direct risk and harm to individual members of those groups.

Predictive policing uses algorithmic processing of historical data to predict when and where new crimes are likely to occur, assigns police resources accordingly and embeds enhanced police surveillance into communities, usually in lower-income and racialized neighbourhoods. This increases the chances that any criminal activity — including less serious criminal activity that might otherwise prompt no police response — will be detected and punished, ultimately limiting the life chances of the people who live within that environment….(More)”.
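
To make the feedback loop described above concrete, here is a minimal, hypothetical simulation (all numbers are invented for illustration, not drawn from any real system): two areas have identical underlying crime, but the area that starts with more recorded incidents receives more patrols, which detect a larger share of incidents, so its recorded count pulls further ahead each year:

```python
# Hypothetical simulation of the predictive-policing feedback loop: patrols are allocated
# by past *recorded* incidents, and more patrols detect a larger share of incidents,
# so early over-policing compounds over time even when underlying crime is identical.
import random

random.seed(0)
TRUE_CRIME_RATE = {"A": 10, "B": 10}   # identical underlying crime in both areas
recorded = {"A": 12, "B": 8}           # area A starts with more recorded incidents

for year in range(5):
    total = sum(recorded.values())
    patrols = {a: recorded[a] / total for a in recorded}   # allocate by past records
    for area in recorded:
        detection_rate = 0.4 + 0.5 * patrols[area]         # more patrols, more detections
        new_records = sum(random.random() < detection_rate
                          for _ in range(TRUE_CRIME_RATE[area]))
        recorded[area] += new_records
    print(year, recorded)  # the gap between A and B tends to widen despite equal crime
```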

An Introduction to Ethics in Robotics and AI


Open access book by Christoph Bartneck, Christoph Lütge, Alan Wagner and Sean Welsh: “This book provides an introduction into the ethics of robots and artificial intelligence. The book was written with university students, policy makers, and professionals in mind but should be accessible for most adults. The book is meant to provide balanced and, at times, conflicting viewpoints as to the benefits and deficits of AI through the lens of ethics. As discussed in the chapters that follow, ethical questions are often not cut and dry. Nations, communities, and individuals may have unique and important perspectives on these topics that should be heard and considered. While the voices that compose this book are our own, we have attempted to represent the views of the broader AI, robotics, and ethics communities….(More)”.

The Rise of the Data Poor: The COVID-19 Pandemic Seen From the Margins


Essay by Stefania Milan and Emiliano Treré: “Quantification is central to the narration of the COVID-19 pandemic. Numbers determine the existence of the problem and affect our ability to care and contribute to relief efforts. Yet many communities at the margins, including many areas of the Global South, are virtually absent from this number-based narration of the pandemic. This essay builds on critical data studies to warn against the universalization of problems, narratives, and responses to the virus. To this end, it explores two types of data gaps and the corresponding “data poor.” The first gap concerns the data poverty perduring in low-income countries and jeopardizing their ability to adequately respond to the pandemic. The second affects vulnerable populations within a variety of geopolitical and socio-political contexts, whereby data poverty constitutes a dangerous form of invisibility which perpetuates various forms of inequality. But, even during the pandemic, the disempowered manage to create innovative forms of solidarity from below that partially mitigate the negative effects of their invisibility….(More)”.

Data Justice and COVID-19: Global Perspectives


Book edited by Linnet Taylor, Aaron Martin, Gargi Sharma and Shazade Jameson: “In early 2020, as the COVID-19 pandemic swept the world and states of emergency were declared by one country after another, the global technology sector—already equipped with unprecedented wealth, power, and influence—mobilised to seize the opportunity. This collection is an account of what happened next and captures the emergent conflicts and responses around the world. The essays provide a global perspective on the implications of these developments for justice: they make it possible to compare how the intersection of state and corporate power—and the way that power is targeted and exercised—confronts, and invites resistance from, civil society in countries worldwide.

This edited volume captures the technological response to the pandemic in 33 countries, accompanied by nine thematic reflections, and reflects the unfolding of the first wave of the pandemic.

This book can be read as a guide to the landscape of technologies deployed during the pandemic and also be used to compare individual country strategies. It will prove useful as a tool for teaching and learning in various disciplines and as a reference point for activists and analysts interested in issues of data justice.

The essays interrogate these technologies and the political, legal, and regulatory structures that determine how they are applied. In doing so, the book exposes the workings of state technological power to critical assessment and contestation….(More)”

When Mini-Publics and Maxi-Publics Coincide: Ireland’s National Debate on Abortion


Paper by David Farrell et al: “Ireland’s Citizens’ Assembly (CA) of 2016–18 was tasked with making recommendations on abortion. This paper shows that from the outset its members were in large part in favour of the liberalisation of abortion (though a fair proportion were undecided), that over the course of its deliberations the CA as a whole moved in a more liberal direction on the issue, but that its position was largely reflected in the subsequent referendum vote by the population as a whole….(More)”

Can You Fight Systemic Racism With Design?


Reboot’s “Design With” podcast with Antionette Carroll: “What began as a 24-hour design challenge addressing racial inequality in Ferguson, MO has since grown into a powerful organization fighting inequity with its own brand of collaborative design. Antionette Carroll, founder of Creative Reaction Lab, speaks about Equity-Centered Community Design—and how Black and Latinx youth are using design as their tool of choice to dismantle the very systems designed to exclude them….(More)”.

Interventions to mitigate the racially discriminatory impacts of emerging tech including AI


Joint Civil Society Statement: “As widespread recent protests have highlighted, racial inequality remains an urgent and devastating issue around the world, and this is as true in the context of technology as it is everywhere else. In fact, it may be more so, as algorithmic technologies based on big data are deployed at previously unimaginable scale, reproducing the discriminatory systems that build and govern them.

The undersigned organizations welcome the publication of the report “Racial discrimination and emerging digital technologies: a human rights analysis,” by Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, E. Tendayi Achiume, and wish to underscore the importance and timeliness of a number of the recommendations made therein:

  1. Technologies that have had or will have significant racially discriminatory impacts should be banned outright.
    While incremental regulatory approaches may be appropriate in some contexts, where a technology is demonstrably likely to cause racially discriminatory harm, it should not be deployed until that harm can be prevented. Moreover, certain technologies may always have disparate racial impacts, no matter how much their accuracy can be improved. In the present moment, racially discriminatory technologies include facial and affect recognition technology and so-called predictive analytics. We support Special Rapporteur Achiume’s call for mandatory human rights impact assessments as a prerequisite for the adoption of new technologies. We also believe that where such assessments reveal that a technology has a high likelihood of deleterious racially disparate impacts, states should prevent its use through a ban or moratorium. We join the Special Rapporteur in welcoming recent municipal bans, for example, on the use of facial recognition technology, and encourage national governments to adopt similar policies.  Correspondingly, we reiterate our support for states’ imposition of an immediate moratorium on the trade and use of privately developed surveillance tools until such time as states enact appropriate safeguards, and congratulate Special Rapporteur Achiume on joining that call.
  2. Gender mainstreaming and representation along racial, national and other intersecting identities requires radical improvement at all levels of the tech sector.
  3. Technologists cannot solve political, social, and economic problems without the input of domain experts and those personally impacted.
  4. Access to technology is as urgent an issue of racial discrimination as inequity in the design of technologies themselves.
  5. Representative and disaggregated data is a necessary, if not sufficient, condition for racial equity in emerging digital technologies, but it must be collected and managed equitably as well.
  6. States as well as corporations must provide remedies for racial discrimination, including reparations.… (More)”.