Police surveillance and facial recognition: Why data privacy is an imperative for communities of color


Paper by Nicol Turner Lee and Caitlin Chin: “Governments and private companies have a long history of collecting data from civilians, often justifying the resulting loss of privacy in the name of national security, economic stability, or other societal benefits. But it is important to note that these trade-offs do not affect all individuals equally. In fact, surveillance and data collection have disproportionately affected communities of color under both past and current circumstances and political regimes.

From the historical surveillance of civil rights leaders by the Federal Bureau of Investigation (FBI) to the current misuse of facial recognition technologies, surveillance patterns often reflect existing societal biases and build upon harmful and vicious cycles. Facial recognition and other surveillance technologies also enable more precise discrimination, especially as law enforcement agencies continue to make misinformed predictive decisions around arrest and detainment that disproportionately impact marginalized populations.

In this paper, we present the case for stronger federal privacy protections with proscriptive guardrails for the public and private sectors to mitigate the high risks that are associated with the development and procurement of surveillance technologies. We also discuss the role of federal agencies in addressing the purposes and uses of facial recognition and other monitoring tools under their jurisdiction, as well as increased training for state and local law enforcement agencies to prevent the unfair or inaccurate profiling of people of color. We conclude the paper with a series of proposals that lean either toward clear restrictions on the use of surveillance technologies in certain contexts, or greater accountability and oversight mechanisms, including audits, policy interventions, and more inclusive technical designs….(More)”

Co-designing algorithms for governance: Ensuring responsible and accountable algorithmic management of refugee camp supplies


Paper by Rianne Dekker et al: “There is increasing criticism on the use of big data and algorithms in public governance. Studies revealed that algorithms may reinforce existing biases and defy scrutiny by public officials using them and citizens subject to algorithmic decisions and services. In response, scholars have called for more algorithmic transparency and regulation. These are useful, but ex post solutions in which the development of algorithms remains a rather autonomous process. This paper argues that co-design of algorithms with relevant stakeholders from government and society is another means to achieve responsible and accountable algorithms that is largely overlooked in the literature. We present a case study of the development of an algorithmic tool to estimate the populations of refugee camps to manage the delivery of emergency supplies. This case study demonstrates how in different stages of development of the tool—data selection and pre-processing, training of the algorithm and post-processing and adoption—inclusion of knowledge from the field led to changes to the algorithm. Co-design supported responsibility of the algorithm in the selection of big data sources and in preventing reinforcement of biases. It contributed to accountability of the algorithm by making the estimations transparent and explicable to its users. They were able to use the tool for fitting purposes and used their discretion in the interpretation of the results. It is yet unclear whether this eventually led to better servicing of refugee camps…(More)”.

Facial Recognition Goes to War


Kashmir Hill at the New York Times: “In the weeks after Russia invaded Ukraine and images of the devastation wrought there flooded the news, Hoan Ton-That, the chief executive of the facial recognition company Clearview AI, began thinking about how he could get involved.

He believed his company’s technology could offer clarity in complex situations in the war.

“I remember seeing videos of captured Russian soldiers and Russia claiming they were actors,” Mr. Ton-That said. “I thought if Ukrainians could use Clearview, they could get more information to verify their identities.”

In early March, he reached out to people who might help him contact the Ukrainian government. One of Clearview’s advisory board members, Lee Wolosky, a lawyer who has worked for the Biden administration, was meeting with Ukrainian officials and offered to deliver a message.

Mr. Ton-That drafted a letter explaining that his app “can instantly identify someone just from a photo” and that the police and federal agencies in the United States used it to solve crimes. That feature has brought Clearview scrutiny over concerns about privacy and questions about racism and other biases within artificial-intelligence systems.

The tool, which can identify a suspect caught on surveillance video, could be valuable to a country under attack, Mr. Ton-That wrote. He said the tool could identify people who might be spies, as well as deceased people, by comparing their faces against Clearview’s database of 20 billion faces from the public web, including from “Russian social sites such as VKontakte.”
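The “instant identification” described here rests on what is, in general terms, an embedding-and-lookup pattern: a face image is converted into a numeric vector and matched against the nearest vectors in a gallery. The minimal sketch below illustrates only that generic pattern; it is not Clearview’s code, and the function names, similarity threshold, and toy gallery are assumptions made for illustration (a production system would use a trained face-embedding model and an approximate nearest-neighbour index over billions of vectors).

```python
# Hypothetical sketch of embedding-based face matching (illustrative only;
# not Clearview's system). Random vectors stand in for the output of a real
# face-embedding model.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return the best-matching identity above a similarity threshold, if any."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name


rng = np.random.default_rng(0)
# Toy gallery of pre-computed 128-dimensional face embeddings.
gallery = {f"person_{i}": rng.normal(size=128) for i in range(3)}
# A noisy re-capture of person_1's face should still match person_1.
probe = gallery["person_1"] + rng.normal(scale=0.05, size=128)
print(identify(probe, gallery))  # -> "person_1"
```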

Mr. Ton-That decided to offer Clearview’s services to Ukraine for free, as reported earlier by Reuters. Now, less than a month later, the New York-based Clearview has created more than 200 accounts for users at five Ukrainian government agencies, which have conducted more than 5,000 searches. Clearview has also translated its app into Ukrainian.

“It’s been an honor to help Ukraine,” said Mr. Ton-That, who provided emails from officials from three agencies in Ukraine, confirming that they had used the tool. It has identified dead soldiers and prisoners of war, as well as travelers in the country, confirming the names on their official IDs. The fear of spies and saboteurs in the country has led to heightened paranoia.

According to one email, Ukraine’s national police obtained two photos of dead Russian soldiers on March 21; the photos have been viewed by The New York Times. One dead man had identifying patches on his uniform, but the other did not, so the ministry ran his face through Clearview’s app…(More)”.

Google is using AI to better detect searches from people in crisis


Article by James Vincent: “In a personal crisis, many people turn to an impersonal source of support: Google. Every day, the company fields searches on topics like suicide, sexual assault, and domestic abuse. But Google wants to do more to direct people to the information they need, and says new AI techniques that better parse the complexities of language are helping.

Specifically, Google is integrating its latest machine learning model, MUM, into its search engine to “more accurately detect a wider range of personal crisis searches.” The company unveiled MUM at its I/O conference last year, and has since used it to augment search with features that try to answer questions connected to the original search.

In this case, MUM will be able to spot search queries related to difficult personal situations that earlier search tools could not, says Anne Merritt, a Google product manager for health and information quality.

“MUM is able to help us understand longer or more complex queries like ‘why did he attack me when i said i dont love him,’” Merritt told The Verge. “It may be obvious to humans that this query is about domestic violence, but long, natural-language queries like these are difficult for our systems to understand without advanced AI.”

Other examples of queries that MUM can react to include “most common ways suicide is completed” (a search Merritt says earlier systems “may have previously understood as information seeking”) and “Sydney suicide hot spots” (where, again, earlier responses would have likely returned travel information — ignoring the mention of “suicide” in favor of the more popular query for “hot spots”). When Google detects such crisis searches, it responds with an information box telling users “Help is available,” usually accompanied by a phone number or website for a mental health charity like Samaritans.
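As a rough illustration of the routing logic described above, the sketch below checks a query for crisis-related content and, when flagged, attaches a “Help is available” panel. Simple keyword matching stands in for MUM, which Google does not expose publicly, so the sketch fails on exactly the kind of natural-language query Merritt describes; the pattern list and resource names are illustrative assumptions.

```python
# Toy illustration of crisis-query routing (not Google's implementation).
# Keyword matching stands in for a learned language model such as MUM;
# the patterns and resource names below are illustrative assumptions.

CRISIS_PATTERNS = {
    "suicide": "Samaritans",
    "domestic violence": "National Domestic Abuse Helpline",
    "sexual assault": "Rape Crisis helpline",
}


def route_query(query: str) -> dict:
    """Return a minimal search response, adding a help panel when a
    crisis-related pattern appears in the query text."""
    q = query.lower()
    for pattern, resource in CRISIS_PATTERNS.items():
        if pattern in q:
            return {"query": query,
                    "help_panel": {"message": "Help is available",
                                   "resource": resource}}
    return {"query": query, "help_panel": None}


print(route_query("Sydney suicide hot spots"))  # flagged -> help panel attached
print(route_query("why did he attack me when i said i dont love him"))
# Not flagged by keywords -- precisely the gap a model like MUM is meant to close.
```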

In addition to using MUM to respond to personal crises, Google says it’s also using an older AI language model, BERT, to better identify searches looking for explicit content like pornography. By leveraging BERT, Google says it’s “reduced unexpected shocking results by 30%” year-on-year. However, the company was unable to share absolute figures for how many “shocking results” its users come across on average, so while this is a comparative improvement, it gives no indication of how big or small the problem actually is.

Google is keen to tell you that AI is helping the company improve its search products — especially at a time when there’s a building narrative that “Google search is dying.” But integrating this technology comes with its downsides, too.

Many AI experts warn that Google’s increasing use of machine learning language models could surface new problems for the company, like introducing biases and misinformation into search results. AI systems are also opaque, offering engineers restricted insight into how they come to certain conclusions…(More)”.

Is AI Good for the Planet?


Book by Benedetta Brevini: “Artificial intelligence (AI) is presented as a solution to the greatest challenges of our time, from global pandemics and chronic diseases to cybersecurity threats and the climate crisis. But AI also contributes to the climate crisis by running on technology that depletes scarce resources and by relying on data centres that demand excessive energy use.

Is AI Good for the Planet? brings the climate crisis to the centre of debates around AI, exposing its environmental costs and forcing us to reconsider our understanding of the technology. It reveals why we should no longer ignore the environmental problems generated by AI. Embracing a green agenda for AI that puts the climate crisis at centre stage is our urgent priority.

Engaging and passionately written, this book is essential reading for scholars and students of AI, environmental studies, politics, and media studies and for anyone interested in the connections between technology and the environment…(More)”.

Hiding Behind Machines: Artificial Agents May Help to Evade Punishment


Paper by Till Feier, Jan Gogoll & Matthias Uhl: “The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines than by delegating to other people. Our results imply that the availability of artificial agents could provide stronger incentives for decision-makers to delegate sensitive decisions…(More)”.

Technology rules? The advent of new technologies in the justice system


Report by The Justice and Home Affairs Committee (House of Lords): “In recent years, and without many of us realising it, Artificial Intelligence has begun to permeate every aspect of our personal and professional lives. We live in a world of big data; more and more decisions in society are being taken by machines using algorithms built from that data, be it in healthcare, education, business, or consumerism. Our Committee has limited its investigation to only one area–how these advanced technologies are used in our justice system. Algorithms are being used to improve crime detection, aid the security categorisation of prisoners, streamline entry clearance processes at our borders and generate new insights that feed into the entire criminal justice pipeline.

We began our work on the understanding that Artificial Intelligence (AI), used correctly, has the potential to improve people’s lives through greater efficiency, improved productivity, and in finding solutions to often complex problems. But while acknowledging the many benefits, we were taken aback by the proliferation of Artificial Intelligence tools potentially being used without proper oversight, particularly by police forces across the country. Facial recognition may be the best known of these new technologies but in reality there are many more already in use, with more being developed all the time.

When deployed within the justice system, AI technologies have serious implications for a person’s human rights and civil liberties. At what point could someone be imprisoned on the basis of technology that cannot be explained? Informed scrutiny is therefore essential to ensure that any new tools deployed in this sphere are safe, necessary, proportionate, and effective. This scrutiny is not happening. Instead, we uncovered a landscape, a new Wild West, in which new technologies are developing at a pace that public awareness, government and legislation have not kept up with.

Public bodies and all 43 police forces are free to individually commission whatever tools they like or buy them from companies eager to get in on the burgeoning AI market. And the market itself is worryingly opaque. We were told that public bodies often do not know much about the systems they are buying and will be implementing, due to the seller’s insistence on commercial confidentiality–despite the fact that many of these systems will be harvesting, and relying on, data from the general public.

This is particularly concerning in light of evidence we heard of dubious selling practices and claims made by vendors as to their products’ effectiveness which are often untested and unproven…(More)”.

Artificial Intelligence for Children


WEF Toolkit: “Children and youth are surrounded by AI in many of the products they use in their daily lives, from social media to education technology, video games, smart toys and speakers. AI determines the videos children watch online, their curriculum as they learn, and the way they play and interact with others.

This toolkit, produced by a diverse team of youth, technologists, academics and business leaders, is designed to help companies develop trustworthy artificial intelligence (AI) for children and youth and to help parents, guardians, children and youth responsibly buy and safely use AI products.

AI can be used to educate and empower children and youth and have a positive impact on society. But children and youth can be especially vulnerable to the potential risks posed by AI, including bias, cybersecurity and lack of accessibility. AI must be designed inclusively to respect the rights of the child user. Child-centric design can protect children and youth from the potential risks posed by the technology.

AI technology must be created so that it is both innovative and responsible. Responsible AI is safe, ethical, transparent, fair, accessible and inclusive. Designing responsible and trusted AI is good for consumers, businesses and society. Parents, guardians and adults all have the responsibility to carefully select ethically designed AI products and help children use them safely.

What is at stake? AI will determine the future of play, childhood, education and societies. Children and youth represent the future, so everything must be done to support them to use AI responsibly and address the challenges of the future.

This toolkit aims to help responsibly design, consume and use AI. It is designed to help companies, designers, parents, guardians, children and youth make sure that AI respects the rights of children and has a positive impact in their lives…(More)”.

Humans in the Loop


Paper by Rebecca Crootof, Margot E. Kaminski and W. Nicholson Price II: “From lethal drones to cancer diagnostics, complex and artificially intelligent algorithms are increasingly integrated into decisionmaking that affects human lives, raising challenging questions about the proper allocation of decisional authority between humans and machines. Regulators commonly respond to these concerns by putting a “human in the loop”: using law to require or encourage including an individual within an algorithmic decisionmaking process.

Drawing on our distinctive areas of expertise with algorithmic systems, we take a bird’s eye view to make three generalizable contributions to the discourse. First, contrary to the popular narrative, the law is already profoundly (and problematically) involved in governing algorithmic systems. Law may explicitly require or prohibit human involvement and law may indirectly encourage or discourage human involvement, all without regard to what we know about the strengths and weaknesses of human and algorithmic decisionmakers and the particular quirks of hybrid human-machine systems. Second, we identify “the MABA-MABA trap,” wherein regulators are tempted to address a panoply of concerns by “slapping a human in it” based on presumptions about what humans and algorithms are respectively better at doing, often without realizing that the new hybrid system needs its own distinct regulatory interventions. Instead, we suggest that regulators should focus on what they want the human to do—what role the human is meant to play—and design regulations to allow humans to play these roles successfully. Third, borrowing concepts from systems engineering and existing law regulating railroads, nuclear reactors, and medical devices, we highlight lessons for regulating humans in the loop as well as alternative means of regulating human-machine systems going forward….(More)”.

The ethical imperative to identify and address data and intelligence asymmetries


Article by Stefaan Verhulst in AI & Society: “The insight that knowledge, resulting from having access to (privileged) information or data, is power is more relevant today than ever before. The data age has redefined the very notion of knowledge and information (as well as power), leading to a greater reliance on dispersed and decentralized datasets as well as to new forms of innovation and learning, such as artificial intelligence (AI) and machine learning (ML). As Thomas Piketty (among others) has shown, we live in an increasingly stratified world, and our society’s socio-economic asymmetries are often grafted onto data and information asymmetries. As we have documented elsewhere, data access is fundamentally linked to economic opportunity, improved governance, better science and citizen empowerment. The need to address data and information asymmetries—and their resulting inequalities of political and economic power—is therefore emerging as among the most urgent ethical challenges of our era, yet often not recognized as such.

Even as awareness grows of this imperative, society and policymakers lag in their understanding of the underlying issue. Just what are data asymmetries? How do they emerge, and what form do they take? And how do data asymmetries accelerate information and other asymmetries? What forces and power structures perpetuate or deepen these asymmetries, and vice versa? I argue that it is a mistake to treat this problem as homogenous. In what follows, I suggest the beginning of a taxonomy of asymmetries. Although closely related, each one emerges from a different set of contingencies, and each is likely to require different policy remedies. The focus of this short essay is to start outlining these different types of asymmetries. Further research could deepen and expand the proposed taxonomy as well help define solutions that are contextually appropriate and fit for purpose….(More)”.