Essay by Bhaskar Chakravorti: “The pandemic could have been the moment when AI made good on its promising potential. There was an unprecedented convergence of the need for fast, evidence-based decisions and large-scale problem-solving with datasets spilling out of every country in the world. Instead, AI failed in myriad, specific ways that underscore where this technology is still weak: Bad datasets, embedded bias and discrimination, susceptibility to human error, and a complex, uneven global context all caused critical failures. But these failures also offer lessons on how we can make AI better: 1) we need to find new ways to assemble comprehensive datasets and merge data from multiple sources, 2) there needs to be more diversity in data sources, 3) incentives must be aligned to ensure greater cooperation across teams and systems, and 4) we need international rules for sharing data…(More)”.
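One of the essay's lessons, assembling comprehensive datasets by merging data from multiple sources, is as much an engineering task as a policy one. Below is a minimal, purely illustrative Python sketch of harmonizing two hypothetical country feeds whose column names and date formats disagree before combining them; the feeds and values are invented, not drawn from the essay.

```python
# Illustrative only: merge two hypothetical case-count feeds that report the
# same quantity under different column names and date formats.
import pandas as pd

feed_a = pd.DataFrame({"date": ["2020-04-01"], "new_cases": [120], "region": ["A"]})
feed_b = pd.DataFrame({"day": ["01/04/2020"], "cases_reported": [95], "region": ["B"]})

# Harmonize column names and date formats before combining the sources.
feed_b = feed_b.rename(columns={"day": "date", "cases_reported": "new_cases"})
feed_b["date"] = pd.to_datetime(feed_b["date"], dayfirst=True).dt.strftime("%Y-%m-%d")

merged = pd.concat([feed_a, feed_b], ignore_index=True)
print(merged)
```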
A.I. Is Mastering Language. Should We Trust What It Says?
Steven Johnson at the New York Times: “You are sitting in a comfortable chair by the fire, on a cold winter’s night. Perhaps you have a mug of tea in hand, perhaps something stronger. You open a magazine to an article you’ve been meaning to read. The title suggested a story about a promising — but also potentially dangerous — new technology on the cusp of becoming mainstream, and after reading only a few sentences, you find yourself pulled into the story. A revolution is coming in machine intelligence, the author argues, and we need, as a society, to get better at anticipating its consequences. But then the strangest thing happens: You notice that the writer has, seemingly deliberately, omitted the very last word of the first .
The missing word jumps into your consciousness almost unbidden: “the very last word of the first paragraph.” There’s no sense of an internal search query in your mind; the word “paragraph” just pops out. It might seem like second nature, this filling-in-the-blank exercise, but doing it makes you think of the embedded layers of knowledge behind the thought. You need a command of the spelling and syntactic patterns of English; you need to understand not just the dictionary definitions of words but also the ways they relate to one another; you have to be familiar enough with the high standards of magazine publishing to assume that the missing word is not just a typo, and that editors are generally loath to omit key words in published pieces unless the author is trying to be clever — perhaps trying to use the missing word to make a point about your cleverness, how swiftly a human speaker of English can conjure just the right word.
Before you can pursue that idea further, you’re back into the article, where you find the author has taken you to a building complex in suburban Iowa. Inside one of the buildings lies a wonder of modern technology: 285,000 CPU cores yoked together into one giant supercomputer, powered by solar arrays and cooled by industrial fans. The machines never sleep: Every second of every day, they churn through innumerable calculations, using state-of-the-art techniques in machine intelligence that go by names like “stochastic gradient descent” and “convolutional neural networks.” The whole system is believed to be one of the most powerful supercomputers on the planet.
And what, you may ask, is this computational dynamo doing with all these prodigious resources? Mostly, it is playing a kind of game, over and over again, billions of times a second. And the game is called: Guess what the missing word is.…(More)”.
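The “game” Johnson describes is the core training objective behind modern language models: predict a missing or next word from its context. The toy sketch below makes that objective concrete with simple bigram counts; real systems instead use neural networks trained with stochastic gradient descent over vast corpora, so this is only an illustration of the task, not of the method.

```python
# Toy version of the "guess the missing word" game, using bigram counts
# instead of a neural network trained with stochastic gradient descent.
from collections import Counter, defaultdict

corpus = (
    "you notice that the writer has omitted the very last word of the first paragraph . "
    "the missing word jumps into your mind . the first paragraph of the article ends abruptly ."
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def guess_missing_word(context_word: str) -> str:
    """Return the most frequent continuation of `context_word` in the toy corpus."""
    candidates = bigrams.get(context_word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(guess_missing_word("first"))  # -> "paragraph"
```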
Decoding human behavior with big data? Critical, constructive input from the decision sciences
Paper by Konstantinos V. Katsikopoulos and Marc C. Canellas: “Big data analytics employs algorithms to uncover people’s preferences and values, and support their decision making. A central assumption of big data analytics is that it can explain and predict human behavior. We investigate this assumption, aiming to enhance the knowledge basis for developing algorithmic standards in big data analytics. First, we argue that big data analytics is by design atheoretical and does not provide process-based explanations of human behavior; thus, it is unfit to support deliberation that is transparent and explainable. Second, we review evidence from interdisciplinary decision science, showing that the accuracy of complex algorithms used in big data analytics for predicting human behavior is not consistently higher than that of simple rules of thumb. Rather, it is lower in situations such as predicting election outcomes, criminal profiling, and granting bail. Big data algorithms can be considered as candidate models for explaining, predicting, and supporting human decision making when they match, in transparency and accuracy, simple, process-based, domain-grounded theories of human behavior. Big data analytics can be inspired by behavioral and cognitive theory….(More)”.
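To make the paper's central comparison concrete, the sketch below contrasts a model fit on all available cues with a one-cue rule of thumb of the kind studied in the decision sciences. The data are synthetic and deliberately constructed so that a single cue carries most of the signal, which is the sort of environment where simple heuristics hold their own; nothing here comes from the paper itself.

```python
# Synthetic, illustrative comparison: a multi-cue statistical model versus a
# single-cue rule of thumb on a task where one cue carries most of the signal.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 5))                  # five noisy "cues"
y = X[:, 0] + 0.1 * rng.normal(size=n) > 0   # outcome driven mostly by the first cue

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Complex" model: logistic regression over all five cues.
complex_acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

# Simple rule of thumb: decide on the single most valid cue alone.
simple_acc = np.mean((X_te[:, 0] > 0) == y_te)

print(f"multi-cue model accuracy:   {complex_acc:.2f}")
print(f"one-cue heuristic accuracy: {simple_acc:.2f}")
```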
Cities Take the Lead in Setting Rules Around How AI Is Used
Jackie Snow at the Wall Street Journal: “As cities and states roll out algorithms to help them provide services like policing and traffic management, they are also racing to come up with policies for using this new technology.
AI, at its worst, can disadvantage already marginalized groups, adding to human-driven bias in hiring, policing and other areas. And its decisions can often be opaque—making it difficult to tell how to fix that bias, as well as other problems. (The Wall Street Journal discussed calls for regulation of AI, or at least greater transparency about how the systems work, with three experts.)
Cities are looking at a number of solutions to these problems. Some require disclosure when an AI model is used in decisions, while others mandate audits of algorithms, track where AI causes harm or seek public input before putting new AI systems in place.
Here are some ways cities are redefining how AI will work within their borders and beyond.
Explaining the algorithms: Amsterdam and Helsinki
One of the biggest complaints against AI is that it makes decisions that can’t be explained, which can lead to complaints about arbitrary or even biased results.
To let their citizens know more about the technology already in use in their cities, Amsterdam and Helsinki collaborated on websites that document how each city government uses algorithms to deliver services. The registry includes information on the data sets used to train an algorithm, a description of how an algorithm is used, how public servants use the results, the human oversight involved and how the city checks the technology for problems like bias.
Amsterdam has six algorithms fully explained—with a goal of 50 to 100—on the registry website, including how the city’s automated parking-control and trash-complaint reports work. Helsinki, which is only focusing on the city’s most advanced algorithms, also has six listed on its site, with another 10 to 20 left to put up.
“We needed to assess the risk ourselves,” says Linda van de Fliert, an adviser at Amsterdam’s Chief Technology Office. “And we wanted to show the world that it is possible to be transparent.”…(More)” See also AI Localism: The Responsible Use and Design of Artificial Intelligence at the Local Level
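At bottom, each registry entry is structured metadata about an algorithm: what it is for, what data trained it, and how humans oversee and audit it. A minimal sketch of what such a record could look like in code follows; the field names and example values are hypothetical, not the cities' actual schema.

```python
# Hypothetical registry record; the real Amsterdam and Helsinki registries
# define their own fields and publish them as web pages, not Python objects.
from dataclasses import dataclass

@dataclass
class AlgorithmRegistryEntry:
    name: str                     # e.g. "Automated parking control"
    purpose: str                  # what city service the algorithm supports
    training_datasets: list[str]  # data sets used to train or configure it
    human_oversight: str          # how public servants review its outputs
    bias_checks: str              # how the city tests for bias and other harms
    contact: str = ""             # who citizens can ask for more information

entry = AlgorithmRegistryEntry(
    name="Trash-complaint report routing",
    purpose="Prioritize and route citizen reports about waste in public space",
    training_datasets=["historical complaint reports", "district geodata"],
    human_oversight="City staff review routed reports before dispatching crews",
    bias_checks="Periodic audit of routing outcomes per district",
)
print(entry.name)
```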
Police surveillance and facial recognition: Why data privacy is an imperative for communities of color
Paper by Nicol Turner Lee and Caitlin Chin: “Governments and private companies have a long history of collecting data from civilians, often justifying the resulting loss of privacy in the name of national security, economic stability, or other societal benefits. But it is important to note that these trade-offs do not affect all individuals equally. In fact, surveillance and data collection have disproportionately affected communities of color under both past and current circumstances and political regimes.
From the historical surveillance of civil rights leaders by the Federal Bureau of Investigation (FBI) to the current misuse of facial recognition technologies, surveillance patterns often reflect existing societal biases and build upon harmful and vicious cycles. Facial recognition and other surveillance technologies also enable more precise discrimination, especially as law enforcement agencies continue to make misinformed, predictive decisions around arrest and detainment that disproportionately impact marginalized populations.
In this paper, we present the case for stronger federal privacy protections with proscriptive guardrails for the public and private sectors to mitigate the high risks that are associated with the development and procurement of surveillance technologies. We also discuss the role of federal agencies in addressing the purposes and uses of facial recognition and other monitoring tools under their jurisdiction, as well as increased training for state and local law enforcement agencies to prevent the unfair or inaccurate profiling of people of color. We conclude the paper with a series of proposals that lean either toward clear restrictions on the use of surveillance technologies in certain contexts, or greater accountability and oversight mechanisms, including audits, policy interventions, and more inclusive technical designs….(More)”
Co-designing algorithms for governance: Ensuring responsible and accountable algorithmic management of refugee camp supplies
Paper by Rianne Dekker et al: “There is increasing criticism on the use of big data and algorithms in public governance. Studies revealed that algorithms may reinforce existing biases and defy scrutiny by public officials using them and citizens subject to algorithmic decisions and services. In response, scholars have called for more algorithmic transparency and regulation. These are useful, but ex post solutions in which the development of algorithms remains a rather autonomous process. This paper argues that co-design of algorithms with relevant stakeholders from government and society is another means to achieve responsible and accountable algorithms that is largely overlooked in the literature. We present a case study of the development of an algorithmic tool to estimate the populations of refugee camps to manage the delivery of emergency supplies. This case study demonstrates how in different stages of development of the tool—data selection and pre-processing, training of the algorithm and post-processing and adoption—inclusion of knowledge from the field led to changes to the algorithm. Co-design supported responsibility of the algorithm in the selection of big data sources and in preventing reinforcement of biases. It contributed to accountability of the algorithm by making the estimations transparent and explicable to its users. They were able to use the tool for fitting purposes and used their discretion in the interpretation of the results. It is yet unclear whether this eventually led to better servicing of refugee camps…(More)”.
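The paper's three stages suggest natural points where co-design hooks could sit in such a pipeline. The sketch below is purely illustrative: the functions, data sources, and numbers are invented placeholders meant only to show field knowledge entering at data selection and at post-processing.

```python
# Invented placeholders showing where field knowledge could enter the three
# stages the paper describes: data selection, training, and post-processing.
from typing import Callable

def select_data(sources: list[str], field_review: Callable[[list[str]], list[str]]) -> list[str]:
    """Data selection: field staff can veto sources they consider unreliable or biased."""
    return field_review(sources)

def train_estimator(sources: list[str]) -> Callable[[str], int]:
    """Training: stand-in for fitting a population estimator on the chosen sources."""
    return lambda camp_id: 1000 + 10 * len(camp_id)  # dummy estimate, illustration only

def post_process(estimate: int, field_adjustment: int = 0) -> int:
    """Post-processing: field teams adjust estimates they can explain and verify."""
    return estimate + field_adjustment

# Example: field staff drop a source they know undercounts informal shelters.
sources = select_data(
    ["satellite imagery", "registration lists", "mobile network data"],
    field_review=lambda s: [x for x in s if x != "mobile network data"],
)
estimator = train_estimator(sources)
print(post_process(estimator("camp-A"), field_adjustment=150))
```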
Facial Recognition Goes to War
Kashmir Hill at the New York Times: “In the weeks after Russia invaded Ukraine and images of the devastation wrought there flooded the news, Hoan Ton-That, the chief executive of the facial recognition company Clearview AI, began thinking about how he could get involved.
He believed his company’s technology could offer clarity in complex situations in the war.
“I remember seeing videos of captured Russian soldiers and Russia claiming they were actors,” Mr. Ton-That said. “I thought if Ukrainians could use Clearview, they could get more information to verify their identities.”
In early March, he reached out to people who might help him contact the Ukrainian government. One of Clearview’s advisory board members, Lee Wolosky, a lawyer who has worked for the Biden administration, was meeting with Ukrainian officials and offered to deliver a message.
Mr. Ton-That drafted a letter explaining that his app “can instantly identify someone just from a photo” and that the police and federal agencies in the United States used it to solve crimes. That feature has brought Clearview scrutiny over concerns about privacy and questions about racism and other biases within artificial-intelligence systems.
The tool, which can identify a suspect caught on surveillance video, could be valuable to a country under attack, Mr. Ton-That wrote. He said the tool could identify people who might be spies, as well as deceased people, by comparing their faces against Clearview’s database of 20 billion faces from the public web, including from “Russian social sites such as VKontakte.”
Mr. Ton-That decided to offer Clearview’s services to Ukraine for free, as reported earlier by Reuters. Now, less than a month later, the New York-based Clearview has created more than 200 accounts for users at five Ukrainian government agencies, which have conducted more than 5,000 searches. Clearview has also translated its app into Ukrainian.
“It’s been an honor to help Ukraine,” said Mr. Ton-That, who provided emails from officials from three agencies in Ukraine, confirming that they had used the tool. It has identified dead soldiers and prisoners of war, as well as travelers in the country, confirming the names on their official IDs. The fear of spies and saboteurs in the country has led to heightened paranoia.
According to one email, Ukraine’s national police obtained two photos of dead Russian soldiers, which have been viewed by The New York Times, on March 21. One dead man had identifying patches on his uniform, but the other did not, so the ministry ran his face through Clearview’s app…(More)”.
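The underlying technique, identifying a face by comparing its representation against a large database, can be sketched in a few lines. Real systems compute embeddings with trained face-recognition networks over billions of images; the vectors below are random stand-ins used only to show the nearest-match-with-threshold logic, not Clearview's actual pipeline.

```python
# Illustrative identification by database comparison with random stand-in
# embeddings; real systems extract embeddings with a trained face network.
from typing import Optional

import numpy as np

rng = np.random.default_rng(1)

# Pretend gallery: one embedding per known face (Clearview's holds ~20 billion).
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

def identify(query_embedding: np.ndarray, threshold: float = 0.8) -> Optional[str]:
    """Return the best cosine-similarity match above `threshold`, else None."""
    q = query_embedding / np.linalg.norm(query_embedding)
    best_name, best_score = None, -1.0
    for name, emb in gallery.items():
        score = float(q @ (emb / np.linalg.norm(emb)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A noisy copy of a known face should match that face.
query = gallery["person_42"] + 0.05 * rng.normal(size=128)
print(identify(query))  # expected: person_42
```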
Google is using AI to better detect searches from people in crisis
Article by James Vincent: “In a personal crisis, many people turn to an impersonal source of support: Google. Every day, the company fields searches on topics like suicide, sexual assault, and domestic abuse. But Google wants to do more to direct people to the information they need, and says new AI techniques that better parse the complexities of language are helping.
Specifically, Google is integrating its latest machine learning model, MUM, into its search engine to “more accurately detect a wider range of personal crisis searches.” The company unveiled MUM at its I/O conference last year, and has since used it to augment search with features that try to answer questions connected to the original search.
In this case, MUM will be able to spot search queries related to difficult personal situations that earlier search tools could not, says Anne Merritt, a Google product manager for health and information quality.
“MUM is able to help us understand longer or more complex queries like ‘why did he attack me when i said i dont love him,’” Merritt told The Verge. “It may be obvious to humans that this query is about domestic violence, but long, natural-language queries like these are difficult for our systems to understand without advanced AI.”
Other examples of queries that MUM can react to include “most common ways suicide is completed” (a search Merritt says earlier systems “may have previously understood as information seeking”) and “Sydney suicide hot spots” (where, again, earlier responses would have likely returned travel information — ignoring the mention of “suicide” in favor of the more popular query for “hot spots”). When Google detects such crisis searches, it responds with an information box telling users “Help is available,” usually accompanied by a phone number or website for a mental health charity like Samaritans.
In addition to using MUM to respond to personal crises, Google says it’s also using an older AI language model, BERT, to better identify searches looking for explicit content like pornography. By leveraging BERT, Google says it’s “reduced unexpected shocking results by 30%” year-on-year. However, the company was unable to share absolute figures for how many “shocking results” its users come across on average, so while this is a comparative improvement, it gives no indication of how big or small the problem actually is.
Google is keen to tell you that AI is helping the company improve its search products — especially at a time when there’s a building narrative that “Google search is dying.” But integrating this technology comes with its downsides, too.
Many AI experts warn that Google’s increasing use of machine learning language models could surface new problems for the company, like introducing biases and misinformation into search results. AI systems are also opaque, offering engineers restricted insight into how they come to certain conclusions…(More)”.
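Stripped of scale, the task MUM is applied to here is query-intent classification: decide whether a search signals a personal crisis and, if so, surface help resources. The deliberately tiny sketch below uses a bag-of-words classifier on invented example queries; Google's multilingual transformer is vastly more capable, and none of this reflects Google's actual implementation.

```python
# Toy query-intent classifier on invented examples; not Google's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

queries = [
    "help after my partner hit me",
    "i dont feel safe at home with him",
    "best hiking hot spots near sydney",
    "cheap flights to sydney",
]
labels = ["crisis", "crisis", "other", "other"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(queries, labels)

# A query flagged as a likely crisis would trigger a "Help is available" box.
for query in ["my partner hit me again", "hiking near sydney"]:
    print(query, "->", classifier.predict([query])[0])
```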
Is AI Good for the Planet?
Book by Benedetta Brevini: “Artificial intelligence (AI) is presented as a solution to the greatest challenges of our time, from global pandemics and chronic diseases to cybersecurity threats and the climate crisis. But AI also contributes to the climate crisis by running on technology that depletes scarce resources and by relying on data centres that demand excessive energy use.
Is AI Good for the Planet? brings the climate crisis to the centre of debates around AI, exposing its environmental costs and forcing us to reconsider our understanding of the technology. It reveals why we should no longer ignore the environmental problems generated by AI. Embracing a green agenda for AI that puts the climate crisis at centre stage is our urgent priority.
Engaging and passionately written, this book is essential reading for scholars and students of AI, environmental studies, politics, and media studies and for anyone interested in the connections between technology and the environment…(More)”.
Hiding Behind Machines: Artificial Agents May Help to Evade Punishment
Paper by Till Feier, Jan Gogoll & Matthias Uhl: “The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines than by delegating to other people. Our results imply that the availability of artificial agents could provide stronger incentives for decision-makers to delegate sensitive decisions…(More)”.