The challenges of protecting data and rights in the metaverse


Article by Urvashi Aneja: “Virtual reality systems work by capturing extensive biological data about a user’s body, including pupil dilation, eye movement, facial expressions, skin temperature, and emotional responses to stimuli. Spending just 20 minutes in a VR simulation leaves nearly 2 million unique recordings of body language.

Existing data protection frameworks are woefully inadequate for dealing with the privacy implications of these technologies. Data collection is involuntary and continuous, rendering the notion of consent almost impossible. Research also shows that a user could be correctly identified from just five minutes of VR data, with all personally identifiable information stripped, by a machine learning algorithm with 95% accuracy. This type of data isn’t covered by most biometrics laws.

But a lot more than individual privacy is at stake. Such data will enable what human rights lawyer Brittan Heller has called “biometric psychography,” referring to the gathering and use of biological data to reveal intimate details about a user’s likes, dislikes, preferences, and interests. In VR experiences, it is not only a user’s outward behavior that is captured, but also their emotional reactions to specific situations, through features such as pupil dilation or change in facial expressions…(More)”.

Artificial intelligence is creating a new colonial world order


Series by Karen Hao: “…Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. It would diminish the depth of past traumas to say the AI industry is repeating this violence today. But it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor….

MIT Technology Review’s new AI Colonialism series, which will be publishing throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.

In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also looks at ways to move away from these dynamics. In part three, we visit ride-hailing drivers in Indonesia who, by building power through community, are learning to resist algorithmic control and fragmentation. In part four, we end in Aotearoa, the Maori name for New Zealand, where an Indigenous couple are wresting back control of their community’s data to revitalize its language.

Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way….(More)”.

How Democracies Spy on Their Citizens 


Ronan Farrow at the New Yorker: “…Commercial spyware has grown into an industry estimated to be worth twelve billion dollars. It is largely unregulated and increasingly controversial. In recent years, investigations by the Citizen Lab and Amnesty International have revealed the presence of Pegasus on the phones of politicians, activists, and dissidents under repressive regimes. An analysis by Forensic Architecture, a research group at the University of London, has linked Pegasus to three hundred acts of physical violence. It has been used to target members of Rwanda’s opposition party and journalists exposing corruption in El Salvador. In Mexico, it appeared on the phones of several people close to the reporter Javier Valdez Cárdenas, who was murdered after investigating drug cartels. Around the time that Prince Mohammed bin Salman of Saudi Arabia approved the murder of the journalist Jamal Khashoggi, a longtime critic, Pegasus was allegedly used to monitor phones belonging to Khashoggi’s associates, possibly facilitating the killing, in 2018. (Bin Salman has denied involvement, and NSO said, in a statement, “Our technology was not associated in any way with the heinous murder.”) Further reporting through a collaboration of news outlets known as the Pegasus Project has reinforced the links between NSO Group and anti-democratic states. But there is evidence that Pegasus is being used in at least forty-five countries, and it and similar tools have been purchased by law-enforcement agencies in the United States and across Europe. Cristin Flynn Goodwin, a Microsoft executive who has led the company’s efforts to fight spyware, told me, “The big, dirty secret is that governments are buying this stuff—not just authoritarian governments but all types of governments.”…(More)”.

Police surveillance and facial recognition: Why data privacy is an imperative for communities of color


Paper by Nicol Turner Lee and Caitlin Chin: “Governments and private companies have a long history of collecting data from civilians, often justifying the resulting loss of privacy in the name of national security, economic stability, or other societal benefits. But it is important to note that these trade-offs do not affect all individuals equally. In fact, surveillance and data collection have disproportionately affected communities of color under both past and current circumstances and political regimes.

From the historical surveillance of civil rights leaders by the Federal Bureau of Investigation (FBI) to the current misuse of facial recognition technologies, surveillance patterns often reflect existing societal biases and build upon harmful and vicious cycles. Facial recognition and other surveillance technologies also enable more precise discrimination, especially as law enforcement agencies continue to make misinformed, predictive decisions around arrest and detainment that disproportionately impact marginalized populations.

In this paper, we present the case for stronger federal privacy protections with proscriptive guardrails for the public and private sectors to mitigate the high risks that are associated with the development and procurement of surveillance technologies. We also discuss the role of federal agencies in addressing the purposes and uses of facial recognition and other monitoring tools under their jurisdiction, as well as increased training for state and local law enforcement agencies to prevent the unfair or inaccurate profiling of people of color. We conclude the paper with a series of proposals that lean either toward clear restrictions on the use of surveillance technologies in certain contexts, or greater accountability and oversight mechanisms, including audits, policy interventions, and more inclusive technical designs….(More)”

Researcher Helps Create Big Data ‘Early Alarm’ for Ukraine Abuses


Article by Chris Carroll: “From searing images of civilians targeted by shelling to detailed accounts of sick children and their families fleeing nearby fighting to seek medical care, journalists have created a kaleidoscopic view of the suffering that has engulfed Ukraine since Russia invaded—but the news media can’t be everywhere.

Social media practically can be, however, and a University of Maryland researcher is part of a U.S.-Ukrainian multi-institutional team that’s harvesting data from Twitter and analyzing it with machine-learning algorithms. The result is a real-time system that provides a running account of what people in Ukraine are facing, constructed from their own accounts.

The project, Data for Ukraine, has been running for about three weeks, and has shown itself able to surface important events a few hours ahead of Western or even Ukrainian media sources. It focuses on four areas: humanitarian needs, displaced people, civilian resistance and human rights violations. In addition to simply showing spikes of credible tweets about certain subjects the team is tracking, the system also geolocates tweets—essentially mapping where events take place.

“It’s an early alarm system for human rights abuses,” said Ernesto Calvo, professor of government and politics and director of UMD’s Inter-Disciplinary Lab for Computational Social Science. “For it to work, we need to know two basic things: what is happening or being reported, and who is reporting those things.”

Calvo and his lab focus on the second of those two requirements, and constructed a “community detection” system to identify key nodes of Twitter users from which to draw data. Other team members with expertise in Ukrainian society and politics supplied him with a list of about 400 verified users who actively tweet on relevant topics. Then Calvo, who honed his approach analyzing social media from political and environmental crises in Latin America, and his team expanded and deepened the collection, drawing on connections and followers of the initial list so that millions of tweets per day now feed the system.

Nearly half of the captured tweets are in Ukrainian, 30% are in English and 20% are in Russian. Knowing whom to exclude—accounts created the day before the invasion, for instance, or those with few long-term connections—is key, Calvo said…(More)”.
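The collection strategy described above—start from a hand-curated seed list, expand through connections and followers, exclude accounts with thin histories, and group what remains into communities—can be sketched in a few lines. The snippet below is an illustrative assumption, not the Data for Ukraine team’s actual code; the seed format, the fetch_connections helper, and the exclusion thresholds are all hypothetical.

```python
# Hypothetical sketch of the seed-expansion and filtering steps described
# above; names, thresholds, and the fetch_connections helper are assumptions.
from datetime import datetime, timezone

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

INVASION_DATE = datetime(2022, 2, 24, tzinfo=timezone.utc)


def build_collection_graph(seed_accounts, fetch_connections):
    """Expand a curated seed list into a graph of accounts and their ties.

    seed_accounts: list of dicts with "id" and "created_at" (tz-aware datetime).
    fetch_connections: assumed helper returning the accounts a given user
    follows or frequently interacts with, in the same dict format.
    """
    graph = nx.Graph()
    for account in seed_accounts:
        graph.add_node(account["id"], created_at=account["created_at"])
        for neighbor in fetch_connections(account["id"]):
            graph.add_node(neighbor["id"], created_at=neighbor["created_at"])
            graph.add_edge(account["id"], neighbor["id"])
    return graph


def filter_low_credibility(graph, min_connections=5, min_account_age_days=30):
    """Drop accounts created just before the invasion or with few ties."""
    keep = [
        node
        for node, data in graph.nodes(data=True)
        if graph.degree(node) >= min_connections
        and (INVASION_DATE - data["created_at"]).days >= min_account_age_days
    ]
    return graph.subgraph(keep).copy()


def detect_communities(graph):
    """Group the retained accounts into communities for downstream tracking."""
    return list(greedy_modularity_communities(graph))
```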

Public Meetings Thwart Housing Reform Where It Is Needed Most


Interview with Katherine Levine Einstein by Jake Blumgart: “Public engagement can have downsides. Neighborhood participation in the housing permitting process makes existing political inequalities worse, limits housing supply and contributes to the affordability crisis….

In 2019, Katherine Levine Einstein and her co-authors at Boston University produced the first in-depth study of this dynamic, Neighborhood Defenders, providing a unique insight into how hyper-local democracy can produce warped land-use outcomes. Governing talked with her about the politics of delay, what kind of regulations hamper growth and when community meetings can still be an effective means of public feedback.

Governing: What could be wrong with a neighborhood meeting? Isn’t this democracy in its purest form? 

Katherine Levine Einstein: In this book, rather than look at things in their ideal form, we actually evaluated how they are working on the ground. We bring data to the question of whether neighborhood meetings are really providing community voice. One of the reasons that we think of them as this important cornerstone of American democracy is because they are supposedly providing us perspectives that are not widely heard, really amplifying the voices of neighborhood residents.

What we’re able to do in the book is to really bring home the idea that the people who are showing up are not actually representative of their broader communities and they are unrepresentative in really important ways. They’re much more likely to be opposed to new housing, and they’re demographically privileged on a number of dimensions….

What we find happens in practice is that even in less privileged places, these neighborhood meetings are actually amplifying more privileged voices. We study a variety of more disadvantaged places and what the dynamics of these meetings look like. The principles that hold in more affluent communities still play out in these less privileged places. You still hear from voices that are overwhelmingly opposed to new housing. The voices that are heard are much more likely to be homeowners, white and older…(More)”.

Decolonize Data


Essay by Nithya Ramanathan, Jim Fruchterman, Amy Fowler & Gabriele Carotti-Sha: “The social sector aims to empower communities with tools and knowledge to effect change for themselves, because community-driven change is more likely to drive sustained impact than attempts to force change from the outside. This commitment should include data, which is increasingly essential for generating social impact. Today the effective implementation and continuous improvement of social programs all but requires the collection and analysis of data.

But all too often, social sector practitioners, including researchers, extract data from individuals, communities, and countries for their own purposes, and do not even make it available to them, let alone enable them to draw their own conclusions from it. With data flows the power to make informed decisions.

It is therefore counterproductive, and painfully ironic, that we have ignored our fundamental principles when it comes to data. We see donors and leading practitioners making a sincere move to decolonize aid. However, if we are truly committed to decolonizing the practices in aid, then we must also examine the ownership and flow of data.

Decolonizing data would not only help ensure that the benefits of data accrue directly to the rightful data owners but also open up more intentional data sharing driven by the rightful data owners—the communities we claim to empower…(More)”.

How Native Americans Are Trying to Debug A.I.’s Biases


Alex V. Cipolle in The New York Times: “In September 2021, Native American technology students in high school and college gathered at a conference in Phoenix and were asked to create photo tags — word associations, essentially — for a series of images.

One image showed ceremonial sage in a seashell; another, a black-and-white photograph circa 1884, showed hundreds of Native American children lined up in uniform outside the Carlisle Indian Industrial School, one of the most prominent boarding schools run by the American government during the 19th and 20th centuries.

For the ceremonial sage, the students chose the words “sweetgrass,” “sage,” “sacred,” “medicine,” “protection” and “prayers.” They gave the photo of the boarding school tags with a different tone: “genocide,” “tragedy,” “cultural elimination,” “resiliency” and “Native children.”

The exercise was for the workshop Teaching Heritage to Artificial Intelligence Through Storytelling at the annual conference for the American Indian Science and Engineering Society. The students were creating metadata that could train a photo recognition algorithm to understand the cultural meaning of an image.

The workshop presenters — Chamisa Edmo, a technologist and citizen of the Navajo Nation, who is also Blackfeet and Shoshone-Bannock; Tracy Monteith, a senior Microsoft engineer and member of the Eastern Band of Cherokee Indians; and the journalist Davar Ardalan — then compared these answers with those produced by a major image recognition app.

For the ceremonial sage, the app’s top tag was “plant,” but other tags included “ice cream” and “dessert.” The app tagged the school image with “human,” “crowd,” “audience” and “smile” — the last a particularly odd descriptor, given that few of the children are smiling.

The image recognition app botched its task, Mr. Monteith said, because it didn’t have proper training data. Ms. Edmo explained that tagging results are often “outlandish” and “offensive,” recalling how one app identified a Native American person wearing regalia as a bird. And yet similar image recognition apps have identified with ease a St. Patrick’s Day celebration, Ms. Ardalan noted as an example, because of the abundance of data on the topic….(More)”.
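The mismatch the workshop surfaced can be made concrete by comparing the students’ tags with the app’s output as plain sets. The sketch below is hypothetical—the workshop did not publish code—and simply scores tag overlap with a Jaccard measure, using the tags reported in the article.

```python
# Hypothetical comparison of community-authored tags against a commercial
# image-tagging model's output, using the tags reported in the article.

def jaccard_overlap(community_tags, model_tags):
    """Fraction of shared tags between two sets (1.0 = identical, 0.0 = disjoint)."""
    a = {t.lower() for t in community_tags}
    b = {t.lower() for t in model_tags}
    return len(a & b) / len(a | b) if a | b else 0.0

ceremonial_sage_students = [
    "sweetgrass", "sage", "sacred", "medicine", "protection", "prayers",
]
ceremonial_sage_app = ["plant", "ice cream", "dessert"]

print(jaccard_overlap(ceremonial_sage_students, ceremonial_sage_app))
# -> 0.0: no shared tags, one rough signal of how much cultural context
#    the model's training data is missing.
```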

The first answer for food insecurity: data sovereignty


Interview by Brian Oaster: “For two years now, the COVID-19 pandemic has exacerbated almost every structural inequity in Indian Country. Food insecurity is high on that list.

Like other inequities, it’s an intergenerational product of dispossession and congressional underfunding — nothing new for Native communities. What is new, however, is the ability of Native organizations and sovereign nations to collectively study and understand the needs of the many communities facing the issue. The age of data sovereignty has (finally) arrived.

To that end, the Native American Agriculture Fund (NAAF) partnered with the Indigenous Food and Agriculture Initiative (IFAI) and the Food Research and Action Center (FRAC) to produce a special report, Reimagining Hunger Responses in Times of Crisis, which was released in January.

According to the report, 48% of the more than 500 Native respondents surveyed across the country agreed that “sometimes or often during the pandemic the food their household bought just didn’t last, and they didn’t have money to get more.” Food security and access were especially low among Natives with young children or elders at home, people in fair to poor health and those whose employment was disrupted by the pandemic. “Native households experience food insecurity at shockingly higher rates than the general public and white households,” the report noted.

It also detailed how, throughout the pandemic, Natives overwhelmingly turned to their tribal governments and communities — as opposed to state or federal programs — for help. State and federal programs, like the Supplemental Nutrition Assistance Program, or SNAP, don’t always mesh with the needs of rural reservations. A benefits card is useless if there’s no food store in your community. In response, tribes and communities came together and worked to get their people fed.

Understanding how and why will help pave the way for legislation that empowers tribes to provide for their own people, by using federal funding to build local agricultural infrastructure, for instance, instead of relying on assistance programs that don’t always work. HCN spoke with the Native American Agriculture Fund’s CEO, Toni Stanger-McLaughlin (Colville), to find out more…(More)”.

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence


NIST Report: “As individuals and communities interact in and with an environment that is increasingly virtual, they are often vulnerable to the commodification of their digital exhaust. Concepts and behavior that are ambiguous in nature are captured in this environment, quantified, and used to categorize, sort, recommend, or make decisions about people’s lives. While many organizations seek to utilize this information in a responsible manner, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in artificial intelligence (AI)….(More)”