Digitally Invisible: How the Internet is Creating the New Underclass


Book by Nicol Turner Lee: “President Joe Biden has repeatedly said that the United States would close the digital divide under his leadership. However, the divide still affects people and communities across the country. The complex and persistent reality is that millions of residents live in digital deserts, and many more face disproportionate difficulties when it comes to getting and staying online, especially people of color, seniors, rural residents, and farmers in remote areas.

Economic and health disparities are worsening in rural communities without available internet access. Students living in urban digital deserts with little technology exposure are ill prepared to compete for emerging occupations. Even seniors struggle to navigate the aging process without access to online information and remote care.

In this book, Nicol Turner Lee, a leading expert on the American digital divide, uses personal stories from individuals around the country to show how the emerging digital underclass is navigating the spiraling online economy, while sharing their joys and hopes for an equitable and just future.

Turner Lee argues that achieving digital equity is crucial for the future of America’s global competitiveness and requires radical responses to offset the unintended consequences of increasing digitization. In the end, “Digitally Invisible” proposes a pathway to more equitable access to existing and emerging technologies, while encouraging readers to weigh in on this shared goal…(More)”.

The Data That Powers A.I. Is Disappearing Fast


Article by Kevin Roose: “For years, the people building powerful artificial intelligence systems have used enormous troves of text, images and videos pulled from the internet to train their models.

Now, that data is drying up.

Over the past year, many of the most important web sources used for training A.I. models have restricted the use of their data, according to a study published this week by the Data Provenance Initiative, an M.I.T.-led research group.

The study, which looked at 14,000 web domains that are included in three commonly used A.I. training data sets, discovered an “emerging crisis in consent,” as publishers and online platforms have taken steps to prevent their data from being harvested.

The researchers estimate that in the three data sets — called C4, RefinedWeb and Dolma — 5 percent of all data, and 25 percent of data from the highest-quality sources, has been restricted. Those restrictions are set up through the Robots Exclusion Protocol, a decades-old method for website owners to prevent automated bots from crawling their pages using a file called robots.txt.

The study also found that as much as 45 percent of the data in one set, C4, had been restricted by websites’ terms of service.

“We’re seeing a rapid decline in consent to use data across the web that will have ramifications not just for A.I. companies, but for researchers, academics and noncommercial entities,” said Shayne Longpre, the study’s lead author, in an interview.

Data is the main ingredient in today’s generative A.I. systems, which are fed billions of examples of text, images and videos. Much of that data is scraped from public websites by researchers and compiled in large data sets, which can be downloaded and freely used, or supplemented with data from other sources…(More)”.
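
A note on the mechanism: the Robots Exclusion Protocol described above is advisory and simple enough to check programmatically. As a minimal sketch (the robots.txt contents and bot names are hypothetical), Python's standard-library urllib.robotparser can tell a crawler whether it may fetch a page:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks an AI-training crawler
# while leaving the site open to all other bots.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The AI crawler is refused; a generic crawler is not.
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/1"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/articles/1"))  # True
```

Compliance is voluntary: robots.txt expresses a preference that well-behaved crawlers choose to honor, which is why the study frames these restrictions as a matter of consent rather than enforcement.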

Governance of deliberative mini-publics: emerging consensus and divergent views


Paper by Lucy J. Parry, Nicole Curato, and John S. Dryzek: “Deliberative mini-publics are forums for citizen deliberation composed of randomly selected citizens convened to yield policy recommendations. These forums have proliferated in recent years but there are no generally accepted standards to govern their practice. Should there be? We answer this question by bringing the scholarly literature on citizen deliberation into dialogue with the lived experience of the people who study, design and implement mini-publics. We use Q methodology to locate five distinct perspectives on the integrity of mini-publics, and map the structure of agreement and dispute across them. We find that, across the five viewpoints, there is emerging consensus as well as divergence on integrity issues, with disagreement over what might be gained or lost by adapting common standards of practice, and possible sources of integrity risks. This article provides an empirical foundation for further discussion on integrity standards in the future…(More)”.
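
For readers unfamiliar with the method: Q methodology inverts conventional factor analysis by correlating people rather than variables, so each participant's full ranking of statements (a "Q-sort") is treated as one observation and the extracted factors correspond to shared viewpoints. A minimal illustrative sketch on synthetic data (the sorts below are randomly generated, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Q-sorts: 12 participants each rank 20 statements
# from -3 (most disagree) to +3 (most agree).
# Rows are statements, columns are participants.
n_statements, n_participants = 20, 12
q_sorts = rng.integers(-3, 4, size=(n_statements, n_participants)).astype(float)

# Correlate participants with one another (a person-by-person matrix),
# the step that distinguishes Q from ordinary "R" factor analysis.
person_corr = np.corrcoef(q_sorts.T)

# Factor the correlation matrix; each retained factor represents
# a distinct perspective shared by the participants loading on it.
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1][:2]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

print("Participant loadings on the first two viewpoints:")
print(np.round(loadings, 2))
```

In the paper's study, this style of analysis yielded five factors, that is, the five distinct perspectives on mini-public integrity the authors describe.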

The Five Stages Of AI Grief


Essay by Benjamin Bratton: “Alignment” and “human-centered AI” are just words representing our hopes and fears about the sense that AI is out of control — but also about the idea that complex technologies were never under human control to begin with. For reasons more political than perceptive, some insist that “AI” is not even “real,” that it is just math or just an ideological construction of capitalism turning itself into a naturalized fact. Some critics are clearly very angry at the all-too-real prospects of pervasive machine intelligence. Others recognize the reality of AI but are convinced it can be controlled by legislative sessions, policy papers and community workshops. This does not ameliorate the depression felt by still others, who foresee existential catastrophe.

All these reactions may confuse those who see the evolution of machine intelligence, and the artificialization of intelligence itself, as an overdetermined consequence of deeper developments. What to make of these responses?

Sigmund Freud used the term “Copernican” to describe modern decenterings of the human from a place of intuitive privilege. After Nicolaus Copernicus and Charles Darwin, he nominated psychoanalysis as the third such revolution. He also characterized the response to such decenterings as “traumas.”

Trauma brings grief. This is normal. In her 1969 book, “On Death and Dying,” the Swiss psychiatrist Elisabeth Kübler-Ross identified the “five stages of grief”: denial, anger, bargaining, depression and acceptance. Perhaps Copernican Traumas are no different…(More)”.

The Department of Everything


Article by Stephen Akey: “How do you find the life expectancy of a California condor? Google it. Or the gross national product of Morocco? Google it. Or the final resting place of Tom Paine? Google it. There was a time, however—not all that long ago—when you couldn’t Google it or ask Siri or whatever cyber equivalent comes next. You had to do it the hard way—by consulting reference books, indexes, catalogs, almanacs, statistical abstracts, and myriad other printed sources. Or you could save yourself all that time and trouble by taking the easiest available shortcut: You could call me.

From 1984 to 1988, I worked in the Telephone Reference Division of the Brooklyn Public Library. My seven or eight colleagues and I spent the days (and nights) answering exactly such questions. Our callers were as various as New York City itself: copyeditors, fact checkers, game show aspirants, journalists, bill collectors, bet settlers, police detectives, students and teachers, the idly curious, the lonely and loquacious, the park bench crazies, the nervously apprehensive. (This last category comprised many anxious patients about to undergo surgery who called us for background checks on their doctors.) There were telephone reference divisions in libraries all over the country, but this being New York City, we were an unusually large one with an unusually heavy volume of calls. And if I may say so, we were one of the best. More than one caller told me that we were a legend in the world of New York magazine publishing…(More)”.

Reliability of U.S. Economic Data Is in Jeopardy, Study Finds


Article by Ben Casselman: “A report says new approaches and increased spending are needed to ensure that government statistics remain dependable and free of political influence.

Federal Reserve officials use government data to help determine when to raise or lower interest rates. Congress and the White House use it to decide when to extend jobless benefits or send out stimulus payments. Investors place billions of dollars worth of bets that are tied to monthly reports on job growth, inflation and retail sales.

But a new study says the integrity of that data is in increasing jeopardy.

The report, issued on Tuesday by the American Statistical Association, concludes that government statistics are reliable right now. But that could soon change, the study warns, citing factors including shrinking budgets, falling survey response rates and the potential for political interference.

The authors — statisticians from George Mason University, the Urban Institute and other institutions — likened the statistical system to physical infrastructure like highways and bridges: vital, but often ignored until something goes wrong.

“We do identify this sort of downward spiral as a threat, and that’s what we’re trying to counter,” said Nancy Potok, who served as chief statistician of the United States from 2017 to 2019 and was one of the report’s authors. “We’re not there yet, but if we don’t do something, that threat could become a reality, and in the not-too-distant future.”

The report, “The Nation’s Data at Risk,” highlights the threats facing statistics produced across the federal government, including data on education, health, crime and demographic trends.

But the risks to economic data are particularly notable because of the attention it receives from policymakers and investors. Most of that data is based on surveys of households or businesses. And response rates to government surveys have plummeted in recent years, as they have for private polls. The response rate to the Current Population Survey — the monthly survey of about 60,000 households that is the basis for the unemployment rate and other labor force statistics — has fallen to about 70 percent in recent months, from nearly 90 percent a decade ago…(More)”.
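
Why falling response rates threaten reliability is easy to demonstrate: if the people who stop responding differ systematically from those who keep responding, the estimate drifts even though the sample stays large. A toy simulation (all rates below are invented for illustration, not actual CPS parameters):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy population with a true unemployment rate of 6%, where the
# unemployed are assumed (hypothetically) less likely to respond.
population = rng.random(1_000_000) < 0.06  # True = unemployed
print(f"True rate:            {population.mean():.2%}")

def survey_estimate(p_respond_employed, p_respond_unemployed):
    """Unemployment rate measured among survey respondents only."""
    responds = np.where(
        population,
        rng.random(population.size) < p_respond_unemployed,
        rng.random(population.size) < p_respond_employed,
    )
    return population[responds].mean()

# The same response gap between groups biases the estimate a little
# at ~90% overall response, and considerably more at ~70%.
print(f"Estimate, ~90% resp.: {survey_estimate(0.91, 0.80):.2%}")
print(f"Estimate, ~70% resp.: {survey_estimate(0.72, 0.50):.2%}")
```

Nonresponse adjustments can correct for some of this drift, but only along characteristics the statistical agencies can observe and model, which is why falling response rates appear in the report as a structural threat rather than a nuisance.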

Precision public health in the era of genomics and big data


Paper by Megan C. Roberts et al: “Precision public health (PPH) considers the interplay between genetics, lifestyle and the environment to improve disease prevention, diagnosis and treatment on a population level—thereby delivering the right interventions to the right populations at the right time. In this Review, we explore the concept of PPH as the next generation of public health. We discuss the historical context of using individual-level data in public health interventions and examine recent advancements in how data from human and pathogen genomics and social, behavioral and environmental research, as well as artificial intelligence, have transformed public health. Real-world examples of PPH are discussed, emphasizing how these approaches are becoming a mainstay in public health, as well as outstanding challenges in their development, implementation and sustainability. Data sciences, ethical, legal and social implications research, capacity building, equity research and implementation science will have a crucial role in realizing the potential for ‘precision’ to enhance traditional public health approaches…(More)”.

Integrating Artificial Intelligence into Citizens’ Assemblies: Benefits, Concerns and Future Pathways


Paper by Sammy McKinney: “Interest in how Artificial Intelligence (AI) could be used within citizens’ assemblies (CAs) is emerging amongst scholars and practitioners alike. In this paper, I make four contributions at the intersection of these burgeoning fields. First, I propose an analytical framework to guide evaluations of the benefits and limitations of AI applications in CAs. Second, I map out eleven ways that AI, especially large language models (LLMs), could be used across a CA’s full lifecycle. This introduces novel ideas for AI integration into the literature and synthesises existing proposals to provide the most detailed analytical breakdown of AI applications in CAs to date. Third, drawing on relevant literature, four key informant interviews, and the Global Assembly on the Ecological and Climate Crisis as a case study, I apply my analytical framework to assess the desirability of each application. This provides insight into how AI could be deployed to address existing challenges facing CAs today, as well as the concerns that arise with AI integration. Fourth, bringing my analyses together, I argue that AI integration into CAs brings the potential to enhance their democratic quality and institutional capacity, but realising this requires the deliberative community to proceed cautiously, effectively navigate challenging trade-offs, and mitigate important concerns that arise with AI integration. Ultimately, this paper provides a foundation that can guide future research concerning AI integration into CAs and other forms of democratic innovation…(More)”.

An Algorithm Told Police She Was Safe. Then Her Husband Killed Her.


Article by Adam Satariano and Roser Toll Pifarré: “Spain has become dependent on an algorithm to combat gender violence, with the software so woven into law enforcement that it is hard to know where its recommendations end and human decision-making begins. At its best, the system has helped police protect vulnerable women and, overall, has reduced the number of repeat attacks in domestic violence cases. But the reliance on VioGén has also resulted in victims whose risk levels were miscalculated being attacked again — sometimes with fatal consequences.

Spain now has 92,000 active cases of gender violence victims who were evaluated by VioGén, with most of them — 83 percent — classified as facing little risk of being hurt by their abuser again. Yet roughly 8 percent of women who the algorithm found to be at negligible risk and 14 percent at low risk have reported being harmed again, according to Spain’s Interior Ministry, which oversees the system.

At least 247 women have also been killed by their current or former partner since 2007 after being assessed by VioGén, according to government figures. While that is a tiny fraction of gender violence cases, it points to the algorithm’s flaws. The New York Times found that in a judicial review of 98 of those homicides, 55 of the slain women were scored by VioGén as negligible or low risk for repeat abuse…(More)”.
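
The Times’ review is, in effect, a calibration audit: group cases by the risk tier the algorithm assigned, then compare each tier’s label against the rate of repeat harm actually observed. A minimal sketch of that kind of check on synthetic data (the caseload below is fabricated for illustration; only the 8 and 14 percent repeat-harm rates echo figures reported above, and nothing here reflects VioGén’s actual scores or records):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic caseload: each case gets an assigned risk tier and an
# outcome flag for whether the victim reported being harmed again.
tiers = np.array(["negligible", "low", "medium", "high"])
assigned = rng.choice(tiers, size=5_000, p=[0.60, 0.23, 0.12, 0.05])

# Hypothetical repeat-harm probabilities per tier; the first two
# mirror the 8% and 14% rates cited in the article.
repeat_harm_p = {"negligible": 0.08, "low": 0.14, "medium": 0.25, "high": 0.45}
harmed = np.array([rng.random() < repeat_harm_p[t] for t in assigned])

# A well-calibrated system would show observed rates consistent with
# each tier's label; a "negligible" tier should sit near zero.
for tier in tiers:
    mask = assigned == tier
    print(f"{tier:>10}: {mask.sum():5d} cases, "
          f"{harmed[mask].mean():.1%} repeat harm observed")
```

An observed repeat-harm rate of 8 percent inside a tier labeled “negligible” is precisely the kind of mismatch the article documents.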

Drivers of Trust in Public Institutions


Press Release: “In an increasingly challenging environment – marked by successive economic shocks, rising protectionism, the war in Europe and ongoing conflicts in the Middle East, as well as structural challenges and disruptions caused by rapid technological developments, climate change and population aging – 44% of respondents now have low or no trust in their national government, surpassing the 39% of respondents who express high or moderately high trust in national government, according to a new OECD report.  

The report, OECD Survey on Drivers of Trust in Public Institutions – 2024 Results, presents findings from the second OECD Trust Survey, conducted in October and November 2023 across 30 Member countries. The biennial report offers a comprehensive analysis of current trust levels and their drivers across countries and public institutions. 

This edition of the Trust Survey confirms the previous finding that socio-economic and demographic factors, as well as a sense of having a say in decision making, affect trust. For example, 36% of women reported high or moderately high trust in government, compared to 43% of men. The most significant drop in trust since 2021 is seen among women and those with lower levels of education. The trust gap is largest between those who feel they have a say in what the government does and those who feel they do not. Among those who report having a say, 69% report high or moderately high trust in their national government, whereas among those who feel they do not, only 22% do…(More)”.