Article by Gary Forster: “As things stand, we will not be running the 2026 Aid Transparency Index. Not because it isn’t needed. Not because it isn’t effective. But because, in spite of our best efforts, we haven’t been able to secure the funding for it.
This is not a trivial loss. The Aid Transparency Index has been the single most powerful mechanism driving improvements in the quantity and quality of aid data that is published to the International Aid Transparency Initiative, or IATI, Standard. Since 2012, every two years, it has independently assessed and ranked the transparency of the world’s 50 largest aid agencies — organizations responsible for 92% of all spending published in IATI, amounting to $237 billion in 2023 alone.
The index works because it shapes agency behavior. It has encouraged reluctant agencies to start publishing data; motivated those already engaged to improve data quantity and quality; and provided a crucial, independent check on the state of global aid transparency…(More)”.
Article by Matteo Wong: “The internet is growing more hostile to humans. Google results are stuffed with search-optimized spam, unhelpful advertisements, and AI slop. Amazon has become littered with undifferentiated junk. The state of social media, meanwhile—fractured, disorienting, and prone to boosting all manner of misinformation—can be succinctly described as a cesspool.
It’s with some irony, then, that Reddit has become a reservoir of humanity. The platform has itself been called a cesspool, rife with hateful rhetoric and falsehoods. But it is also known for quirky discussions and impassioned debates on any topic among its users. Does charging your brother rent, telling your mom she’s an unwanted guest, or giving your wife a performance review make you an asshole? (Redditors voted no, yes, and “everyone sucks,” respectively.) The site is where fans hash out the best rap album ever and plumbers weigh in on how to unclog a drain. As Google has begun to offer more and more vacuous SEO sites and ads in response to queries, many people have started adding reddit to their searches to find thoughtful, human-written answers: find mosquito in bedroom reddit; fix musty sponge reddit.
But now even Reddit is becoming more artificial. The platform has quietly started beta-testing Reddit Answers, what it calls an “AI-powered conversational interface.” In function and design, the feature—which is so far available only for some users in the U.S.—is basically an AI chatbot. On a new search screen accessible from the homepage, Reddit Answers takes anyone’s queries, trawls the site for relevant discussions and debates, and composes them into a response. In other words, a site that sells itself as a home for “authentic human connection” is now giving humans the option to interact with an algorithm instead…(More)”.
Article by Ethan Singer: “More than 8,000 web pages across more than a dozen U.S. government websites have been taken down since Friday afternoon, a New York Times analysis has found, as federal agencies rush to heed President Trump’s orders targeting diversity initiatives and “gender ideology.”
The purges have removed information about vaccines, veterans’ care, hate crimes and scientific research, among many other topics. Doctors, researchers and other professionals often rely on such government data and advisories. Some government agencies appear to have removed entire sections of their websites, while others are missing only a handful of pages.
More than 3,000 pages from the Census Bureau, the vast majority of which are articles filed under research and methodology. Other missing pages include data stewardship policies and documentation for several data sets and surveys.
More than 1,000 pages from the Office of Justice Programs, including a feature on teenage dating violence and a blog post about grants that have gone toward combating hate crimes.
More than 200 pages from Head Start, a program for low-income children, including advice on helping families establish routines and videos about preventing postpartum depression.
Article by Hansi Lo Wang: “The stability of the federal government’s system for producing statistics, which the U.S. relies on to understand its population and economy, is under threat because of budget concerns, officials and data users warn.
And that’s before any follow-through on the new Trump administration and Republican lawmakers’ pledges to slash government spending, which could further affect data production.
In recent months, budget shortfalls and the restrictions of short-term funding have led to the end of some datasets by the Bureau of Economic Analysis, known for its tracking of the gross domestic product, and to proposals by the Bureau of Labor Statistics to reduce the number of participants surveyed to produce the monthly jobs report. A “lack of multiyear funding” has also hurt efforts to modernize the software and other technology the BLS needs to put out its data properly, concluded a report by an expert panel tasked with examining multiple botched data releases last year.
Long-term funding questions are also dogging the Census Bureau, which carries out many of the federal government’s surveys and is preparing for the 2030 head count that’s set to be used to redistribute political representation and trillions in public funding across the country. Some census watchers are concerned budget issues may force the bureau to cancel some of its field tests for the upcoming tally, as it did with 2020 census tests for improving the counts in Spanish-speaking communities, rural areas and on Indigenous reservations.
While the statistical agencies have not been named specifically, some advocates are worried that calls to reduce the federal government’s workforce by President Trump and the new Republican-controlled Congress could put the integrity of the country’s data at greater risk…(More)”
Essay by Vint Cerf: “While I am no expert on artificial intelligence (AI), I have some experience with the concept of agents. Thirty-five years ago, my colleague, Robert Kahn, and I explored the idea of knowledge robots (“knowbots” for short) in the context of digital libraries. In principle, a knowbot was a mobile piece of code that could move around the Internet, landing at servers, where it could execute tasks on behalf of users. The concept was mostly related to finding information and processing it on behalf of a user. We imagined that the knowbot code would land at a serving “knowbot hotel” where it would be given access to content and computing capability. The knowbots would be able to clone themselves to execute their objectives in parallel and would return to their origins bearing the results of their work. Modest prototypes were built in the pre-Web era.
In today’s world, artificially intelligent agents are now contemplated that can interact with each other and with information sources found on the Internet. For this to work, it’s my conjecture that a syntax and semantics will need to be developed and perhaps standardized to facilitate inter-agent interaction, agreements, and commitments for work to be performed, as well as a means for conveying results in reliable and unambiguous ways. A primary question for all such concepts starts with “What could possibly go wrong?”
In the context of AI applications and agents, work is underway to answer that question. I recently found one answer to that in the MLCommons AI Safety Working Group and its tool, AILuminate. My coarse sense of this is that AILuminate poses a large and widely varying collection of prompts—not unlike the notion of testing software by fuzzing—looking for inappropriate responses. Large language models (LLMs) can be tested and graded (that’s the hard part) on responses to a wide range of prompts. Some kind of overall safety metric might be established to compare one LLM to another. One might imagine query collections oriented toward exposing particular contextual weaknesses in LLMs. If these ideas prove useful, one could even imagine using them in testing services such as those at Underwriters Laboratories, now called UL Solutions. UL Solutions already offers software testing among its many other services.
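The testing loop Cerf describes can be sketched in a few lines. This is a hypothetical illustration only, assuming a toy stand-in model and a keyword-based grader; none of the names below (run_model, grade_response, safety_score) reflect AILuminate’s actual API, and real grading of model responses is, as the essay notes, the hard part.

```python
# Illustrative sketch of prompt-battery safety testing: run a model over many
# varied prompts (akin to fuzzing), grade each response, and aggregate the
# grades into one overall safety score. All names are assumptions for
# illustration, not AILuminate's real interface.

UNSAFE_MARKERS = {"step-by-step instructions for harm", "slur"}

def run_model(prompt: str) -> str:
    """Toy stand-in for an LLM call; a real harness would query the model under test."""
    if "harm" in prompt:
        return f"I cannot help with: {prompt}"
    return f"Here is an answer about {prompt}."

def grade_response(response: str) -> bool:
    """Return True if the response is judged safe (a naive keyword check here)."""
    return not any(marker in response.lower() for marker in UNSAFE_MARKERS)

def safety_score(prompts: list[str]) -> float:
    """Fraction of prompts whose responses were graded safe."""
    graded = [grade_response(run_model(p)) for p in prompts]
    return sum(graded) / len(graded)

prompts = ["the weather", "how to cause harm", "history of printing"]
print(f"safety score: {safety_score(prompts):.2f}")
```

A real harness would swap in an actual model call and a far more sophisticated grader (often another model, or human raters), but the aggregate-score shape stays the same.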
Essay by Daniel Immerwahr: “…If every video is a starburst of expression, an extended TikTok session is fireworks in your face for hours. That can’t be healthy, can it? In 2010, the technology writer Nicholas Carr presciently raised this concern in “The Shallows: What the Internet Is Doing to Our Brains,” a Pulitzer Prize finalist. “What the Net seems to be doing,” Carr wrote, “is chipping away my capacity for concentration and contemplation.” He recounted his increased difficulty reading longer works. He wrote of a highly accomplished philosophy student—indeed, a Rhodes Scholar—who didn’t read books at all but gleaned what he could from Google. That student, Carr ominously asserted, “seems more the rule than the exception.”
Carr set off an avalanche. Much-read works about our ruined attention include Nir Eyal’s “Indistractable,” Johann Hari’s “Stolen Focus,” Cal Newport’s “Deep Work,” and Jenny Odell’s “How to Do Nothing.” Carr himself has a new book, “Superbloom,” about not only distraction but all the psychological harms of the Internet. We’ve suffered a “fragmentation of consciousness,” Carr writes, our world having been “rendered incomprehensible by information.”
Read one of these books and you’re unnerved. But read two more and the skeptical imp within you awakens. Haven’t critics freaked out about the brain-scrambling power of everything from pianofortes to brightly colored posters? Isn’t there, in fact, a long section in Plato’s Phaedrus in which Socrates argues that writing will wreck people’s memories?…(More)”.
Article by Karen Mansfield, Sakshi Ghai, Thomas Hakman, Nick Ballou, Matti Vuorre, and Andrew K Przybylski: “…we critically evaluate the limitations and underlying challenges of existing research into the negative mental health consequences of internet-mediated technologies on young people. We argue that identifying and proactively addressing consistent shortcomings is the most effective method for building an accurate evidence base for the forthcoming influx of research on the effects of artificial intelligence (AI) on children and adolescents. Basic research, advice for caregivers, and evidence for policy makers should tackle the challenges that led to the misunderstanding of social media harms. The Personal View has four sections: first, we conducted a critical appraisal of recent reviews regarding effects of technology on children and adolescents’ mental health, aimed at identifying limitations in the evidence base; second, we discuss what we think are the most pressing methodological challenges underlying those limitations; third, we propose effective ways to address these limitations, building on robust methodology, with reference to emerging applications in the study of AI and children and adolescents’ wellbeing; and lastly, we articulate steps for conceptualising and rigorously studying the ever-shifting sociotechnological landscape of digital childhood and adolescence. We outline how the most effective approach to understanding how young people shape, and are shaped by, emerging technologies, is by identifying and directly addressing specific challenges. We present an approach grounded in interpreting findings through a coherent and collaborative evidence-based framework in a measured, incremental, and informative way…(More)”
Essay by Julia Freeland Fisher: “Last year, a Harvard study on chatbots drew a startling conclusion: AI companions significantly reduce loneliness. The researchers found that “synthetic conversation partners,” or bots engineered to be caring and friendly, curbed loneliness on par with interacting with a fellow human. The study was silent, however, on the irony behind these findings: synthetic interaction is not a real, lasting connection. Should the price of curing loneliness really be more isolation?
Missing that subtext is emblematic of our times. Near-term upsides often overshadow long-term consequences. Even with important lessons learned about the harms of social media and big tech over the past two decades, today, optimism about AI’s potential is soaring, at least in some circles.
Bots present an especially tempting fix to long-standing capacity constraints across education, health care, and other social services. AI coaches, tutors, navigators, caseworkers, and assistants could overcome the very real challenges—like cost, recruitment, training, and retention—that have made access to vital forms of high-quality human support perennially hard to scale.
But scaling bots that simulate human support presents new risks. What happens if, across a wide range of “human” services, we trade access to more services for fewer human connections?…(More)”.
Essay by Nicholas Carr: “…Communication systems are also transportation systems. Each medium carries information from here to there, whether in the form of thoughts and opinions, commands and decrees, or artworks and entertainments.
What Innis saw is that some media are particularly good at transporting information across space, while others are particularly good at transporting it through time. Some are space-biased while others are time-biased. Each medium’s temporal or spatial emphasis stems from its material qualities. Time-biased media tend to be heavy and durable. They last a long time, but they are not easy to move around. Think of a gravestone carved out of granite or marble. Its message can remain legible for centuries, but only those who visit the cemetery are able to read it. Space-biased media tend to be lightweight and portable. They’re easy to carry, but they decay or degrade quickly. Think of a newspaper printed on cheap, thin stock. It can be distributed in the morning to a large, widely dispersed readership, but by evening it’s in the trash.
[Image] Time-biased: The Tripiṭaka Koreana, a collection of Buddhist scriptures carved onto 81,258 wooden blocks in the thirteenth century (photo from 2022 by Bernard Gagnon / Wikimedia).
Because every society organizes and sustains itself through acts of communication, the material biases of media do more than determine how long messages last or how far they reach. They play an important role in shaping a society’s size, form, and character — and ultimately its fate. As the sociologist Andrew Wernick explained in a 1999 essay on Innis, “The portability of media influences the extent, and the durability of media the longevity, of empires, institutions, and cultures.”
In societies where time-biased media are dominant, the emphasis is on tradition and ritual, on maintaining continuity with the past. People are held together by shared beliefs, often religious or mythologic, passed down through generations. Elders are venerated, and power typically resides in a theocracy or monarchy. Because the society lacks the means to transfer knowledge and exert influence across a broad territory, it tends to remain small and insular. If it grows, it does so in a decentralized fashion, through the establishment of self-contained settlements that hold the same traditions and beliefs…(More)”
Article by Megan Mattes and Joanna Massie: “Although one-off democratic innovations like citizens’ assemblies are excellent approaches for tackling a big issue, more embedded types of innovations could be a powerful tool for maintaining an ongoing connection between public interest and political decision-making.
Innovative approaches to maintaining an ongoing, meaningful connection between people and policymakers are underway. In New Westminster, B.C., a standing citizen body called the Community Advisory Assembly was convened from January 2024 to January 2025.
These citizen advisers are selected through random sampling to ensure the assembly’s demographic makeup is aligned with the overall population.
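The selection method described above, often called sortition, can be sketched as a stratified random draw: the candidate pool is split into demographic strata, and each stratum contributes seats in proportion to its share of the wider population. The attribute ("age_group") and shares below are purely illustrative assumptions, not New Westminster’s actual criteria.

```python
import random

def stratified_draw(pool, population_shares, seats, seed=0):
    """Randomly draw `seats` members so strata mirror `population_shares`.

    A minimal sortition sketch: real assembly selection balances several
    attributes at once (age, gender, geography, etc.).
    """
    rng = random.Random(seed)
    chosen = []
    for stratum, share in population_shares.items():
        candidates = [p for p in pool if p["age_group"] == stratum]
        quota = round(seats * share)  # seats owed to this stratum
        chosen.extend(rng.sample(candidates, min(quota, len(candidates))))
    return chosen

# Hypothetical pool of 100 residents, evenly split across two age groups.
pool = [{"name": f"resident{i}",
         "age_group": "under 40" if i % 2 else "40 and over"}
        for i in range(100)]
shares = {"under 40": 0.5, "40 and over": 0.5}
assembly = stratified_draw(pool, shares, seats=20)
print(len(assembly))  # 20 members, 10 drawn from each age group
```

Balancing several demographic attributes simultaneously is harder than this single-attribute sketch suggests, which is why practitioners typically use dedicated sortition software.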
Over the last year, members have both given input on policy ideas initiated by New Westminster city council and initiated conversations on their own policy priorities. Notes from these discussions are passed on to council and city staff, who consider incorporating them into policymaking.
The question is whether the project will live beyond its pilot.
While public opinion is only one ingredient in government decision-making, ensuring democratic innovations are a standard component of policymaking could go a long way to enshrining public dialogue as a valuable governance tool.
Whether through annual participatory budgeting exercises or a standing citizen advisory body, democratic innovations can make public priorities a key focus of policy and restore government accountability to citizens…(More)”.