Is This How Reddit Ends?


Article by Matteo Wong: “The internet is growing more hostile to humans. Google results are stuffed with search-optimized spam, unhelpful advertisements, and AI slop. Amazon has become littered with undifferentiated junk. The state of social media, meanwhile—fractured, disorienting, and prone to boosting all manner of misinformation—can be succinctly described as a cesspool.

It’s with some irony, then, that Reddit has become a reservoir of humanity. The platform has itself been called a cesspool, rife with hateful rhetoric and falsehoods. But it is also known for quirky discussions and impassioned debates on any topic among its users. Does charging your brother rent, telling your mom she’s an unwanted guest, or giving your wife a performance review make you an asshole? (Redditors voted no, yes, and “everyone sucks,” respectively.) The site is where fans hash out the best rap album ever and plumbers weigh in on how to unclog a drain. As Google has begun to offer more and more vacuous SEO sites and ads in response to queries, many people have started adding reddit to their searches to find thoughtful, human-written answers: find mosquito in bedroom redditfix musty sponge reddit.

But now even Reddit is becoming more artificial. The platform has quietly started beta-testing Reddit Answers, what it calls an “AI-powered conversational interface.” In function and design, the feature—which is so far available only for some users in the U.S.—is basically an AI chatbot. On a new search screen accessible from the homepage, Reddit Answers takes anyone’s queries, trawls the site for relevant discussions and debates, and composes them into a response. In other words, a site that sells itself as a home for “authentic human connection” is now giving humans the option to interact with an algorithm instead…(More)”.

Thousands of U.S. Government Web Pages Have Been Taken Down Since Friday


Article by Ethan Singer: “More than 8,000 web pages across more than a dozen U.S. government websites have been taken down since Friday afternoon, a New York Times analysis has found, as federal agencies rush to heed President Trump’s orders targeting diversity initiatives and “gender ideology.”

The purges have removed information about vaccines, veterans’ care, hate crimes and scientific research, among many other topics. Doctors, researchers and other professionals often rely on such government data and advisories. Some government agencies appear to have removed entire sections of their websites, while others are missing only a handful of pages.

Among the pages that have been taken down:

(The links are to archived versions.)

Experts warn about the ‘crumbling infrastructure’ of federal government data


Article by Hansi Lo Wang: “The stability of the federal government’s system for producing statistics, which the U.S. relies on to understand its population and economy, is under threat because of budget concerns, officials and data users warn.

And that’s before any follow-through on the new Trump administration and Republican lawmakers‘ pledges to slash government spending, which could further affect data production.

In recent months, budget shortfalls and the restrictions of short-term funding have led to the end of some datasets by the Bureau of Economic Analysis, known for its tracking of the gross domestic product, and to proposals by the Bureau of Labor Statistics to reduce the number of participants surveyed to produce the monthly jobs report. A “lack of multiyear funding” has also hurt efforts to modernize the software and other technology the BLS needs to put out its data properly, concluded a report by an expert panel tasked with examining multiple botched data releases last year.

Long-term funding questions are also dogging the Census Bureau, which carries out many of the federal government’s surveys and is preparing for the 2030 head count that’s set to be used to redistribute political representation and trillions in public funding across the country. Some census watchers are concerned budget issues may force the bureau to cancel some of its field tests for the upcoming tally, as it did with 2020 census tests for improving the counts in Spanish-speaking communities, rural areas and on Indigenous reservations.

While the statistical agencies have not been named specifically, some advocates are worried that calls to reduce the federal government’s workforce by President Trump and the new Republican-controlled Congress could put the integrity of the country’s data at greater risk…(More)”

Building Safer and Interoperable AI Systems


Essay by Vint Cerf: “While I am no expert on artificial intelligence (AI), I have some experience with the concept of agents. Thirty-five years ago, my colleague, Robert Kahn, and I explored the idea of knowledge robots (“knowbots” for short) in the context of digital libraries. In principle, a knowbot was a mobile piece of code that could move around the Internet, landing at servers, where they could execute tasks on behalf of users. The concept is mostly related to finding information and processing it on behalf of a user. We imagined that the knowbot code would land at a serving “knowbot hotel” where it would be given access to content and computing capability. The knowbots would be able to clone themselves to execute their objectives in parallel and would return to their origins bearing the results of their work. Modest prototypes were built in the pre-Web era.

In today’s world, artificially intelligent agents are now contemplated that can interact with each other and with information sources found on the Internet. For this to work, it’s my conjecture that a syntax and semantics will need to be developed and perhaps standardized to facilitate inter-agent interaction, agreements, and commitments for work to be performed, as well as a means for conveying results in reliable and unambiguous ways. A primary question for all such concepts starts with “What could possibly go wrong?”

In the context of AI applications and agents, work is underway to answer that question. I recently found one answer to that in the MLCommons AI Safety Working Group and its tool, AILuminate. My coarse sense of this is that AILuminate posts a large and widely varying collection of prompts—not unlike the notion of testing software by fuzzing—looking for inappropriate responses. Large language models (LLMs) can be tested and graded (that’s the hard part) on responses to a wide range of prompts. Some kind of overall safety metric might be established to connect one LLM to another. One might imagine query collections oriented toward exposing particular contextual weaknesses in LLMs. If these ideas prove useful, one could even imagine using them in testing services such as those at Underwriters Laboratory, now called UL Solutions. UL Solutions already offers software testing among its many other services.

LLMs as agents seem naturally attractive…(More)”.

The Attention Crisis Is Just a Distraction


Essay by Daniel Immerwahr: “…If every video is a starburst of expression, an extended TikTok session is fireworks in your face for hours. That can’t be healthy, can it? In 2010, the technology writer Nicholas Carr presciently raised this concern in “The Shallows: What the Internet Is Doing to Our Brains,” a Pulitzer Prize finalist. “What the Net seems to be doing,” Carr wrote, “is chipping away my capacity for concentration and contemplation.” He recounted his increased difficulty reading longer works. He wrote of a highly accomplished philosophy student—indeed, a Rhodes Scholar—who didn’t read books at all but gleaned what he could from Google. That student, Carr ominously asserted, “seems more the rule than the exception.”

Carr set off an avalanche. Much read works about our ruined attention include Nir Eyal’s “Indistractable,” Johann Hari’s “Stolen Focus,” Cal Newport’s “Deep Work,” and Jenny Odell’s “How to Do Nothing.” Carr himself has a new book, “Superbloom,” about not only distraction but all the psychological harms of the Internet. We’ve suffered a “fragmentation of consciousness,” Carr writes, our world having been “rendered incomprehensible by information.”

Read one of these books and you’re unnerved. But read two more and the skeptical imp within you awakens. Haven’t critics freaked out about the brain-scrambling power of everything from pianofortes to brightly colored posters? Isn’t there, in fact, a long section in Plato’s Phaedrus in which Socrates argues that writing will wreck people’s memories?…(More)”.

From social media to artificial intelligence: improving research on digital harms in youth


Article by Karen Mansfield, Sakshi Ghai, Thomas Hakman, Nick Ballou, Matti Vuorre, and Andrew K Przybylski: “…we critically evaluate the limitations and underlying challenges of existing research into the negative mental health consequences of internet-mediated technologies on young people. We argue that identifying and proactively addressing consistent shortcomings is the most effective method for building an accurate evidence base for the forthcoming influx of research on the effects of artificial intelligence (AI) on children and adolescents. Basic research, advice for caregivers, and evidence for policy makers should tackle the challenges that led to the misunderstanding of social media harms. The Personal View has four sections: first, we conducted a critical appraisal of recent reviews regarding effects of technology on children and adolescents’ mental health, aimed at identifying limitations in the evidence base; second, we discuss what we think are the most pressing methodological challenges underlying those limitations; third, we propose effective ways to address these limitations, building on robust methodology, with reference to emerging applications in the study of AI and children and adolescents’ wellbeing; and lastly, we articulate steps for conceptualising and rigorously studying the ever-shifting sociotechnological landscape of digital childhood and adolescence. We outline how the most effective approach to understanding how young people shape, and are shaped by, emerging technologies, is by identifying and directly addressing specific challenges. We present an approach grounded in interpreting findings through a coherent and collaborative evidence-based framework in a measured, incremental, and informative way…(More)”

To Bot or Not to Bot? How AI Companions Are Reshaping Human Services and Connection


Essay by Julia Freeland Fisher: “Last year, a Harvard study on chatbots drew a startling conclusion: AI companions significantly reduce loneliness. The researchers found that “synthetic conversation partners,” or bots engineered to be caring and friendly, curbed loneliness on par with interacting with a fellow human. The study was silent, however, on the irony behind these findings: synthetic interaction is not a real, lasting connection. Should the price of curing loneliness really be more isolation?

Missing that subtext is emblematic of our times. Near-term upsides often overshadow long-term consequences. Even with important lessons learned about the harms of social media and big tech over the past two decades, today, optimism about AI’s potential is soaring, at least in some circles.

Bots present an especially tempting fix to long-standing capacity constraints across education, health care, and other social services. AI coaches, tutors, navigators, caseworkers, and assistants could overcome the very real challenges—like cost, recruitment, training, and retention—that have made access to vital forms of high-quality human support perennially hard to scale.

But scaling bots that simulate human support presents new risks. What happens if, across a wide range of “human” services, we trade access to more services for fewer human connections?…(More)”.

The Tyranny of Now


Essay by Nicholas Carr: “…Communication systems are also transportation systems. Each medium carries information from here to there, whether in the form of thoughts and opinions, commands and decrees, or artworks and entertainments.

What Innis saw is that some media are particularly good at transporting information across space, while others are particularly good at transporting it through time. Some are space-biased while others are time-biased. Each medium’s temporal or spatial emphasis stems from its material qualities. Time-biased media tend to be heavy and durable. They last a long time, but they are not easy to move around. Think of a gravestone carved out of granite or marble. Its message can remain legible for centuries, but only those who visit the cemetery are able to read it. Space-biased media tend to be lightweight and portable. They’re easy to carry, but they decay or degrade quickly. Think of a newspaper printed on cheap, thin stock. It can be distributed in the morning to a large, widely dispersed readership, but by evening it’s in the trash.

Time-biased: The Tripiṭaka Koreana, a collection of Buddhist scriptures carved onto 81,258 wooden blocks in the thirteenth century, in a photo from 2022
Bernard Gagnon / Wikimedia

Because every society organizes and sustains itself through acts of communication, the material biases of media do more than determine how long messages last or how far they reach. They play an important role in shaping a society’s size, form, and character — and ultimately its fate. As the sociologist Andrew Wernick explained in a 1999 essay on Innis, “The portability of media influences the extent, and the durability of media the longevity, of empires, institutions, and cultures.”

In societies where time-biased media are dominant, the emphasis is on tradition and ritual, on maintaining continuity with the past. People are held together by shared beliefs, often religious or mythologic, passed down through generations. Elders are venerated, and power typically resides in a theocracy or monarchy. Because the society lacks the means to transfer knowledge and exert influence across a broad territory, it tends to remain small and insular. If it grows, it does so in a decentralized fashion, through the establishment of self-contained settlements that hold the same traditions and beliefs…(More)”

Why Canada needs to embrace innovations in democracy


Article by Megan Mattes and Joanna Massie: “Although one-off democratic innovations like citizens’ assemblies are excellent approaches for tackling a big issue, more embedded types of innovations could be a powerful tool for maintaining an ongoing connection between public interest and political decision-making.

Innovative approaches to maintaining an ongoing, meaningful connection between people and policymakers are underway. In New Westminster, B.C., a standing citizen body called the Community Advisory Assembly has been convened since January 2024 to January 2025.

These citizen advisers are selected through random sampling to ensure the assembly’s demographic makeup is aligned with the overall population.

Over the last year, members have both given input on policy ideas initiated by New Westminster city council and initiated conversations on their own policy priorities. Notes from these discussions are passed on to council and city staff to consider their incorporation into policymaking.

The question is whether the project will live beyond its pilot.

Another similar and hopeful democratic innovation, the City of Toronto’s Planning Review Panel, ran for two terms before it was cancelled. In contrast, both the Paris city council and the state government of Ostbelgien (East Belgium) have convened permanent citizen advisory bodies to work alongside elected officials.

While public opinion is only one ingredient in government decision-making, ensuring democratic innovations are a standard component of policymaking could go a long way to enshrining public dialogue as a valuable governance tool.

Whether through annual participatory budgeting exercises or a standing citizen advisory body, democratic innovations can make public priorities a key focus of policy and restore government accountability to citizens…(More)”.

What’s a Fact, Anyway?


Essay by Fergus McIntosh: “…For journalists, as for anyone, there are certain shortcuts to trustworthiness, including reputation, expertise, and transparency—the sharing of sources, for example, or the prompt correction of errors. Some of these shortcuts are more perilous than others. Various outfits, positioning themselves as neutral guides to the marketplace of ideas, now tout evaluations of news organizations’ trustworthiness, but relying on these requires trusting in the quality and objectivity of the evaluation. Official data is often taken at face value, but numbers can conceal motives: think of the dispute over how to count casualties in recent conflicts. Governments, meanwhile, may use their powers over information to suppress unfavorable narratives: laws originally aimed at misinformation, many enacted during the COVID-19 pandemic, can hinder free expression. The spectre of this phenomenon is fuelling a growing backlash in America and elsewhere.

Although some categories of information may come to be considered inherently trustworthy, these, too, are in flux. For decades, the technical difficulty of editing photographs and videos allowed them to be treated, by most people, as essentially incontrovertible. With the advent of A.I.-based editing software, footage and imagery have swiftly become much harder to credit. Similar tools are already used to spoof voices based on only seconds of recorded audio. For anyone, this might manifest in scams (your grandmother calls, but it’s not Grandma on the other end), but for a journalist it also puts source calls into question. Technologies of deception tend to be accompanied by ones of detection or verification—a battery of companies, for example, already promise that they can spot A.I.-manipulated imagery—but they’re often locked in an arms race, and they never achieve total accuracy. Though chatbots and A.I.-enabled search engines promise to help us with research (when a colleague “interviewed” ChatGPT, it told him, “I aim to provide information that is as neutral and unbiased as possible”), their inability to provide sourcing, and their tendency to hallucinate, looks more like a shortcut to nowhere, at least for now. The resulting problems extend far beyond media: election campaigns, in which subtle impressions can lead to big differences in voting behavior, feel increasingly vulnerable to deepfakes and other manipulations by inscrutable algorithms. Like everyone else, journalists have only just begun to grapple with the implications.

In such circumstances, it becomes difficult to know what is true, and, consequently, to make decisions. Good journalism offers a way through, but only if readers are willing to follow: trust and naïveté can feel uncomfortably close. Gaining and holding that trust is hard. But failure—the end point of the story of generational decay, of gold exchanged for dross—is not inevitable. Fact checking of the sort practiced at The New Yorker is highly specific and resource-intensive, and it’s only one potential solution. But any solution must acknowledge the messiness of truth, the requirements of attention, the way we squint to see more clearly. It must tell you to say what you mean, and know that you mean it…(More)”.