Stefaan Verhulst

Article by Christopher Mims: “If social media were a literal ecosystem, it would be about as healthy as Cleveland’s Cuyahoga River in the 1960s—when it was so polluted it repeatedly caught fire.

Those conflagrations inspired the creation of the Environmental Protection Agency and the passage of the Clean Water Act. But in 2026, nothing comparable exists for our befouled media landscape.

Which means it’s up to us, as individuals, to stop ingesting the pink slime of AI slop, the forever chemicals of outrage bait and the microplastics of misinformation-for-profit. In an age in which information on the internet is so abundant and so low-quality that it’s essentially noise, job number one is to fight our evolutionary instinct to absorb all available information, and instead filter out unreliable sources and bad data.

Fortunately, there’s a way: critical ignoring.

“It’s not total ignoring,” says Sam Wineburg, who coined the term in 2021. “It’s ignoring after you’ve checked out some initial signals. We think of it as constant vigilance over our own vulnerability.”

Critical ignoring was born of research that Wineburg, an emeritus professor of education at Stanford University, and others did on how the skills of professional fact-checkers could be taught to young people in school. Kids and adults alike need the ability to quickly evaluate the truth of a statement and the reliability of its source, they argued. Since then, the term has taken on a life of its own. It’s become an umbrella for a whole set of skills, some of which might seem counterintuitive.

Here’s the quick-and-dirty on how to start practicing critical ignoring in the year ahead…(More)”.

Critical Ignoring

Paper by Woodrow Hartzog and Jessica M. Silbey: “Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life. They are the mechanisms through which complex societies encourage cooperation and stability, while also adapting to changing circumstances. The real superpower of institutions is their ability to evolve and adapt within a hierarchy of authority and a framework for roles and rules while maintaining legitimacy in the knowledge produced and the actions taken. Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo. This happens through the machinations of interpersonal relationships within those institutions, which broaden perspectives and strengthen shared commitment to civic goals.

Unfortunately, the affordances of AI systems extinguish these institutional features at every turn. In this essay, we make one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. These systems are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. In short, current AI systems are a death sentence for civic institutions, and we should treat them as such…(More)”.

How AI Destroys Institutions

Article by R. Trebor Scholz & Mark Esposito: “The digital economy’s story often centers on stock prices and initial public offerings, but the processes and people behind it reveal a very different reality. Across outsourcing hubs like Nairobi, Manila, and Hyderabad, content moderators working for Facebook, OpenAI, and their subcontractors spend hours each day reviewing beheadings, sexual violence, child abuse, and hate speech to train and police AI systems. This form of labor has led many to report severe psychological harm, including depression, anxiety, and post-traumatic stress disorder. Investigations have documented suicide attempts among moderators in Kenya and the Philippines, alongside widespread reports of suicidal ideation linked to relentless exposure to traumatic content, low pay, and a lack of mental-health support. These incidents are not isolated tragedies, but rather symptoms of an industry structured to offload risk downward through opaque contracting chains while concentrating profit and control at the top.

These cases are a stark reminder that when technological systems are designed solely for extraction and efficiency, they isolate and break the people who sustain them. As artificial intelligence (AI) accelerates, we face a similar precipice. Without deliberate intervention, these extractive logics will scale globally, further concentrating power at the top, unless we choose to build a fundamentally different system…(More)”.

Building a Solidarity Ecosystem for AI

Book by Allison Pugh: “With the rapid development of artificial intelligence and labor-saving technologies like self-checkouts and automated factories, the future of work has never been more uncertain, and even jobs requiring high levels of human interaction are no longer safe. The Last Human Job explores the human connections that underlie our work, arguing that what people do for each other in these settings is valuable and worth preserving.

Drawing on in-depth interviews and observations with people in a broad range of professions—from physicians, teachers, and coaches to chaplains, therapists, caregivers, and hairdressers—Allison Pugh develops the concept of “connective labor,” a kind of work that relies on empathy, the spontaneity of human contact, and a mutual recognition of each other’s humanity. The threats to connective labor are not only those posed by advances in AI or apps; Pugh demonstrates how profit-driven campaigns imposing industrial logic shrink the time for workers to connect, enforce new priorities of data and metrics, and introduce standardized practices that hinder our ability to truly see each other. She concludes with profiles of organizations where connective labor thrives, offering practical steps for building a social architecture that works.

Vividly illustrating how connective labor enriches the lives of individuals and binds our communities together, The Last Human Job is a compelling argument for us to recognize, value, and protect humane work in an increasingly automated and disconnected world…(More)”.

The Last Human Job: Seeing Each Other in an Age of Automation

Blog by Sarah Hubbard and Darshan Goux: “…Public officials now have a myriad of digital deliberation tools and programs to choose from. Some considerations for selecting which tool(s) to use include factors such as whether the technology solution is open-source vs. paid, data collection and retention policies, the engagement modalities it offers (e.g. video, audio, surveys, written input), as well as the procurement processes, staffing requirements, and the overall objectives or scale of the engagement.

Below are a few examples of technologies being used to support public deliberation processes today:

This is just a small sample of the current ecosystem and its applications. The organization People Powered maintains a larger list of digital participation platforms…(More)”.

The Ecosystem of Deliberative Technologies for Public Input

A Primer by Adam Zable, Hannah Chafetz, and Stefaan G. Verhulst: “Philanthropic foundations around the world are beginning to experiment with artificial intelligence (AI) to review proposals, stay up-to-date on the latest research, communicate insights to different audiences, and more. However, questions remain around where AI is most valuable across the grantmaking cycle, when it should not be used, and what practices and policies are needed to ensure it is applied responsibly.

To address these questions, DATA4Philanthropy reviewed how AI is being used across the grantmaking cycle. This includes: problem definition, prioritization, strategy development, partner identification, grant management, and evaluation and learning. Drawing on desk research conducted between July and December 2025, the primer highlights several examples where philanthropies are already using AI in their work and how they are incorporating human judgement throughout the process. It concludes with a series of recommendations on how philanthropies might begin experimenting with AI…(More)”.

Using Artificial Intelligence in the Grantmaking Process

Article by Roula Khalaf: “Some headlines seem almost designed to elicit weary face-palm emojis on social media about the ignorance of the general public. So it was recently with the news that the UK public believes net migration rose last year (in fact, it fell by two-thirds).

It’s not the only recent instance of people venting about the public’s perceptions being out of touch with reality. Sir Mark Rowley, London’s police chief, told the FT it was “sad and quite frustrating” that more people didn’t know the city was “extraordinarily safe”.

“I think people have a whole load of different reasons for ignoring facts,” he said. “I think some people just want online clicks, some people are angry with the world generally.”

Even Donald Trump, the most successful populist politician in years, seems to have grown fed up with popular opinion. The so-called affordability crisis in the US is a “con job”, he said in December. “Just about everything is down.”

But have most people really become divorced from reality about policy issues they profess to care about, like crime, immigration and the cost of living? Or is something else going on?

Part of the problem is a disconnect between the metrics commonly used by economists, policymakers and journalists, and the ways in which people actually perceive change in their everyday lives. Inflation in the US may have fallen from a peak of more than 7 per cent in 2022 to less than 3 per cent, for example. But that still means prices are rising, just not as quickly as before.

Net migration, too, is in a way a measure of the rate of change. In the year to June 2025, net migration to the UK did indeed plummet by two-thirds. But that still meant a net increase of 204,000 people. And while the public might well notice when the pace of change suddenly speeds up around them, it’s probably harder to spot when it slows somewhat…(More)”.
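The rate-versus-level distinction in the excerpt can be made concrete with a toy calculation (the 7 per cent and 3 per cent figures are the approximate ones cited above; the starting index of 100 is an arbitrary assumption):

```python
# Toy illustration: a falling inflation *rate* still means a rising price *level*.
# Assumed round numbers: ~7% inflation one year, ~3% the next.
price_level = 100.0
for annual_inflation in (0.07, 0.03):
    price_level *= 1 + annual_inflation
    print(f"inflation {annual_inflation:.0%} -> price level {price_level:.2f}")
# Inflation fell by more than half, yet prices ended ~10% higher than they started.
```

What a consumer experiences is the cumulative price level, not the year-on-year rate, which is why "inflation is down" and "everything is more expensive" can both be true.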

No, the public is not irredeemably ignorant

Report by the Taskforce on Nature-related Financial Disclosures: “Developed in close collaboration with a wide range of technical experts and market participants across the nature data value chain, these recommendations represent the culmination of four years of research and pilot testing on how best to respond to the nature data challenges identified by companies and financial institutions around the world.

The eight recommendations proposed by the TNFD seek to catalyse a whole-of-value-chain mindset shift about the discoverability, quality and accessibility of nature-related data as a strategic global public good. Together, they also seek to unlock a much-needed new source of finance for the collection of essential state-of-nature data through the operation of a global common data facility. 

Summary of recommendations

The eight recommendations cover both state-of-nature data used by companies in their assessment of nature-related issues as well as reported data produced by companies, for example about their impacts and dependencies on nature. 

  1. A set of nature data principles to help enhance the quality of state-of-nature data over time
  2. An accompanying set of metadata standards for state-of-nature data
  3. Proposed harmonisation of licensing and usage agreements to reduce the time and cost experienced by market participants to access state-of-nature data to support their assessment and reporting activities
  4. A Nature Data Public Facility (NDPF) to provide open access to state-of-nature data related to key use cases for business and finance, including SMEs
  5. Incentives and mechanisms for companies to provide qualifying state-of-nature data they have collected on a proprietary basis back into the global public commons through the NDPF
  6. A new international institution, a Nature Data Trust, to generate additional funding for state-of-nature data collection and aggregation by operating the NDPF and drive quality improvements across the value chain in accordance with the principles, metadata standards and common licensing arrangements recommended
  7. A nature data measurement protocol to provide market participants with common measurement methodologies for a core set of nature-related dependency and impact metrics, including state-of-nature metrics
  8. Proposal to develop a universal data collection and sharing protocol to streamline the sharing of company data on nature-related impacts and dependencies across value chains…(More)”.

Recommendations for upgrading the nature data value chain for market participants

Article by William Hague: “I was born Scottish and I will Never be British,” tweeted Fiona last year on X, with the hashtag “ScottishIndependence”. Jake joined in, with a picture of the saltire, urging his followers to retweet it if they are proud to be Scottish. One Ewan McGregor added to the excitement, insisting “the call for independence is no longer a dream — it’s a democratic necessity”.

But then the internet in Iran was shut down as US bombers attacked the country’s nuclear sites. Suddenly, Fiona, Jake, Ewan and dozens of other keen advocates of Scottish independence stopped posting messages. Last month, as the regime launched its murderous crackdown on peaceful protesters, the same happened again. The truth has been revealed: large numbers of social media accounts with Scottish-sounding names, all advocating the break-up of the UK, are actually Iranian bots.

The disinformation firm Cyabra reported that in May and June last year, before the internet went dark in Iran, 26 per cent of all accounts arguing for Scottish independence were fake. An earlier study by Clemson University found that 4 per cent of all X content relating to independence was linked to a single network of Iranian-backed bots, generating several times more activity than the Scottish National Party.

It is time we recognised democracy is under serious and sustained attack, not only in Ukraine by military invasion, or Hong Kong where it has been ruthlessly quashed, but across the globe…

Yet democracy doesn’t just need defending. It needs renewing. We should expect our parties to produce plans to improve accountability, speed up government and involve responsible citizens. My own list of ideas would include allowing voters to recall MPs who defect to a different party and force them to face a by-election. Having served as an MP for 26 years I cannot imagine how an elected member can look constituents in the eye after so ignoring their wishes. But that is a topical reaction to recent events. More fundamental would be the use of digital technology to speed up dramatically the processes of government. This has begun: the use of AI to analyse rapidly the thousands of responses to a consultation on abolishing Ofwat recently shows how we can use new technologies to improve decisions in a democracy.

Much more use could be made of citizens’ assemblies. Wouldn’t the debates on assisted dying have benefited from parliament convening a body of citizens giving their informed views, as Demos advocated? Or couldn’t ministers have saved themselves the endless U-turns on digital ID if they had asked such an assembly what they thought? If Ireland could sort out its abortion laws that way, many intractable issues could be tackled with the participation of voters…(More)”.

Stop the bots if we want to save democracy

Paper by Maryam Lotfian et al: “The integration of Artificial Intelligence (AI) into Citizen Science (CS) is transforming how communities collect, analyze, and share data, offering opportunities for enhanced efficiency, accuracy, and scalability of CS projects. AI technologies such as natural language processing, anomaly detection systems, and predictive modeling are increasingly being used to address challenges like CS data validation, participant engagement, and large-scale analysis in CS projects. However, this integration also introduces significant risks and challenges, including ethical concerns related to transparency, accountability, and bias, as well as the potential demotivation of participants through automation of meaningful tasks. Furthermore, issues such as algorithmic opacity and data ownership can undermine trust in community-driven projects. This paper explores the dual impact of AI on CS. It emphasizes the need for a balanced approach where technological advancements do not overshadow the foundational principles of community participation, openness, and volunteer-driven efforts. Drawing from insights shared during a panel discussion with experts from diverse fields, this paper provides a roadmap for the responsible integration of AI into CS. Key considerations include developing standards and legal and ethical frameworks, promoting digital inclusivity, balancing technology with human capacity, and ensuring environmental sustainability…(More)”.

A vision for responsible AI integration in citizen science
