Stefaan Verhulst
Article by Gabriel Daros: “Brazil’s social security institute, known as INSS, added AI to its app in 2018 in an effort to cut red tape and speed up claims. The office, known for its long lines and wait times, had around 2 million pending requests for everything from doctor’s appointments to sick pay to pensions to retirement benefits at the time. While the AI-powered tool has since helped process thousands of basic claims, it has also rejected requests from hundreds of people like de Brito — who live in remote areas and have little digital literacy — for minor errors.
The government is right to digitize its systems to improve efficiency, but that has come at a cost, Edjane Rodrigues, secretary for social policies at the National Confederation of Workers in Agriculture, told Rest of World.
“If the government adopts this kind of service to speed up benefits for the people, this is good. We are not against it,” she said. But, particularly among farm workers, claims can be complex because of the nature of their work, she said, referring to cases that require additional paperwork, such as when a piece of land is owned by one individual but worked by a group of families. “There are many peculiarities in agriculture, and rural workers are being especially harmed” by the app, according to Rodrigues.
“Each automated decision is based on specified legal criteria, ensuring that the standards set by the social security legislation are respected,” a spokesperson for INSS told Rest of World. “Automation does not work in an arbitrary manner. Instead, it follows clear rules and regulations, mirroring the expected standards applied in conventional analysis.”
Governments across Latin America have been introducing AI to improve their processes. Last year, Argentina began using ChatGPT to draft court rulings, a move that officials said helped cut legal costs and reduce processing times. Costa Rica has partnered with Microsoft to launch an AI tool to optimize tax data collection and check for fraud in digital tax receipts. El Salvador recently set up an AI lab to develop tools for government services.
But while some of these efforts have delivered promising results, experts have raised concerns about the risk of officials with little tech know-how applying these tools with no transparency or workarounds…(More)”.
Article by Jim Fruchterman and Steve Francis: “What happens when a nonprofit program or an entire organization needs to shut down? The communities being served, and often society as a whole, are the losers. What if it were possible to mitigate some of that damage by sharing valuable intellectual property assets of the closing effort for longer term benefit? Organizations in these tough circumstances must give serious thought to a responsible exit for their intangible assets.
At the present moment of unparalleled disruption, the entire nonprofit sector is rethinking everything: the language used to describe its work, funding sources, partnerships, and even its continued existence. Nonprofit programs and entire charities will close or be merged out of existence. Difficult choices are being made. Who will fill the role of witness and archivist to preserve the knowledge of these organizations, their writings, media, software, and data, for those who carry on, either now or in the future?
We believe leaders in these tough days should consider a model we’re calling Exit to Open (E2O) and related exit concepts to safeguard these assets going forward…
Exit to Open (E2O) rests on three observations:
- We are in an era where the cost of digital preservation is low; storing a few more bytes for a long time is cheap.
- It’s far more effective for an organization’s own staff to isolate and archive critical content than for an outsider with limited knowledge to attempt to do so later.
- These resources are of greatest use if there is a human available to interpret them, and a deliberate archival process allows for the identification of these potential interpreters…(More)”.
Blog by Anthea Roberts: “…If questioning is indeed becoming a premier cognitive skill in the AI age, how should education and professional development evolve? Here are some possibilities:
- Assessment Through Iterative Questioning: Rather than evaluating students solely on their answers, we might assess their ability to engage in sustained, productive questioning—their skill at probing, following up, identifying inconsistencies, and refining inquiries over multiple rounds. Can they navigate a complex problem through a series of well-crafted questions? Can they identify when an AI response contains subtle errors or omissions that require further exploration?
- Prompt Literacy as Core Curriculum: Just as reading and writing are foundational literacies, the ability to effectively prompt and question AI systems may become a basic skill taught from early education onward. This would include teaching students how to refine queries, test assumptions, and evaluate AI responses critically—recognizing that AI systems still hallucinate, contain biases from their training data, and have uneven performance across different domains.
- Socratic AI Interfaces: Future AI interfaces might be designed explicitly to encourage Socratic dialogue rather than one-sided Q&A. Instead of simply answering queries, these systems might respond with clarifying questions of their own: “It sounds like you’re asking about X—can you tell me more about your specific interest in this area?” This would model the kind of iterative exchange that characterizes productive human-human dialogue…(More)”.
Article by Frank Langfitt: “A survey of more than 500 political scientists finds that the vast majority think the United States is moving swiftly from liberal democracy toward some form of authoritarianism.
In the benchmark survey, known as Bright Line Watch, U.S.-based professors rate the performance of American democracy on a scale from zero (complete dictatorship) to 100 (perfect democracy). After President Trump’s election in November, scholars gave American democracy a rating of 67. Several weeks into Trump’s second term, that figure plummeted to 55.
“That’s a precipitous drop,” says John Carey, a professor of government at Dartmouth and co-director of Bright Line Watch. “There’s certainly consensus: We’re moving in the wrong direction.”…Not all political scientists view Trump with alarm, but many like Carey who focus on democracy and authoritarianism are deeply troubled by Trump’s attempts to expand executive power over his first several months in office.
“We’ve slid into some form of authoritarianism,” says Steven Levitsky, a professor of government at Harvard, and co-author of How Democracies Die. “It is relatively mild compared to some others. It is certainly reversible, but we are no longer living in a liberal democracy.”…Kim Lane Scheppele, a Princeton sociologist who has spent years tracking Hungary, is also deeply concerned: “We are on a very fast slide into what’s called competitive authoritarianism.”
When these scholars use the term “authoritarianism,” they aren’t talking about a system like China’s, a one-party state with no meaningful elections. Instead, they are referring to something called “competitive authoritarianism,” the kind scholars say they see in countries such as Hungary and Turkey.
In a competitive authoritarian system, a leader comes to power democratically and then erodes the system of checks and balances. Typically, the executive fills the civil service and key appointments — including the prosecutor’s office and judiciary — with loyalists. He or she then attacks the media, universities and nongovernmental organizations to blunt public criticism and tilt the electoral playing field in the ruling party’s favor…(More)”.
Essay by John Cassidy: “It isn’t clear where the term “Luddite” originated. Some accounts trace it to Ned Ludd, a textile worker who reportedly smashed a knitting frame in 1779. Others suggest that it may derive from folk memories of King Ludeca, a ninth-century Anglo-Saxon monarch who died in battle. Whatever the source, many machine breakers identified “General Ludd” as their leader. A couple of weeks after the Rawfolds attack, William Horsfall, another mill owner, was shot dead. A letter sent after Horsfall’s assassination—which hailed “the avenging of the death of the two brave youths who fell at the siege of Rawfolds”—began “By Order of General Ludd.”
The British government, at war with Napoleon, regarded the Luddites as Jacobin insurrectionists and responded with brutal suppression. But this reaction stemmed from a fundamental misinterpretation. Far from being revolutionary, Luddism was a defensive response to the industrial capitalism that was threatening skilled workers’ livelihoods. The Luddites weren’t mindless opponents of technology but had a clear logic to their actions—an essentially conservative one. Since they had no political representation—until 1867, the British voting franchise excluded the vast majority—they concluded that violent protest was their only option. “The burning of Factorys or setting fire to the property of People we know is not right, but Starvation forces Nature to do that which he would not,” one Yorkshire cropper wrote. “We have tried every effort to live by Pawning our Cloaths and Chattles, so we are now on the brink for the last struggle.”
As alarm about artificial intelligence has gone global, so has a fascination with the Luddites. The British podcast “The Ned Ludd Radio Hour” describes itself as “your weekly dose of tech skepticism, cynicism, and absurdism.” Kindred themes are explored in the podcast “This Machine Kills,” co-hosted by the social theorist Jathan Sadowski, whose new book, “The Mechanic and the Luddite,” argues that the fetishization of A.I. and other digital technologies obscures their role in disciplining labor and reinforcing a profit-driven system. “Luddites want technology—the future—to work for all of us,” he told the Guardian. The technology journalist Brian Merchant makes a similar case in “Blood in the Machine: The Origins of the Rebellion Against Big Tech” (2023). Blending a vivid account of the original Luddites with an indictment of contemporary tech giants like Amazon and Uber, Merchant portrays the current wave of automation as part of a centuries-long struggle over labor and power. “Working people are staring down entrepreneurs, tech monopolies, and venture capital firms that are hunting for new forms of labor-saving tech—be it AI, robotics, or software automation—to replace them,” Merchant writes. “They are again faced with losing their jobs to the machine.”…(More)”.
Playbook by the Behavioral Insights Team: “…sets out more detailed considerations around embedding test and learn in government, along with a broader range of methods that can be used at different stages of the innovation cycle. These can be combined flexibly, depending on the stage of the policy or service cycle, the available resources, and the nature of the challenge – whether that’s improving services, testing creative new approaches, or navigating uncertainty in new policy areas.
Almost all of the methods set out can be augmented or accelerated by harnessing AI tools – from using AI agents to conduct large-scale qualitative research, to AI-enhanced evidence discovery and analysis, and AI-powered systems mapping and modelling. AI should be treated as a core component of the toolkit at each stage. And the pace at which AI applications are evolving is another strong argument for maintaining an agile mindset and regularly updating our ways of working.
We hope this playbook will make test and learn more tangible to people who are new to it, and will expand the toolkit of people who have more experience with the approach. And ultimately we hope it will serve as a practical cheatsheet for building and improving the fabric of life…(More)”.
Essay by Mark Esposito and David De Cremer: “…These advances are not happening in isolation on engineers’ laptops; they are increasingly playing out in “decision theaters” – specialized environments (physical or virtual) designed for interactive, collaborative problem-solving. A decision theater is typically a space equipped with high-resolution displays, simulation engines, and data visualization tools where stakeholders can convene to explore complex scenarios. Originally pioneered at institutions like Arizona State University, the concept of a decision theater has gained traction as a way to bring together diverse expertise – economists, scientists, community leaders, government officials, and now AI systems – under one roof. By visualizing possible futures (say, the spread of a wildfire or the regional impact of an economic policy) in an engaging, shared format, these theaters make foresight a participatory exercise rather than an academic one. In the age of generative AI, decision theaters are evolving into hubs for human-AI collaboration.

Picture a scenario where city officials are debating a climate adaptation policy. Inside a decision theater, an AI model might project several climate futures for the city (varying rainfall, extreme heat incidents, flood patterns) on large screens. Stakeholders can literally see the potential impacts on maps and graphs. They can then ask the AI to adjust assumptions – “What if we add more green infrastructure in this district?” – and within seconds, watch a new projection unfold. This real-time interaction allows for an iterative dialogue between human ideas and AI-generated outcomes. Participants can inject local knowledge or voice community values, and the AI will incorporate that input to revise the scenario. The true power of generative AI in a decision theater lies in this collaboration.
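The iterative loop the essay describes (project a scenario, take a stakeholder’s “what if,” adjust an assumption, re-project) can be sketched in miniature. The model, parameter names, and numbers below are invented for illustration; a real decision theater would drive a full simulation engine and visualization layer rather than this toy formula:

```python
# Illustrative sketch of a decision-theater "what if" loop.
# The risk formula and all parameters here are hypothetical stand-ins.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Assumptions:
    rainfall_mm: float           # projected annual rainfall
    heat_days: int               # days above an extreme-heat threshold
    green_infrastructure: float  # fraction of districts with added green cover

def project_flood_risk(a: Assumptions) -> float:
    """Stand-in for a simulation engine: more rain raises risk,
    green infrastructure absorbs some of it."""
    base = a.rainfall_mm / 1000.0
    mitigation = 0.5 * a.green_infrastructure
    return max(0.0, base - mitigation)

# Baseline scenario shown on the theater's screens.
baseline = Assumptions(rainfall_mm=1400, heat_days=30, green_infrastructure=0.1)

# A stakeholder asks: "What if we add more green infrastructure?"
# The assumption is revised and the projection is rerun immediately.
revised = replace(baseline, green_infrastructure=0.4)

print(f"baseline flood risk: {project_flood_risk(baseline):.2f}")
print(f"revised flood risk:  {project_flood_risk(revised):.2f}")
```

The point of the sketch is the interaction pattern, not the arithmetic: assumptions are explicit, editable objects, and every revision produces a fresh projection that all participants can inspect together.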
Such interactive environments enhance learning and consensus-building. When stakeholders jointly witness how certain choices lead to undesirable futures (for instance, a policy leading to water shortages in a simulation), it can galvanize agreement on preventative action. Moreover, the theater setup encourages asking “What if?” in a safe sandbox, including ethically fraught questions. Because the visualizations make outcomes concrete, they naturally prompt ethical deliberation: If one scenario shows economic growth but high social inequality, is that future acceptable? If not, how can we tweak inputs to produce a more equitable outcome? In this way, decision theaters embed ethical and social considerations into high-tech planning, ensuring that the focus isn’t just on what is likely or profitable but on what is desirable for communities. This participatory approach helps balance technological possibilities with human values and cultural sensitivities. It’s one thing for an AI to suggest an optimal solution on paper; it’s another to have community representatives in the room, engaging with that suggestion and shaping it to fit local norms and needs.
Equally important, decision theaters democratize foresight. They open up complex decision-making processes to diverse stakeholders, not just technical experts. City planners, elected officials, citizens’ groups, and subject matter specialists can all contribute in real time, aided by AI. This inclusive model guards against the risk of AI becoming an opaque oracle controlled by a few. Instead, the AI’s insights are put on display for all to scrutinize and question. By doing so, the process builds trust in the tools and the decisions that come out of them. When people see that an AI’s recommendation emerged from transparent, interactive exploration – rather than a mysterious black box – they may be more likely to trust and accept the outcome. As one policy observer noted, it’s essential to bring ideas from across sectors and disciplines into these AI-assisted discussions so that solutions “work for people, not just companies.” If designed well, decision theaters operationalize that principle…(More)”.
White Paper by the Stanford Institute for Human-Centered AI (HAI), the Asia Foundation and the University of Pretoria: “…maps the LLM development landscape for low-resource languages, highlighting challenges, trade-offs, and strategies to increase investment; prioritize cross-disciplinary, community-driven development; and ensure fair data ownership…
- Large language model (LLM) development suffers from a digital divide: Most major LLMs underperform for non-English—and especially low-resource—languages; are not attuned to relevant cultural contexts; and are not accessible in parts of the Global South.
- Low-resource languages (such as Swahili or Burmese) face two crucial limitations: a scarcity of labeled and unlabeled language data, and poor-quality data that is not sufficiently representative of the languages and their sociocultural contexts.
- To bridge these gaps, researchers and developers are exploring different technical approaches to developing LLMs that perform better for, and better represent, low-resource languages, each with different trade-offs:
  - Massively multilingual models, developed primarily by large U.S.-based firms, aim to improve performance for more languages by including a wider range of (100-plus) languages in their training datasets.
  - Regional multilingual models, developed by academics, governments, and nonprofits in the Global South, use smaller training datasets made up of 10-20 low-resource languages to better cater to and represent a smaller group of languages and cultures.
  - Monolingual or monocultural models, developed by a variety of public and private actors, are trained on or fine-tuned for a single low-resource language and thus tailored to perform well for that language…(More)”
Series edited by Taylor Owen and Sequoia Kim: “Democracy has undergone profound changes over the past decade, shaped by rapid technological, social, and political transformations. Across the globe, citizens are demanding more meaningful and sustained engagement in governance—especially around emerging technologies like artificial intelligence (AI), which increasingly shape the contours of public life.
From world-leading experts in deliberative democracy, civic technology, and AI governance, we introduce a seven-part essay series exploring how deliberative democratic processes like citizens’ assemblies and civic tech can strengthen AI governance…(More)”.
Paper by Ankushi Mitra: “Research transparency and data access are considered increasingly important for advancing research credibility, cumulative learning, and discovery. However, debates persist about how to define and achieve these goals across diverse forms of inquiry. This article intervenes in these debates, arguing that the participants and communities with whom scholars work are active stakeholders in science, and thus have a range of rights, interests, and researcher obligations to them in the practice of transparency and openness. Drawing on civically engaged research and related approaches that advocate for subjects of inquiry to more actively shape its process and share in its benefits, I outline a broader vision of research openness not only as a matter of peer scrutiny among scholars or a top-down exercise in compliance, but rather as a space for engaging and maximizing opportunities for all stakeholders in research. Accordingly, this article provides an ethical and practical framework for broadening transparency, accessibility, and data-sharing and benefit-sharing in research. It promotes movement beyond open science to a more inclusive and socially responsive science anchored in a larger ethical commitment: that the pursuit of knowledge be accountable and its benefits made accessible to the citizens and communities who make it possible…(More)”.