Exit to Open


Article by Jim Fruchterman and Steve Francis: “What happens when a nonprofit program or an entire organization needs to shut down? The communities being served, and often society as a whole, are the losers. What if it were possible to mitigate some of that damage by sharing valuable intellectual property assets of the closing effort for longer-term benefit? Organizations in these tough circumstances must give serious thought to a responsible exit for their intangible assets.

At the present moment of unparalleled disruption, the entire nonprofit sector is rethinking everything: language to describe its work, funding sources, partnerships, and even its continued existence. Nonprofit programs and entire charities will close or be merged out of existence. Difficult choices are being made. Who will fill the role of witness and archivist to preserve the knowledge of these organizations, their writings, media, software, and data, for those who carry on, either now or in the future?

We believe leaders in these tough days should consider a model we’re calling Exit to Open (E2O) and related exit concepts to safeguard these assets going forward…

Exit to Open (E2O) exploits three elements:

  1. We are in an era where the cost of digital preservation is low; storing a few more bytes for a long time is cheap.
  2. It’s far more effective for an organization’s staff to isolate and archive critical content than for an outsider with limited knowledge to attempt it later.
  3. These resources are of greatest use if there is a human available to interpret them, and a deliberate archival process allows for the identification of these potential interpreters…(More)”.
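
As a concrete illustration of the second and third elements, here is a minimal, hypothetical sketch of one deliberate archival step: walking a directory of a closing organization's materials and writing a checksummed manifest, in the style of the BagIt preservation convention, so future interpreters can verify what was preserved. The directory layout and file names are assumptions for illustration, not part of the E2O model.

```python
# Hypothetical sketch: write a checksummed manifest for an archive directory
# so that anyone inheriting the files can later verify their integrity.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large media files never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(archive_root: Path) -> None:
    """Write manifest-sha256.txt (BagIt-style: '<checksum>  <relative path>' per line)."""
    lines = []
    for path in sorted(archive_root.rglob("*")):
        if path.is_file() and path.name != "manifest-sha256.txt":
            lines.append(f"{sha256_of(path)}  {path.relative_to(archive_root)}")
    (archive_root / "manifest-sha256.txt").write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    write_manifest(Path("closing-org-archive"))  # hypothetical directory name
```

Because staff run this while institutional knowledge is still in the building, the manifest can sit alongside a human-written index naming the people best placed to interpret each asset.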

From Answer-Giving to Question-Asking: Inverting the Socratic Method in the Age of AI


Blog by Anthea Roberts: “…If questioning is indeed becoming a premier cognitive skill in the AI age, how should education and professional development evolve? Here are some possibilities:

  1. Assessment Through Iterative Questioning: Rather than evaluating students solely on their answers, we might assess their ability to engage in sustained, productive questioning—their skill at probing, following up, identifying inconsistencies, and refining inquiries over multiple rounds. Can they navigate a complex problem through a series of well-crafted questions? Can they identify when an AI response contains subtle errors or omissions that require further exploration?
  2. Prompt Literacy as Core Curriculum: Just as reading and writing are foundational literacies, the ability to effectively prompt and question AI systems may become a basic skill taught from early education onward. This would include teaching students how to refine queries, test assumptions, and evaluate AI responses critically—recognizing that AI systems still hallucinate, contain biases from their training data, and have uneven performance across different domains.
  3. Socratic AI Interfaces: Future AI interfaces might be designed explicitly to encourage Socratic dialogue rather than one-sided Q&A. Instead of simply answering queries, these systems might respond with clarifying questions of their own: “It sounds like you’re asking about X—can you tell me more about your specific interest in this area?” This would model the kind of iterative exchange that characterizes productive human-human dialogue…(More)”.
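
To make the third possibility concrete, here is a minimal sketch of a Socratic chat loop, assuming the OpenAI Python client; the model name and the exact system-prompt wording are illustrative choices, not a prescribed design.

```python
# Sketch of a "Socratic AI interface": the assistant is instructed to ask one
# clarifying question when a request is underspecified, rather than answering at once.
from openai import OpenAI

SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic assistant. If the user's request is ambiguous or "
    "underspecified, do NOT answer yet: ask one short clarifying question "
    "about their goal or context. Only answer once the request is specific."
)

def socratic_chat() -> None:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    messages = [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]
    while True:
        user_turn = input("you> ")
        if user_turn.strip().lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": user_turn})
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(f"assistant> {answer}")

if __name__ == "__main__":
    socratic_chat()
```

The design choice worth noting is that the Socratic behavior lives entirely in the system prompt, so the same loop models the iterative, question-led exchange the blog describes without any special-purpose interface machinery.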

Hundreds of scholars say U.S. is swiftly heading toward authoritarianism


Article by Frank Langfitt: “A survey of more than 500 political scientists finds that the vast majority think the United States is moving swiftly from liberal democracy toward some form of authoritarianism.

In the benchmark survey, known as Bright Line Watch, U.S.-based professors rate the performance of American democracy on a scale from zero (complete dictatorship) to 100 (perfect democracy). After President Trump’s election in November, scholars gave American democracy a rating of 67. Several weeks into Trump’s second term, that figure plummeted to 55.

“That’s a precipitous drop,” says John Carey, a professor of government at Dartmouth and co-director of Bright Line Watch. “There’s certainly consensus: We’re moving in the wrong direction.”…Not all political scientists view Trump with alarm, but many who, like Carey, focus on democracy and authoritarianism are deeply troubled by Trump’s attempts to expand executive power over his first several months in office.

“We’ve slid into some form of authoritarianism,” says Steven Levitsky, a professor of government at Harvard, and co-author of How Democracies Die. “It is relatively mild compared to some others. It is certainly reversible, but we are no longer living in a liberal democracy.”…Kim Lane Scheppele, a Princeton sociologist who has spent years tracking Hungary, is also deeply concerned: “We are on a very fast slide into what’s called competitive authoritarianism.”

When these scholars use the term “authoritarianism,” they aren’t talking about a system like China’s, a one-party state with no meaningful elections. Instead, they are referring to something called “competitive authoritarianism,” the kind scholars say they see in countries such as Hungary and Turkey.

In a competitive authoritarian system, a leader comes to power democratically and then erodes the system of checks and balances. Typically, the executive fills the civil service and key appointments — including the prosecutor’s office and judiciary — with loyalists. He or she then attacks the media, universities and nongovernmental organizations to blunt public criticism and tilt the electoral playing field in the ruling party’s favor…(More)”.

How to Survive the A.I. Revolution


Essay by John Cassidy: “It isn’t clear where the term “Luddite” originated. Some accounts trace it to Ned Ludd, a textile worker who reportedly smashed a knitting frame in 1779. Others suggest that it may derive from folk memories of King Ludeca, a ninth-century Anglo-Saxon monarch who died in battle. Whatever the source, many machine breakers identified “General Ludd” as their leader. A couple of weeks after the Rawfolds attack, William Horsfall, another mill owner, was shot dead. A letter sent after Horsfall’s assassination—which hailed “the avenging of the death of the two brave youths who fell at the siege of Rawfolds”—began “By Order of General Ludd.”

The British government, at war with Napoleon, regarded the Luddites as Jacobin insurrectionists and responded with brutal suppression. But this reaction stemmed from a fundamental misinterpretation. Far from being revolutionary, Luddism was a defensive response to the industrial capitalism that was threatening skilled workers’ livelihoods. The Luddites weren’t mindless opponents of technology but had a clear logic to their actions—an essentially conservative one. Since they had no political representation—until 1867, the British voting franchise excluded the vast majority—they concluded that violent protest was their only option. “The burning of Factorys or setting fire to the property of People we know is not right, but Starvation forces Nature to do that which he would not,” one Yorkshire cropper wrote. “We have tried every effort to live by Pawning our Cloaths and Chattles, so we are now on the brink for the last struggle.”

As alarm about artificial intelligence has gone global, so has a fascination with the Luddites. The British podcast “The Ned Ludd Radio Hour” describes itself as “your weekly dose of tech skepticism, cynicism, and absurdism.” Kindred themes are explored in the podcast “This Machine Kills,” co-hosted by the social theorist Jathan Sadowski, whose new book, “The Mechanic and the Luddite,” argues that the fetishization of A.I. and other digital technologies obscures their role in disciplining labor and reinforcing a profit-driven system. “Luddites want technology—the future—to work for all of us,” he told the Guardian. The technology journalist Brian Merchant makes a similar case in “Blood in the Machine: The Origins of the Rebellion Against Big Tech” (2023). Blending a vivid account of the original Luddites with an indictment of contemporary tech giants like Amazon and Uber, Merchant portrays the current wave of automation as part of a centuries-long struggle over labor and power. “Working people are staring down entrepreneurs, tech monopolies, and venture capital firms that are hunting for new forms of labor-saving tech—be it AI, robotics, or software automation—to replace them,” Merchant writes. “They are again faced with losing their jobs to the machine.”…(More)”.

Test and learn: a playbook for mission-driven government


Playbook by the Behavioral Insights Team: “…sets out more detailed considerations around embedding test and learn in government, along with a broader range of methods that can be used at different stages of the innovation cycle. These can be combined flexibly, depending on the stage of the policy or service cycle, the available resources, and the nature of the challenge – whether that’s improving services, testing creative new approaches, or navigating uncertainty in new policy areas.
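
As one concrete instance of the test-and-learn core, the sketch below compares a pilot service against business as usual with a hand-rolled two-proportion z-test; the counts are invented for illustration and are not drawn from the playbook.

```python
# Minimal sketch of "test and learn": did the pilot group's outcome rate
# differ from the control group's by more than chance would explain?
import math

def two_proportion_ztest(success_a: int, n_a: int, success_b: int, n_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for H0: the two rates are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, via the normal CDF
    return z, p_value

# Hypothetical pilot: 460 of 2,000 treated residents responded vs 400 of 2,000 controls.
z, p = two_proportion_ztest(460, 2000, 400, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests the service change helped
```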

Almost all of the methods set out can be augmented or accelerated by harnessing AI tools – from using AI agents to conduct large-scale qualitative research, to AI-enhanced evidence discovery and analysis, and AI-powered systems mapping and modelling. AI should be treated as a core component of the toolkit at each stage. And the pace at which applications of AI are evolving is another strong argument for maintaining an agile mindset and regularly updating our ways of working.

We hope this playbook will make test and learn more tangible to people who are new to it, and will expand the toolkit of people who have more experience with the approach. And ultimately we hope it will serve as a practical cheatsheet for building and improving the fabric of life…(More)”.

The Future is Coded: How AI is Rewriting the Rules of Decision Theaters


Essay by Mark Esposito and David De Cremer: “…These advances are not happening in isolation on engineers’ laptops; they are increasingly playing out in “decision theaters” – specialized environments (physical or virtual) designed for interactive, collaborative problem-solving. A decision theater is typically a space equipped with high-resolution displays, simulation engines, and data visualization tools where stakeholders can convene to explore complex scenarios. Originally pioneered at institutions like Arizona State University, the concept of a decision theater has gained traction as a way to bring together diverse expertise – economists, scientists, community leaders, government officials, and now AI systems – under one roof.

By visualizing possible futures (say, the spread of a wildfire or the regional impact of an economic policy) in an engaging, shared format, these theaters make foresight a participatory exercise rather than an academic one. In the age of generative AI, decision theaters are evolving into hubs for human-AI collaboration.

Picture a scenario where city officials are debating a climate adaptation policy. Inside a decision theater, an AI model might project several climate futures for the city (varying rainfall, extreme heat incidents, flood patterns) on large screens. Stakeholders can literally see the potential impacts on maps and graphs. They can then ask the AI to adjust assumptions – “What if we add more green infrastructure in this district?” – and within seconds, watch a new projection unfold. This real-time interaction allows for an iterative dialogue between human ideas and AI-generated outcomes. Participants can inject local knowledge or voice community values, and the AI will incorporate that input to revise the scenario. The true power of generative AI in a decision theater lies in this collaboration.
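
To ground that what-if loop, here is a minimal sketch in Python. The toy flood model, its coefficients, and the assumption names are invented stand-ins for the real simulation engines a decision theater would run; the point is the shape of the interaction: adjust one assumption, re-project, compare.

```python
# Sketch of the decision-theater loop: assumptions in, projection out,
# then a "what if?" adjustment followed by an immediate re-projection.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Assumptions:
    annual_rainfall_mm: float   # climate input
    green_infra_share: float    # fraction of districts with green infrastructure

def project_flooded_households(a: Assumptions) -> float:
    """Toy projection: flooding grows with rainfall, shrinks with green infrastructure."""
    baseline = 10_000
    rainfall_effect = 8.0 * max(a.annual_rainfall_mm - 900, 0)
    mitigation = 1.0 - 0.6 * a.green_infra_share
    return (baseline + rainfall_effect) * mitigation

current = Assumptions(annual_rainfall_mm=1_100, green_infra_share=0.10)
print(f"status quo:       {project_flooded_households(current):,.0f} households at risk")

# "What if we add more green infrastructure in this district?"
what_if = replace(current, green_infra_share=0.35)
print(f"more green infra: {project_flooded_households(what_if):,.0f} households at risk")
```

In a real theater, the projection function would be a calibrated simulation and the adjustment would come from a stakeholder's spoken or typed question, but the iterative dialogue is the same.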

Such interactive environments enhance learning and consensus-building. When stakeholders jointly witness how certain choices lead to undesirable futures (for instance, a policy leading to water shortages in a simulation), it can galvanize agreement on preventative action. Moreover, the theater setup encourages asking “What if?” in a safe sandbox, including ethically fraught questions. Because the visualizations make outcomes concrete, they naturally prompt ethical deliberation: If one scenario shows economic growth but high social inequality, is that future acceptable? If not, how can we tweak inputs to produce a more equitable outcome? In this way, decision theaters embed ethical and social considerations into high-tech planning, ensuring that the focus isn’t just on what is likely or profitable but on what is desirable for communities. This participatory approach helps balance technological possibilities with human values and cultural sensitivities. It’s one thing for an AI to suggest an optimal solution on paper; it’s another to have community representatives in the room, engaging with that suggestion and shaping it to fit local norms and needs.

Equally important, decision theaters democratize foresight. They open up complex decision-making processes to diverse stakeholders, not just technical experts. City planners, elected officials, citizens’ groups, and subject matter specialists can all contribute in real time, aided by AI. This inclusive model guards against the risk of AI becoming an opaque oracle controlled by a few. Instead, the AI’s insights are put on display for all to scrutinize and question. By doing so, the process builds trust in the tools and the decisions that come out of them. When people see that an AI’s recommendation emerged from transparent, interactive exploration – rather than a mysterious black box – they may be more likely to trust and accept the outcome. As one policy observer noted, it’s essential to bring ideas from across sectors and disciplines into these AI-assisted discussions so that solutions “work for people, not just companies.” If designed well, decision theaters operationalize that principle…(More)”.

Mind the (Language) Gap: Mapping the Challenges of LLM Development in Low-Resource Language Contexts


White Paper by the Stanford Institute for Human-Centered AI (HAI), the Asia Foundation and the University of Pretoria: “…maps the LLM development landscape for low-resource languages, highlighting challenges, trade-offs, and strategies to increase investment; prioritize cross-disciplinary, community-driven development; and ensure fair data ownership…

  • Large language model (LLM) development suffers from a digital divide: Most major LLMs underperform for non-English—and especially low-resource—languages; are not attuned to relevant cultural contexts; and are not accessible in parts of the Global South.
  • Low-resource languages (such as Swahili or Burmese) face two crucial limitations: a scarcity of labeled and unlabeled language data, and poor-quality data that is not sufficiently representative of the languages and their sociocultural contexts.
  • To bridge these gaps, researchers and developers are exploring different technical approaches to developing LLMs that perform better for and better represent low-resource languages, each with its own trade-offs (a minimal fine-tuning sketch follows the list):
    • Massively multilingual models, developed primarily by large U.S.-based firms, aim to improve performance for more languages by including a wider range of (100-plus) languages in their training datasets.
    • Regional multilingual models, developed by academics, governments, and nonprofits in the Global South, use smaller training datasets made up of 10-20 low-resource languages to better cater to and represent a smaller group of languages and cultures.
    • Monolingual or monocultural models, developed by a variety of public and private actors, are trained on or fine-tuned for a single low-resource language and thus tailored to perform well for that language…(More)”
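
As a concrete illustration of that third approach, here is a minimal fine-tuning sketch using the Hugging Face transformers and datasets libraries; the base checkpoint, corpus file, and hyperparameters are illustrative assumptions rather than the white paper's recommendations.

```python
# Sketch: adapt a small multilingual base model to a single low-resource
# language by continued causal-LM training on a plain-text corpus.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

BASE = "bigscience/bloom-560m"  # example small multilingual checkpoint
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# Plain-text corpus in the target language, one document per line (hypothetical file).
corpus = load_dataset("text", data_files={"train": "swahili_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="swahili-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM labels
)
trainer.train()
```

The trade-off the white paper flags shows up directly here: everything hinges on the size and representativeness of that single-language corpus, which is exactly what low-resource settings lack.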

Deliberative Approaches to Inclusive Governance


Series edited by Taylor Owen and Sequoia Kim: “Democracy has undergone profound changes over the past decade, shaped by rapid technological, social, and political transformations. Across the globe, citizens are demanding more meaningful and sustained engagement in governance—especially around emerging technologies like artificial intelligence (AI), which increasingly shape the contours of public life.

From world-leading experts in deliberative democracy, civic technology, and AI governance, we introduce a seven-part essay series exploring how deliberative democratic processes like citizens’ assemblies and civic tech can strengthen AI governance…(More)”.

Open with care: transparency and data sharing in civically engaged research


Paper by Ankushi Mitra: “Research transparency and data access are considered increasingly important for advancing research credibility, cumulative learning, and discovery. However, debates persist about how to define and achieve these goals across diverse forms of inquiry. This article intervenes in these debates, arguing that the participants and communities with whom scholars work are active stakeholders in science, and thus have a range of rights, interests, and researcher obligations to them in the practice of transparency and openness. Drawing on civically engaged research and related approaches that advocate for subjects of inquiry to more actively shape its process and share in its benefits, I outline a broader vision of research openness not only as a matter of peer scrutiny among scholars or a top-down exercise in compliance, but rather as a space for engaging and maximizing opportunities for all stakeholders in research. Accordingly, this article provides an ethical and practical framework for broadening transparency, accessibility, and data-sharing and benefit-sharing in research. It promotes movement beyond open science to a more inclusive and socially responsive science anchored in a larger ethical commitment: that the pursuit of knowledge be accountable and its benefits made accessible to the citizens and communities who make it possible…(More)”.

Artificial Intelligence and Big Data


Book edited by Frans L. Leeuw and Michael Bamberger: “…explores how Artificial Intelligence (AI) and Big Data contribute to the evaluation of the rule of law (covering legal arrangements, empirical legal research, law and technology, and international law), and social and economic development programs in both industrialized and developing countries. Issues of ethics and bias in the use of AI are also addressed and indicators of the growth of knowledge in the field are discussed.

Interdisciplinary and international in scope, and bringing together leading academics and practitioners from across the globe, the book explores the applications of AI and big data in Rule of Law and development evaluation, identifies differences in the approaches used in the two fields and how each could learn from the other, and examines how the AI-related issues addressed in industrialized nations differ from those addressed in Africa and Asia.

Artificial Intelligence and Big Data is an essential read for researchers, academics, and students working in the fields of Rule of Law and Development; researchers in institutions working on new applications in AI will also benefit from the book’s practical insights…(More)”.