The Future is Coded: How AI is Rewriting the Rules of Decision Theaters


Essay by Mark Esposito and David De Cremer: “…These advances are not happening in isolation on engineers’ laptops; they are increasingly playing out in “decision theaters” – specialized environments (physical or virtual) designed for interactive, collaborative problem-solving. A decision theater is typically a space equipped with high-resolution displays, simulation engines, and data visualization tools where stakeholders can convene to explore complex scenarios. Originally pioneered at institutions like Arizona State University, the concept of a decision theater has gained traction as a way to bring together diverse expertise – economists, scientists, community leaders, government officials, and now AI systems – under one roof. By visualizing possible futures (say, the spread of a wildfire or the regional impact of an economic policy) in an engaging, shared format, these theaters make foresight a participatory exercise rather than an academic one.

In the age of generative AI, decision theaters are evolving into hubs for human-AI collaboration. Picture a scenario where city officials are debating a climate adaptation policy. Inside a decision theater, an AI model might project several climate futures for the city (varying rainfall, extreme heat incidents, flood patterns) on large screens. Stakeholders can literally see the potential impacts on maps and graphs. They can then ask the AI to adjust assumptions – “What if we add more green infrastructure in this district?” – and within seconds, watch a new projection unfold. This real-time interaction allows for an iterative dialogue between human ideas and AI-generated outcomes. Participants can inject local knowledge or voice community values, and the AI will incorporate that input to revise the scenario. The true power of generative AI in a decision theater lies in this collaboration.
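To make the “what if?” loop concrete, a minimal sketch follows. The projection function, parameters, and coefficients are hypothetical placeholders standing in for the calibrated simulation engines a real decision theater would run; it is meant only to illustrate the adjust-assumptions-and-reproject interaction the essay describes, not any particular model.

    # Illustrative sketch of a decision-theater "what if?" loop.
    # The projection function and its coefficients are hypothetical placeholders,
    # not a real climate, flood, or economic model.
    from dataclasses import dataclass

    @dataclass
    class Assumptions:
        rainfall_increase_pct: float        # assumed change in heavy-rainfall intensity
        green_infrastructure_share: float   # fraction of the district given green cover

    def project_flood_risk(a: Assumptions) -> float:
        """Toy projection: risk rises with rainfall and falls with green cover."""
        base_risk = 100.0
        return base_risk * (1 + a.rainfall_increase_pct / 100) * (1 - 0.4 * a.green_infrastructure_share)

    # Baseline scenario shown on the theater's displays
    baseline = Assumptions(rainfall_increase_pct=20, green_infrastructure_share=0.05)
    print("Baseline flood-risk index:", round(project_flood_risk(baseline), 1))

    # A stakeholder asks: "What if we add more green infrastructure in this district?"
    what_if = Assumptions(rainfall_increase_pct=20, green_infrastructure_share=0.25)
    print("Revised flood-risk index:", round(project_flood_risk(what_if), 1))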

Such interactive environments enhance learning and consensus-building. When stakeholders jointly witness how certain choices lead to undesirable futures (for instance, a policy leading to water shortages in a simulation), it can galvanize agreement on preventative action. Moreover, the theater setup encourages asking “What if?” in a safe sandbox, including ethically fraught questions. Because the visualizations make outcomes concrete, they naturally prompt ethical deliberation: If one scenario shows economic growth but high social inequality, is that future acceptable? If not, how can we tweak inputs to produce a more equitable outcome? In this way, decision theaters embed ethical and social considerations into high-tech planning, ensuring that the focus isn’t just on what is likely or profitable but on what is desirable for communities. This participatory approach helps balance technological possibilities with human values and cultural sensitivities. It’s one thing for an AI to suggest an optimal solution on paper; it’s another to have community representatives in the room, engaging with that suggestion and shaping it to fit local norms and needs.

Equally important, decision theaters democratize foresight. They open up complex decision-making processes to diverse stakeholders, not just technical experts. City planners, elected officials, citizens’ groups, and subject matter specialists can all contribute in real time, aided by AI. This inclusive model guards against the risk of AI becoming an opaque oracle controlled by a few. Instead, the AI’s insights are put on display for all to scrutinize and question. By doing so, the process builds trust in the tools and the decisions that come out of them. When people see that an AI’s recommendation emerged from transparent, interactive exploration – rather than a mysterious black box – they may be more likely to trust and accept the outcome. As one policy observer noted, it’s essential to bring ideas from across sectors and disciplines into these AI-assisted discussions so that solutions “work for people, not just companies.” If designed well, decision theaters operationalize that principle…(More)”.

Mind the (Language) Gap: Mapping the Challenges of LLM Development in Low-Resource Language Contexts


White Paper by the Stanford Institute for Human-Centered AI (HAI), the Asia Foundation and the University of Pretoria: “…maps the LLM development landscape for low-resource languages, highlighting challenges, trade-offs, and strategies to increase investment; prioritize cross-disciplinary, community-driven development; and ensure fair data ownership…

  • Large language model (LLM) development suffers from a digital divide: Most major LLMs underperform for non-English—and especially low-resource—languages; are not attuned to relevant cultural contexts; and are not accessible in parts of the Global South.
  • Low-resource languages (such as Swahili or Burmese) face two crucial limitations: a scarcity of labeled and unlabeled language data, and poor-quality data that is not sufficiently representative of the languages and their sociocultural contexts.
  • To bridge these gaps, researchers and developers are exploring different technical approaches to developing LLMs that perform better for, and better represent, low-resource languages; each approach comes with different trade-offs (an illustrative fine-tuning sketch follows this list):
    • Massively multilingual models, developed primarily by large U.S.-based firms, aim to improve performance for more languages by including a wider range of (100-plus) languages in their training datasets.
    • Regional multilingual models, developed by academics, governments, and nonprofits in the Global South, use smaller training datasets made up of 10-20 low-resource languages to better cater to and represent a smaller group of languages and cultures.
    • Monolingual or monocultural models, developed by a variety of public and private actors, are trained on or fine-tuned for a single low-resource language and thus tailored to perform well for that language…(More)”
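To give a concrete sense of the third approach, the sketch below continues pretraining a small multilingual checkpoint on a single low-resource language corpus. It assumes the Hugging Face transformers and datasets libraries, an illustrative XGLM base model, and a hypothetical local Swahili text file; the white paper prescribes no particular stack, so treat this as one plausible shape rather than a recipe.

    # Hedged sketch of the monolingual/fine-tuned approach: continued pretraining
    # of a multilingual base model on one low-resource language.
    # Model name, corpus file, and hyperparameters are illustrative assumptions.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base_model = "facebook/xglm-564M"   # assumed small multilingual checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base_model)

    # Hypothetical local corpus of Swahili text, one passage per line.
    corpus = load_dataset("text", data_files={"train": "swahili_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="xglm-swahili",
                               num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=tokenized["train"],
        data_collator=collator,
    )
    trainer.train()   # adapts the model toward the low-resource language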

Deliberative Approaches to Inclusive Governance


Series edited by Taylor Owen and Sequoia Kim: “Democracy has undergone profound changes over the past decade, shaped by rapid technological, social, and political transformations. Across the globe, citizens are demanding more meaningful and sustained engagement in governance—especially around emerging technologies like artificial intelligence (AI), which increasingly shape the contours of public life.

From world-leading experts in deliberative democracy, civic technology, and AI governance, we introduce a seven-part essay series exploring how deliberative democratic processes like citizens’ assemblies and civic tech can strengthen AI governance…(More)”.

Open with care: transparency and data sharing in civically engaged research


Paper by Ankushi Mitra: “Research transparency and data access are considered increasingly important for advancing research credibility, cumulative learning, and discovery. However, debates persist about how to define and achieve these goals across diverse forms of inquiry. This article intervenes in these debates, arguing that the participants and communities with whom scholars work are active stakeholders in science, and thus have a range of rights, interests, and researcher obligations to them in the practice of transparency and openness. Drawing on civically engaged research and related approaches that advocate for subjects of inquiry to more actively shape its process and share in its benefits, I outline a broader vision of research openness not only as a matter of peer scrutiny among scholars or a top-down exercise in compliance, but rather as a space for engaging and maximizing opportunities for all stakeholders in research. Accordingly, this article provides an ethical and practical framework for broadening transparency, accessibility, and data-sharing and benefit-sharing in research. It promotes movement beyond open science to a more inclusive and socially responsive science anchored in a larger ethical commitment: that the pursuit of knowledge be accountable and its benefits made accessible to the citizens and communities who make it possible…(More)”.

Artificial Intelligence and Big Data


Book edited by Frans L. Leeuw and Michael Bamberger: “…explores how Artificial Intelligence (AI) and Big Data contribute to the evaluation of the rule of law (covering legal arrangements, empirical legal research, law and technology, and international law), and of social and economic development programs in both industrialized and developing countries. Issues of ethics and bias in the use of AI are also addressed, and indicators of the growth of knowledge in the field are discussed.

Interdisciplinary and international in scope, and bringing together leading academics and practitioners from across the globe, the book explores the applications of AI and big data in Rule of Law and development evaluation, identifies differences in the approaches used in the two fields and how each could learn from the other, and examines how the AI-related issues addressed in industrialized nations differ from those addressed in Africa and Asia.

Artificial Intelligence and Big Data is an essential read for researchers, academics, and students working in the fields of Rule of Law and Development; researchers in institutions working on new applications in AI will also benefit from the book’s practical insights…(More)”.

Decision Making under Deep Uncertainty and the Great Acceleration


Paper by Robert J. Lempert: “Seventy-five years into the Great Acceleration—a period marked by unprecedented growth in human activity and its effects on the planet—some type of societal transformation is inevitable. Successfully navigating these tumultuous times requires scientific, evidence-based information as an input into society’s value-laden decisions at all levels and scales. The methods and tools most commonly used to bring such expert knowledge to policy discussions employ predictions of the future, which under the existing conditions of complexity and deep uncertainty can often undermine trust and hinder good decisions. How, then, should experts best inform society’s attempts to navigate when both experts and decisionmakers are sure to be surprised? Decision Making under Deep Uncertainty (DMDU) offers an answer to this question. With its focus on model pluralism, learning, and robust solutions coproduced in a participatory process of deliberation with analysis, DMDU can repair the fractured conversations among policy experts, decisionmakers, and the public. In this paper, the author explores how DMDU can reshape policy analysis to better align with the demands of a rapidly evolving world and offers insights into the roles and opportunities for experts to inform societal debates and actions toward more-desirable futures…(More)”.
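As a rough illustration of the robustness logic behind DMDU methods such as Robust Decision Making, the sketch below stress-tests a few candidate policies against an ensemble of plausible futures and picks the one with the smallest worst-case regret. The policies, futures, and payoff numbers are hypothetical and not drawn from the paper; they only show how the analysis rewards options that perform acceptably everywhere rather than optimally under one predicted future.

    # Minimal min-max-regret comparison across an ensemble of plausible futures.
    # All policies, futures, and payoff values are hypothetical.
    policies = ["business_as_usual", "adaptive_policy", "aggressive_mitigation"]
    futures = ["mild", "moderate", "severe"]

    # Hypothetical payoff (e.g., net societal benefit) of each policy in each future.
    payoff = {
        ("business_as_usual", "mild"): 100, ("business_as_usual", "moderate"): 40, ("business_as_usual", "severe"): -80,
        ("adaptive_policy", "mild"): 85, ("adaptive_policy", "moderate"): 70, ("adaptive_policy", "severe"): 30,
        ("aggressive_mitigation", "mild"): 60, ("aggressive_mitigation", "moderate"): 65, ("aggressive_mitigation", "severe"): 55,
    }

    # Regret = best achievable payoff in a future minus the policy's payoff there.
    best_in_future = {f: max(payoff[(p, f)] for p in policies) for f in futures}
    max_regret = {p: max(best_in_future[f] - payoff[(p, f)] for f in futures) for p in policies}

    # A robust choice minimizes worst-case regret instead of betting on one forecast.
    robust_choice = min(max_regret, key=max_regret.get)
    print(max_regret)                                 # {'business_as_usual': 135, 'adaptive_policy': 25, 'aggressive_mitigation': 40}
    print("Min-max-regret policy:", robust_choice)    # adaptive_policy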

Who Owns Science?


Article by Lisa Margonelli: “Only a few months into 2025, the scientific enterprise is reeling from a series of shocks—mass firings of the scientific workforce across federal agencies, cuts to federal research budgets, threats to indirect costs for university research, proposals to tax endowments, termination of federal science advisory committees, and research funds to prominent universities held hostage over political conditions. Amid all this, the public has not shown much outrage at—or even interest in—the dismantling of the national research project that they’ve been bankrolling for the past 75 years.

Some evidence of a disconnect from the scientific establishment was visible in confirmation hearings of administration appointees. During his Senate nomination hearing to head the Department of Health and Human Services, Robert F. Kennedy Jr. promised a reorientation of research from infectious disease toward chronic conditions, along with “radical transparency” to rebuild trust in science. While his fans applauded, he insisted that he was not anti-vaccine, declaring, “I am pro-safety.”

But lack of public reaction to funding cuts need not be pinned on distrust of science; it could simply be that few citizens see the $200-billion-per-year, envy-of-the-world scientific enterprise as their own. On March 15, Alabama meteorologist James Spann took to Facebook to narrate the approach of 16 tornadoes in the state, taking note that people didn’t seem to care about the president’s threat to close the National Weather Service. “People say, ‘Well, if they shut it down, I’ll just use my app,’” Spann told Inside Climate News. “Well, where do you think the information on your app comes from? It comes from computer model output that’s run by the National Weather Service.” The public has paid for those models for generations, but only a die-hard weather nerd can find the acronyms for the weather models that signal that investment on these apps…(More)”.

UAE set to use AI to write laws in world first


Article by Chloe Cornish: “The United Arab Emirates aims to use AI to help write new legislation and review and amend existing laws, in the Gulf state’s most radical attempt to harness a technology into which it has poured billions.

The plan for what state media called “AI-driven regulation” goes further than anything seen elsewhere, AI researchers said, while noting that details were scant. Other governments are trying to use AI to become more efficient, from summarising bills to improving public service delivery, but not to actively suggest changes to current laws by crunching government and legal data.

“This new legislative system, powered by artificial intelligence, will change how we create laws, making the process faster and more precise,” said Sheikh Mohammad bin Rashid Al Maktoum, the Dubai ruler and UAE vice-president, quoted by state media.

Ministers last week approved the creation of a new cabinet unit, the Regulatory Intelligence Office, to oversee the legislative AI push. 

Rony Medaglia, a professor at Copenhagen Business School, said the UAE appeared to have an “underlying ambition to basically turn AI into some sort of co-legislator”, and described the plan as “very bold”.

Abu Dhabi has bet heavily on AI and last year opened a dedicated investment vehicle, MGX, which has backed a $30bn BlackRock AI-infrastructure fund among other investments. MGX has also added an AI observer to its own board.

The UAE plans to use AI to track how laws affect the country’s population and economy by creating a massive database of federal and local laws, together with public sector data such as court judgments and government services.

The AI will “regularly suggest updates to our legislation,” Sheikh Mohammad said, according to state media. The government expects AI to speed up lawmaking by 70 per cent, according to the cabinet meeting readout…(More)”

For sale: Data on US servicemembers — and lots of it


Article by Alfred Ng: “Active-duty members of the U.S. military are vulnerable to having their personal information collected, packaged and sold to overseas companies without any vetting, according to a new report funded by the U.S. Military Academy at West Point.

The report highlights a significant American security risk, according to military officials, lawmakers and the experts who conducted the research, and who say the data available on servicemembers exposes them to blackmail based on their jobs and habits.

It also casts a spotlight on the practices of data brokers, a set of firms that specialize in scraping and packaging people’s digital records such as health conditions and credit ratings.

“It’s really a case of being able to target people based on specific vulnerabilities,” said Maj. Jessica Dawson, a research scientist at the Army Cyber Institute at West Point who initiated the study.

Data brokers gather government files, publicly available information and financial records into packages they can sell to marketers and other interested companies. As the practice has grown into a $214 billion industry, it has raised privacy concerns and come under scrutiny from lawmakers in Congress and state capitals.

Worried it could also present a risk to national security, the U.S. Military Academy at West Point funded the study from Duke University to see how servicemembers’ information might be packaged and sold.

Posing as buyers in the U.S. and Singapore, Duke researchers contacted multiple data-broker firms that listed datasets about active-duty servicemembers for sale. Three agreed and sold datasets to the researchers, while two declined, saying the requests came from companies that didn’t meet their verification standards.

In total, the datasets contained information on nearly 30,000 active-duty military personnel. The researchers also purchased a dataset on an additional 5,000 friends and family members of military personnel…(More)”

Spaces for Deliberation


Report by Gustav Kjær Vad Nielsen & James MacDonald-Nelson: “As citizens’ assemblies and other forms of citizen deliberation are increasingly implemented in many parts of the world, it is becoming more relevant to explore and question the role of the physical spaces in which these processes take place.

This paper builds on existing literature that considers the relationship between space and democracy. That relationship has been studied with a focus on the architecture of parliament buildings, and on the role of urban public spaces and architecture for political culture, both largely within the context of representative democracy and with little or no attention given to spaces for facilitated citizen deliberation. Given this limited consideration of spaces for deliberative assemblies, we argue in this paper that the spatial qualities of citizen deliberation demand more critical attention.

Through a series of interviews with leading practitioners of citizens’ assemblies from six different countries, we explore what spatial qualities are typically considered in the planning and implementation of these assemblies, what recurring challenges relate to the physical spaces where they take place, and what opportunities and limitations exist for a more intentional spatial design. In this paper, we synthesise our findings and formulate a series of considerations for the spatial qualities of citizens’ assemblies, aimed at informing future practice and further research…(More)”.