Stefaan Verhulst
About: “Stalled progress on many of the most pressing challenges facing our nation stems not from a failure of will, but from pervasive stasis in government. There is an increasingly obvious mismatch between “wicked” modern problems and the aging institutions and regulatory strategies we rely on to solve them. Public trust in government is in the basement as a result.
The FAS Center for Regulatory Ingenuity (CRI) is building a new, transpartisan vision of government that works – that achieves ambitious goals while adeptly responding to people’s basic, everyday needs. CRI does this by (1) creating high-trust environments to brainstorm and refine the big ideas that will breathe new life into government institutions and intersecting democratic feedback loops, and (2) building a “network of networks” that supports policymakers and practitioners in implementing those ideas at scale.
CRI’s initial focus is on climate policy: a space where mismatches between the tools we have and the tools we need are particularly apparent. Foundational environmental laws like the Clean Air Act and Clean Water Act were designed to curb industrial pollution, not guide the society-wide economic transition to clean technologies that’s underway, and our systems for democratic participation and government capacity are equally out of sync with our most pressing needs and opportunities.
Successfully navigating this transition means seriously considering how we can update 20th century laws for a 21st century world, better coupling regulatory and non-regulatory approaches, and focusing on solutions that can deliver near-term benefits while building momentum for more ambitious national reforms.
CRI is bringing the climate and state capacity communities together to do just that. The last thing we need in the face of big challenges is stasis. It’s time to move boldly towards a government Americans trust to deliver…(More)”.
Book by Gerald Zaltman: “…presents six techniques to tap into the creative power of the unconscious: serious playfulness, befriending ignorance, asking the right discovery questions, chasing your curiosity, panoramic thinking, and using the “voyager outlook.” These research-based techniques improve decision-making and go beyond the existing literature on “thinking smarter.” This book’s insights emerge from a large number of one-on-one in-depth interviews with senior leaders around the globe, reinforced with research findings from scientific literatures.
Mirroring Zaltman’s Harvard Business School classroom practice, each chapter opens with a practical-thinking exercise that helps readers surface the mental processes and biases that unconsciously close minds and constrict thinking. This creative surfacing is the crucial foundation for any leader operating in a complex, uncertain environment, who needs unconventional solutions to challenging problems…(More)”.
Essay by Julien Lie-Panis: “Every human society, from the smallest village to the largest nation, faces the same fundamental challenge: how to get people to act in the interests of the collective rather than their own. Fishermen must limit their catch so fish stocks don’t collapse. People must respect others’ property and safety. Citizens must pay taxes to fund roads, schools and hospitals. Left to pure self-interest, no community could endure; the bonds of collective life would quickly unravel.
The solutions we’ve devised are remarkably similar across cultures and centuries. We create rules. Then we appoint guardians to enforce them. Those who break the rules are punished. But there’s a problem with this approach, one that the Roman poet Juvenal identified nearly 2,000 years ago: Quis custodiet ipsos custodes? Who will guard the guards themselves?
Fisheries appoint monitors to prevent overfishing – but what if the monitors accept bribes to look the other way? Police officers exist to protect everyone’s property and safety – but who ensures that they don’t abuse their power? Governments collect taxes for public services – but how do we stop officials from diverting the funds to their own accounts?
Every institution faces the same fundamental paradox. Institutions foster cooperation by rewarding good behaviour and punishing rule-breakers. Yet they themselves depend on cooperative members to function. We haven’t solved the cooperation problem – we’ve simply moved it back one step. So why do institutions work at all? To understand this puzzle, we need to first ask what makes human cooperation so extraordinary in the natural world.
Cooperation is everywhere in nature. Walk through any forest, peer into any tide pool, observe any meadow, and you’ll witness countless partnerships that seem to defy the brutal logic of natural selection. Far from being mysterious, these alliances follow predictable patterns that evolutionary biologists have come to understand well. A handful of basic mechanisms explain cooperation from ant colonies to coral reefs: kinship, reciprocity and reputation…(More)”.
Paper by Mark Cabaj and Brent Wellsch with Keren Perla: “…offers a sharper way to understand the different types of portfolios that changemakers can employ, challenging the common tendency to treat all portfolio approaches as interchangeable. It outlines how distinct portfolio types embody different purposes, theories of change, management practices, and evaluation needs. The piece concludes with a clear call to action: practitioners must be explicit and intentional about the kind of portfolio they are designing, because different portfolio types support different pathways – and possibilities – for transformation…(More)”.
Chapter by Hossein Hassani & Steve Macfeely: “The rapid evolution and widespread deployment of Artificial Intelligence (AI) necessitate robust standardization to ensure AI systems’ reliability, fairness, and safety. This chapter explores the critical role of data labeling and testing in the global standardization of AI. Accurate data labeling forms the backbone of AI development, providing the foundation for high-quality, unbiased, and scalable AI models. Rigorous testing frameworks are essential to validate AI performance, security, and ethical compliance. This chapter discusses the challenges and advancements in these areas, highlighting their impact on global AI deployment. Through case studies and an exploration of future directions, we emphasize the importance of continuous improvement and international collaboration in AI standardization, aiming to foster trust, interoperability, and innovation in AI technologies.

The specific goal of this chapter is to explore how data labeling and testing contribute to the global standardization of Artificial Intelligence (AI) systems. The chapter addresses key research questions, such as: How do data labeling and testing practices affect AI reliability, fairness, and safety? What challenges exist in standardizing these practices across various industries? Furthermore, it aims to highlight the role of standardization in fostering international collaboration and ensuring the ethical deployment of AI technologies worldwide.

This chapter reveals that robust data labeling and comprehensive testing frameworks are critical for ensuring the ethical, reliable, and scalable deployment of AI systems globally. A key contribution of this work is its emphasis on the need for standardized practices to mitigate biases and improve interoperability, fostering greater international collaboration and trust in AI technologies…(More)”.
Article by Beatriz Botero Arcila, Pedro Ramaciotti, and Emma Cabale: “The Digital Services Act (DSA) is the most ambitious effort taken by liberal democratic nations to regulate social media platforms. One of the main ways it does this is by establishing various transparency obligations applicable to all platforms and search engines, as well as a specific transparency and data-sharing regime for the largest platforms and search engines, defined by the number of users. Specifically, Article 40 of the DSA grants vetted researchers access to data “for the sole purpose of conducting research that contributes to the detection, identification and understanding of systemic risks in the Union.” This article argues that Article 40’s requirement for the requested data to be “necessary and proportionate” to conduct a specific type of research may hinder effective research. Indeed, researchers have been denied broad access – or any access to data – on privacy or confidentiality grounds before (Bontcheva, 2024). Consequently, researchers have limited prior knowledge of social media impacts and thus may have limited knowledge about what data, exactly, they need. Drawing on cutting-edge social media research, we explain why vetted researchers may thus need broad access to social media data to meet the objectives of the DSA. More specifically, we argue that researchers need access to system-level data, meaning data that captures how an entire digital system or platform operates, not just the activity of individual users. Consequently, we propose an interpretation of the DSA’s data request requirements that would enable researchers to study the online political landscape of EU member states effectively…(More)” See also: Article 40(4-11) DSA: Guidance for Applicants
Article by Brian X. Chen: “This month, a federal judge ruled that a man’s conversations with Anthropic’s Claude chatbot were not protected by attorney-client privilege even though he had used the chatbot to prepare to talk with lawyers.
Two weeks ago, Ring, the Amazon-owned maker of doorbell cameras, provoked widespread outrage when it aired a Super Bowl ad showing how artificial intelligence could be used to find lost dogs. Critics quickly noted that it could also be used to monitor an entire neighborhood. The company has been on an apology tour ever since.
And over the past week, news surfaced that OpenAI, the company behind ChatGPT, had been aware of a British Columbia woman’s interactions with the chatbot and considered reporting her to the authorities months before she committed a mass shooting.
While OpenAI faces questions about whether it should have been more proactive about reporting what she wrote, the incident highlighted the possibility that A.I. companies will be under more pressure to share private chat logs with the authorities…(More)”
“A Data-Driven Framework for K-12 in the Age of AI” by The Burning Glass Institute: “AI is reshaping which skills matter for professional success—and, therefore, what students need to learn. This report provides a data-driven framework for K-12 educators to navigate this shift, analyzing AI’s impact on 1,000 workforce skills and mapping the implications for 140 high school learning objectives. It offers a clear method for identifying where and how curriculum needs to be rebalanced. Three key themes emerge:

The cognitive bar is rising. Skills with high automation exposure now demand deeper conceptual understanding, not less, to empower students to direct and evaluate AI tools.
The real divide is within subjects, not between them. No discipline loses relevance, but every discipline contains skills that require new instructional approaches alongside skills that remain foundational.
Traditional assessment faces new challenges. When AI can generate polished outputs, evaluation must focus on the students’ thinking, reasoning, and judgment.
The report introduces a four-quadrant framework—Deepen, Transform, Streamline, Anchor—that helps educators make evidence-based decisions about what to emphasize, what to redesign, and what to protect in their curriculum…(More)”.
Report for European Parliament: “This study was prepared at the request of the European Parliament’s Committee on the Internal Market and Consumer Protection (IMCO). It analyses the European Commission’s Digital Omnibus package proposals published on 19 November 2025, distinguishing administrative simplification from more substantive recalibration of safeguards across the data, privacy, cybersecurity and artificial intelligence areas. The study highlights key areas of controversy (legal certainty, enforcement capacity, and impacts on rights) and sets out areas for consideration for parliamentary scrutiny…(More)”
Paper by Cecilie Steenbuch Traberg, Jon Roozenbeek & Sander van der Linden: “Conferences, journals, and funding calls in the social and behavioural sciences are increasingly dominated by (generative) AI. Many academics have rebranded themselves as “AI researchers”. Every project finds its “AI angle.” This shift is understandable and important: generative AI is a consequential technological development, and psychologists and behavioural scientists are well-positioned to examine its impacts. But this focus is becoming all-encompassing. The New Yorker recently argued that AI is “homogenizing our thoughts”: that by repeatedly surfacing the most probable continuations of human thought, these systems are nudging human reasoning toward conformity. Ironically, scientific culture is drifting toward a meta-version of that claim. While earlier work warned that increasing AI adoption may lead to a scientific monoculture, empirical evidence now suggests this process is underway. In studying AI, research practices are themselves becoming more uniform – converging not only in what is studied, but in how questions are framed, investigated, and evaluated. Understanding this convergence as a feedback loop rather than an unavoidable trend opens the possibility of targeted interventions to preserve scientific diversity before monocropping becomes fully entrenched…(More)”.