Playing for science: Designing science games


Paper by Claudio M Radaelli: “How can science have more impact on policy decisions? The P-Cube Project has approached this question by creating five pedagogical computer games based on missions given to a policy entrepreneur (the player) advocating for science-informed policy decisions. The player explores simplified strategies for policy change rooted in a small number of variables, thus making it possible to learn without a prior background in political science or public administration. The games evolved from the intuition that, instead of making additional efforts to explain science to decision-makers, we should directly empower would-be scientists (our primary audience for the games), post-graduates in public policy and administration, and activists for science. The two design principles of the games revolve around learning about how policy decisions are made (a learning-about-content principle) and reflection. Indeed, the presence of science in the policy process raises ethical and normative questions, especially when we consider controversial strategies like civil disobedience and alliances with industry. To be on the side of science does not mean to be outside society and politics. I show the motivation, principles, scripts and pilots of the science games, reflecting on how they can be used and for what reasons…(More)”

Updating purpose limitation for AI: a normative approach from law and philosophy 


Paper by Rainer Mühlhoff and Hannah Ruschemeier: “The purpose limitation principle goes beyond the protection of the individual data subjects: it aims to ensure transparency, fairness and its exception for privileged purposes. However, in the current reality of powerful AI models, purpose limitation is often impossible to enforce and is thus structurally undermined. This paper addresses a critical regulatory gap in EU digital legislation: the risk of secondary use of trained models and anonymised training datasets. Anonymised training data, as well as AI models trained from this data, pose the threat of being freely reused in potentially harmful contexts such as insurance risk scoring and automated job applicant screening. We propose shifting the focus of purpose limitation from data processing to AI model regulation. This approach mandates that those training AI models define the intended purpose and restrict the use of the model solely to this stated purpose…(More)”.

Rebooting the global consensus: Norm entrepreneurship, data governance and the inalienability of digital bodies


Paper by Siddharth Peter de Souza and Linnet Taylor: “The establishment of norms among states is a common way of governing international actions. This article analyses the potential of norm-building for governing data and artificial intelligence technologies’ collective effects. Rather than focusing on state actors’ ability to establish and enforce norms, however, we identify a contrasting process taking place among civil society organisations in response to the international neoliberal consensus on the commodification of data. The norm we identify – ‘nothing about us without us’ – asserts civil society’s agency, and specifically the right of those represented in datasets to give or refuse permission through structures of democratic representation. We argue that this represents a form of norm-building that should be taken as seriously as that of states, and analyse how it is constructing the political power, relations, and resources to engage in governing technology at scale. We first outline how this counter-norming is anchored in data’s connections to bodies, land, community, and labour. We explore the history of formal international norm-making and the current norm-making work being done by civil society organisations internationally, and argue that these, although very different in their configurations and strategies, are comparable in scale and scope. Based on this, we make two assertions: first, that a norm-making lens is a useful way for both civil society and research to frame challenges to the primacy of market logics in law and governance, and second, that the conceptual exclusion of civil society actors as norm-makers is an obstacle to the recognition of counter-power in those spheres…(More)”.

Mapping local knowledge supports science and stewardship


Paper by Sarah C. Risley, Melissa L. Britsch, Joshua S. Stoll & Heather M. Leslie: “Coastal marine social–ecological systems are experiencing rapid change. Yet, many coastal communities are challenged by incomplete data to inform collaborative research and stewardship. We investigated the role of participatory mapping of local knowledge in addressing these challenges. We used participatory mapping and semi-structured interviews to document local knowledge in two focal social–ecological systems in Maine, USA. By co-producing fine-scale characterizations of coastal marine social–ecological systems, highlighting local questions and needs, and generating locally relevant hypotheses on system change, our research demonstrates how participatory mapping and local knowledge can enhance decision-making capacity in collaborative research and stewardship. The results of this study directly informed a collaborative research project to document changes in multiple shellfish species, shellfish predators, and shellfish harvester behavior and other human activities. This research demonstrates that local knowledge can be a keystone component of collaborative social–ecological systems research and community-led environmental stewardship…(More)”.

Make privacy policies longer and appoint LLM readers


Paper by Przemysław Pałka et al: “In a world of human-only readers, a trade-off persists between comprehensiveness and comprehensibility: only privacy policies too long to be humanly readable can precisely describe the intended data processing. We argue that this trade-off no longer exists where LLMs are able to extract tailored information from clearly drafted, fully comprehensive privacy policies. To substantiate this claim, we provide a methodology for drafting comprehensive, non-ambiguous privacy policies and for querying them using LLM prompts. Our methodology is tested with an experiment aimed at determining to what extent GPT-4 and Llama2 are able to answer questions regarding the content of privacy policies designed in the format we propose. We further support this claim by analyzing real privacy policies in the chosen market sectors through two experiments (one with legal experts, and another by using LLMs). Based on the success of our experiments, we submit that data protection law should change: it must require controllers to provide clearly drafted, fully comprehensive privacy policies from which data subjects and other actors can extract the needed information, with the help of LLMs…(More)”.
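
As a rough illustration of the querying step the abstract describes, the sketch below passes a policy text and a data subject's question to a chat model. It is not the authors' code: the model choice, prompt wording, and function name are assumptions made purely for the example, under the assumption that an OpenAI API key is available.

```python
# Illustrative sketch only, not the paper's methodology or code.
# Assumes the OpenAI Python SDK (>=1.0) is installed and OPENAI_API_KEY is set;
# the prompt wording, model choice, and function name are hypothetical.
from openai import OpenAI

client = OpenAI()


def query_privacy_policy(policy_text: str, question: str) -> str:
    """Ask an LLM to answer a data subject's question using only the policy text."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You answer questions about the privacy policy provided by the user. "
                    "Base your answer strictly on the policy text; if the policy does not "
                    "address the question, say so explicitly."
                ),
            },
            {
                "role": "user",
                "content": f"Privacy policy:\n{policy_text}\n\nQuestion: {question}",
            },
        ],
        temperature=0,  # favour deterministic extraction over creative generation
    )
    return response.choices[0].message.content


# Example use: a targeted question posed against a long, comprehensive policy.
# answer = query_privacy_policy(
#     open("policy.txt").read(),
#     "Is my location data shared with third parties, and for what purpose?",
# )
```
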

Inquiry as Infrastructure: Defining Good Questions in the Age of Data and AI


Paper by Stefaan Verhulst: “The most consequential failures in data-driven policymaking and AI deployment often stem not from poor models or inadequate datasets but from poorly framed questions. This paper centers question literacy as a critical yet underdeveloped competency in the data and policy landscape. Arguing for a “new science of questions,” it explores what constitutes a good question: one that is not only technically feasible but also ethically grounded, socially legitimate, and aligned with real-world needs. Drawing on insights from The GovLab’s 100 Questions Initiative, the paper develops a taxonomy of question types (descriptive, diagnostic, predictive, and prescriptive) and identifies five essential criteria for question quality: questions must be general yet concrete, co-designed with affected communities and domain experts, purpose-driven and ethically sound, grounded in data and technical realities, and capable of evolving through iterative refinement. The paper also outlines common pathologies of bad questions, such as vague formulation, biased framing, and solution-first thinking. Rather than treating questions as incidental to analysis, it argues for institutionalizing deliberate question design through tools like Q-Labs, question maturity models, and new professional roles for data stewards. Ultimately, the paper contends that questions are infrastructures of meaning. What we ask shapes not only what data we collect or what models we build but also what values we uphold and what futures we make possible…(More)”.

Guiding the provision of quality policy advice: the 5D model


Paper by Christopher Walker and Sally Washington: “… presents a process model to guide the production of quality policy advice. The work draws on engagement with both public sector practitioners and academics to design a process model for the development of policy advice that works in practice (can be used by policy professionals in their day-to-day work) and aligns with theory (can be taught as part of explaining the dynamics of a wider policy advisory system). The 5D Model defines five key domains of inquiry: understanding Demand, being open to Discovery, undertaking Design, identifying critical Decision points, and shaping advice to enable Delivery. Our goal is a ‘repeatable, scalable’ model for supporting policy practitioners to provide quality advice to decision makers. The model was developed and tested through an extensive process of engagement with senior policy practitioners who noted the heuristic gave structure to practices that determine how policy advice is organized and formulated. Academic colleagues confirmed the utility of the model for explaining and teaching how policy is designed and delivered within the context of a wider policy advisory system (PAS). A unique aspect of this work was the collaboration and shared interest amongst academics and practitioners to define a model that is ‘useful for teaching’ and ‘useful for doing’…(More)”.

Open with care: transparency and data sharing in civically engaged research


Paper by Ankushi Mitra: “Research transparency and data access are considered increasingly important for advancing research credibility, cumulative learning, and discovery. However, debates persist about how to define and achieve these goals across diverse forms of inquiry. This article intervenes in these debates, arguing that the participants and communities with whom scholars work are active stakeholders in science, and thus have a range of rights, interests, and researcher obligations to them in the practice of transparency and openness. Drawing on civically engaged research and related approaches that advocate for subjects of inquiry to more actively shape its process and share in its benefits, I outline a broader vision of research openness not only as a matter of peer scrutiny among scholars or a top-down exercise in compliance, but rather as a space for engaging and maximizing opportunities for all stakeholders in research. Accordingly, this article provides an ethical and practical framework for broadening transparency, accessibility, and data-sharing and benefit-sharing in research. It promotes movement beyond open science to a more inclusive and socially responsive science anchored in a larger ethical commitment: that the pursuit of knowledge be accountable and its benefits made accessible to the citizens and communities who make it possible…(More)”.

Decision Making under Deep Uncertainty and the Great Acceleration


Paper by Robert J. Lempert: “Seventy-five years into the Great Acceleration—a period marked by unprecedented growth in human activity and its effects on the planet—some type of societal transformation is inevitable. Successfully navigating these tumultuous times requires scientific, evidence-based information as an input into society’s value-laden decisions at all levels and scales. The methods and tools most commonly used to bring such expert knowledge to policy discussions employ predictions of the future, which under the existing conditions of complexity and deep uncertainty can often undermine trust and hinder good decisions. How, then, should experts best inform society’s attempts to navigate when both experts and decisionmakers are sure to be surprised? Decision Making under Deep Uncertainty (DMDU) offers an answer to this question. With its focus on model pluralism, learning, and robust solutions coproduced in a participatory process of deliberation with analysis, DMDU can repair the fractured conversations among policy experts, decisionmakers, and the public. In this paper, the author explores how DMDU can reshape policy analysis to better align with the demands of a rapidly evolving world and offers insights into the roles and opportunities for experts to inform societal debates and actions toward more-desirable futures…(More)”.

Democratic Resilience: Moving from Theoretical Frameworks to a Practical Measurement Agenda


Paper by Nicholas Biddle, Alexander Fischer, Simon D. Angus, Selen Ercan, Max Grömping, and Matthew Gray: “Global indices and media narratives indicate a decline in democratic institutions, values, and practices. Simultaneously, democratic innovators are experimenting with new ways to strengthen democracy at local and national levels. Both trends suggest democracies are not static; they evolve as society, technology, and the environment change.

This paper examines democracy as a resilient system, emphasizing the role of applied analysis in shaping effective policy and programs, particularly in Australia. Grounded in adaptive processes, democratic resilience is the capacity of a democracy to identify problems and collectively respond to changing conditions, balancing institutional stability with transformative change. The paper outlines the ambition of a national network of scholars, civil society leaders, and policymakers to equip democratic innovators with practical insights and the foresight that underpins new ideas. These insights are essential for strengthening public institutions, public narratives, and community programs.

We review the current literature on resilient democracies and highlight a critical gap: measurement efforts focus heavily on composite indices, especially trust, while neglecting dynamic flows and causal drivers. These efforts describe features and identify weaknesses, but they offer little diagnostic evidence of what strengthens democracies. The gap is reflected in the lack of cross-sector, networked, living evidence systems to track what works and why across the intersecting dynamics of democratic practices. To address this, we propose a practical agenda centred on three core flows that strengthen democratic resilience: trusted institutions, credible information, and social inclusion.

The paper reviews six key data sources and several analytic methods for continuously monitoring democratic institutions, diagnosing causal drivers, and building an adaptive evidence system to inform innovation and reform. By integrating resilience frameworks and policy analysis, we demonstrate how real-time monitoring and analysis can enable innovation, experimentation and cross-sector ingenuity.

This article presents a practical research agenda connecting a national network of scholars and civil society leaders. We suggest this agenda be problem-driven, facilitated by participatory approaches to asking and prioritising the questions that matter most. We propose a connected approach to collectively posing those questions, expanding data sources, and fostering applied ideation between communities, civil society, government, and academia, ensuring democracy remains resilient in an evolving global and national context…(More)”.