Paper by Rashid Mushkani, Hugo Berard, Allison Cohen, Shin Koseki: “This paper proposes a Right to AI, which asserts that individuals and communities should meaningfully participate in the development and governance of the AI systems that shape their lives. Motivated by the increasing deployment of AI in critical domains and inspired by Henri Lefebvre’s concept of the Right to the City, we reconceptualize AI as a societal infrastructure, rather than merely a product of expert design. In this paper, we critically evaluate how generative agents, large-scale data extraction, and diverse cultural values bring new complexities to AI oversight. The paper proposes that grassroots participatory methodologies can mitigate biased outcomes and enhance social responsiveness. It asserts that data is socially produced and should be managed and owned collectively. Drawing on Sherry Arnstein’s Ladder of Citizen Participation and analyzing nine case studies, the paper develops a four-tier model for the Right to AI that situates the current paradigm and envisions an aspirational future. It offers recommendations for inclusive data ownership, transparent design processes, and stakeholder-driven oversight. We also discuss market-led and state-centric alternatives and argue that participatory approaches offer a better balance between technical efficiency and democratic legitimacy…(More)”.
Societal and technological progress as sewing an ever-growing, ever-changing, patchy, and polychrome quilt
Paper by Joel Z. Leibo et al: “Artificial Intelligence (AI) systems are increasingly placed in positions where their decisions have real consequences, e.g., moderating online spaces, conducting research, and advising on policy. Ensuring they operate in a safe and ethically acceptable fashion is thus critical. However, most solutions have been a form of one-size-fits-all “alignment”. We are worried that such systems, which overlook enduring moral diversity, will spark resistance, erode trust, and destabilize our institutions. This paper traces the underlying problem to an often-unstated Axiom of Rational Convergence: the idea that under ideal conditions, rational agents will converge in the limit of conversation on a single ethics. Treating that premise as both optional and doubtful, we propose what we call the appropriateness framework: an alternative approach grounded in conflict theory, cultural evolution, multi-agent systems, and institutional economics. The appropriateness framework treats persistent disagreement as the normal case and designs for it by applying four principles: (1) contextual grounding, (2) community customization, (3) continual adaptation, and (4) polycentric governance. We argue here that adopting these design principles is a good way to shift the main alignment metaphor from moral unification to a more productive metaphor of conflict management, and that taking this step is both desirable and urgent…(More)”.
Co-Designing AI Systems with Value-Sensitive Citizen Science
Paper by Sachit Mahajan and Dirk Helbing: “As artificial intelligence (AI) systems increasingly shape everyday life, integrating diverse community values into their development becomes both an ethical imperative and a practical necessity. This paper introduces Value Sensitive Citizen Science (VSCS), a systematic framework combining Value Sensitive Design (VSD) principles with citizen science methods to foster meaningful public participation in AI. Addressing critical gaps in existing approaches, VSCS integrates culturally grounded participatory methods and structured cognitive scaffolding through the Participatory Value-Cognition Taxonomy (PVCT). Through iterative value-sensitive participation cycles guided by an extended scenario logic (What-if, If-then, Then-what, What-now), community members act as genuine co-researchers, identifying, translating, and operationalizing local values into concrete technical requirements. The framework also institutionalizes governance structures for ongoing oversight, adaptability, and accountability across the AI lifecycle. By explicitly bridging participatory design with algorithmic accountability, VSCS ensures that AI systems reflect evolving community priorities rather than reinforcing top-down or monocultural perspectives. Critical discussions highlight VSCS’s practical implications, addressing challenges such as power dynamics, scalability, and epistemic justice. The paper concludes by outlining actionable strategies for policymakers and practitioners, alongside future research directions aimed at advancing participatory, value-driven AI development across diverse technical and sociocultural contexts…(More)”.
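For readers who want to see the cycle structure concretely, here is a minimal sketch of one VSCS participation cycle in plain Python. The class and field names are illustrative assumptions of ours, not the authors’ implementation of the PVCT; it only shows how scenario-stage answers might be carried into technical requirements.

```python
# A minimal, assumption-laden sketch of one VSCS participation cycle:
# community values flow through the extended scenario logic and are
# translated into concrete technical requirements.
from dataclasses import dataclass, field

SCENARIO_STAGES = ("What-if", "If-then", "Then-what", "What-now")

@dataclass
class CommunityValue:
    name: str             # e.g. "privacy" or "intergenerational care"
    articulated_by: str   # the community group that named the value

@dataclass
class ParticipationCycle:
    value: CommunityValue
    scenarios: dict = field(default_factory=dict)      # one answer per stage
    requirements: list = field(default_factory=list)   # derived requirements

    def record(self, stage: str, answer: str) -> None:
        # capture the community's answer at one stage of the scenario logic
        assert stage in SCENARIO_STAGES, f"unknown stage: {stage}"
        self.scenarios[stage] = answer

    def operationalize(self, requirement: str) -> None:
        # translate the completed discussion into a technical requirement
        self.requirements.append(requirement)

cycle = ParticipationCycle(CommunityValue("privacy", "neighbourhood panel"))
cycle.record("What-if", "What if the system logs every resident query?")
cycle.operationalize("Query logs must be aggregated and deleted after 30 days.")
```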
The RRI Citizen Review Panel: a public engagement method for supporting responsible territorial policymaking
Paper by Maya Vestergaard Bidstrup et al: “Responsible Territorial Policymaking incorporates the main principles of Responsible Research and Innovation (RRI) into the policymaking process, making it well-suited for guiding the development of sustainable and resilient territorial policies that prioritise societal needs. As a cornerstone of RRI, public engagement plays a central role in this process, underscoring the importance of involving all societal actors to align outcomes with the needs, expectations, and values of society. In the absence of existing methods for sufficiently and effectively gathering citizens’ reviews of multiple policies at a territorial level, the RRI Citizen Review Panel is a new public engagement method developed to facilitate citizens’ review and validation of territorial policies. By using RRI as an analytical framework, this paper examines whether the RRI Citizen Review Panel can support Responsible Territorial Policymaking, not only by incorporating citizens’ perspectives into territorial policymaking, but also by making policies more responsible. The paper demonstrates that in the review of territorial policies, citizens add elements of RRI to a wide range of policies within different policy areas, contributing to making policies more responsible. Consequently, the RRI Citizen Review Panel emerges as a valuable tool for policymakers, enabling them to gather citizen perspectives and imbue policies with a heightened sense of responsibility…(More)”.
Playing for science: Designing science games
Paper by Claudio M Radaelli: “How can science have more impact on policy decisions? The P-Cube Project has approached this question by creating five pedagogical computer games based on missions given to a policy entrepreneur (the player) advocating for science-informed policy decisions. The player explores simplified strategies for policy change rooted in a small number of variables, thus making it possible to learn without a prior background in political science or public administration. The games evolved from the intuition that, instead of making additional efforts to explain science to decision-makers, we should directly empower would-be scientists (our primary audience for the games), post-graduates in public policy and administration, and activists for science. The games rest on two design principles: learning about how policy decisions are made (a learning-about-content principle) and reflection. Indeed, the presence of science in the policy process raises ethical and normative questions, especially when we consider controversial strategies like civil disobedience and alliances with industry. To be on the side of science does not mean to be outside society and politics. I show the motivation, principles, scripts and pilots of the science games, reflecting on how they can be used and for what reasons…(More)”
Updating purpose limitation for AI: a normative approach from law and philosophy
Paper by Rainer Mühlhoff and Hannah Ruschemeier: “The purpose limitation principle goes beyond the protection of individual data subjects: it aims to ensure transparency and fairness, while allowing exceptions for privileged purposes. However, in the current reality of powerful AI models, purpose limitation is often impossible to enforce and is thus structurally undermined. This paper addresses a critical regulatory gap in EU digital legislation: the risk of secondary use of trained models and anonymised training datasets. Anonymised training data, as well as AI models trained from this data, pose the threat of being freely reused in potentially harmful contexts such as insurance risk scoring and automated job applicant screening. We propose shifting the focus of purpose limitation from data processing to AI model regulation. This approach mandates that those training AI models define the intended purpose and restrict the use of the model solely to this stated purpose…(More)”.
Rebooting the global consensus: Norm entrepreneurship, data governance and the inalienability of digital bodies
Paper by Siddharth Peter de Souza and Linnet Taylor: “The establishment of norms among states is a common way of governing international actions. This article analyses the potential of norm-building for governing data and artificial intelligence technologies’ collective effects. Rather than focusing on state actors’ ability to establish and enforce norms, however, we identify a contrasting process taking place among civil society organisations in response to the international neoliberal consensus on the commodification of data. The norm we identify – ‘nothing about us without us’ – asserts civil society’s agency, and specifically the right of those represented in datasets to give or refuse permission through structures of democratic representation. We argue that this represents a form of norm-building that should be taken as seriously as that of states, and analyse how it is constructing the political power, relations, and resources to engage in governing technology at scale. We first outline how this counter-norming is anchored in data’s connections to bodies, land, community, and labour. We explore the history of formal international norm-making and the current norm-making work being done by civil society organisations internationally, and argue that these, although very different in their configurations and strategies, are comparable in scale and scope. Based on this, we make two assertions: first, that a norm-making lens is a useful way for both civil society and research to frame challenges to the primacy of market logics in law and governance, and second, that the conceptual exclusion of civil society actors as norm-makers is an obstacle to the recognition of counter-power in those spheres…(More)”.
Mapping local knowledge supports science and stewardship
Paper by Sarah C. Risley, Melissa L. Britsch, Joshua S. Stoll & Heather M. Leslie: “Coastal marine social–ecological systems are experiencing rapid change. Yet, many coastal communities are challenged by incomplete data to inform collaborative research and stewardship. We investigated the role of participatory mapping of local knowledge in addressing these challenges. We used participatory mapping and semi-structured interviews to document local knowledge in two focal social–ecological systems in Maine, USA. By co-producing fine-scale characterizations of coastal marine social–ecological systems, highlighting local questions and needs, and generating locally relevant hypotheses on system change, our research demonstrates how participatory mapping and local knowledge can enhance decision-making capacity in collaborative research and stewardship. The results of this study directly informed a collaborative research project to document changes in multiple shellfish species, shellfish predators, and shellfish harvester behavior and other human activities. This research demonstrates that local knowledge can be a keystone component of collaborative social–ecological systems research and community-led environmental stewardship…(More)”.
Make privacy policies longer and appoint LLM readers
Paper by Przemysław Pałka et al: “In a world of human-only readers, a trade-off persists between comprehensiveness and comprehensibility: only privacy policies too long to be humanly readable can precisely describe the intended data processing. We argue that this trade-off no longer exists where LLMs are able to extract tailored information from clearly drafted, fully comprehensive privacy policies. To substantiate this claim, we provide a methodology for drafting comprehensive, non-ambiguous privacy policies and for querying them using LLM prompts. Our methodology is tested with an experiment aimed at determining to what extent GPT-4 and Llama2 are able to answer questions regarding the content of privacy policies designed in the format we propose. We further support this claim by analyzing real privacy policies in selected market sectors through two experiments (one with legal experts, and another by using LLMs). Based on the success of our experiments, we submit that data protection law should change: it must require controllers to provide clearly drafted, fully comprehensive privacy policies from which data subjects and other actors can extract the needed information, with the help of LLMs…(More)”.
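To make the querying step tangible, here is a minimal sketch of how a data subject (or an agent acting for them) might ask a targeted question against the full text of a comprehensive policy, using the OpenAI Python SDK. The model name, system prompt, and function are our own illustrative assumptions, not the paper’s exact prompting protocol.

```python
# A minimal sketch, assuming the policy fits in the model's context window:
# the LLM is instructed to answer strictly from the supplied policy text.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def query_policy(policy_text: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer strictly from the privacy policy below. "
                        "If the policy does not address the question, say so.\n\n"
                        + policy_text},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# e.g. query_policy(policy, "Is my location data shared with advertisers?")
```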
Inquiry as Infrastructure: Defining Good Questions in the Age of Data and AI
Paper by Stefaan Verhulst: “The most consequential failures in data-driven policymaking and AI deployment often stem not from poor models or inadequate datasets but from poorly framed questions. This paper centers question literacy as a critical yet underdeveloped competency in the data and policy landscape. Arguing for a “new science of questions,” it explores what constitutes a good question: one that is not only technically feasible but also ethically grounded, socially legitimate, and aligned with real-world needs. Drawing on insights from The GovLab’s 100 Questions Initiative, the paper develops a taxonomy of question types (descriptive, diagnostic, predictive, and prescriptive) and identifies five essential criteria for question quality: questions must be general yet concrete, co-designed with affected communities and domain experts, purpose-driven and ethically sound, grounded in data and technical realities, and capable of evolving through iterative refinement. The paper also outlines common pathologies of bad questions, such as vague formulation, biased framing, and solution-first thinking. Rather than treating questions as incidental to analysis, it argues for institutionalizing deliberate question design through tools like Q-Labs, question maturity models, and new professional roles for data stewards. Ultimately, the paper contends that questions are infrastructures of meaning. What we ask shapes not only what data we collect or what models we build but also what values we uphold and what futures we make possible…(More)”.
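As a concrete illustration, the taxonomy and the five quality criteria lend themselves to a simple review checklist. The sketch below is our own rendering in Python, not tooling from the paper or the 100 Questions Initiative; the “maturity” score is an assumed stand-in for the question maturity models the abstract mentions.

```python
# An illustrative checklist encoding the question taxonomy and the five
# quality criteria; names and scoring are assumptions for demonstration.
from dataclasses import dataclass

QUESTION_TYPES = ("descriptive", "diagnostic", "predictive", "prescriptive")

CRITERIA = (
    "general yet concrete",
    "co-designed with affected communities and domain experts",
    "purpose-driven and ethically sound",
    "grounded in data and technical realities",
    "capable of evolving through iterative refinement",
)

@dataclass
class CandidateQuestion:
    text: str
    qtype: str
    met: set  # criteria judged satisfied during review

    def maturity(self) -> float:
        # fraction of the five criteria this question currently satisfies
        assert self.qtype in QUESTION_TYPES, f"unknown type: {self.qtype}"
        return len(self.met & set(CRITERIA)) / len(CRITERIA)

q = CandidateQuestion(
    text="Which neighbourhoods lack access to cooling during heat waves?",
    qtype="descriptive",
    met={"general yet concrete", "purpose-driven and ethically sound"},
)
print(f"maturity score: {q.maturity():.1f}")  # 0.4
```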
