How we think about protecting data


Article by Peter Dizikes: “How should personal data be protected? What are the best uses of it? In our networked world, questions about data privacy are ubiquitous and matter for companies, policymakers, and the public.

A new study by MIT researchers adds depth to the subject by suggesting that people’s views about privacy are not firmly fixed and can shift significantly, based on different circumstances and different uses of data.

“There is no absolute value in privacy,” says Fabio Duarte, principal research scientist in MIT’s Senseable City Lab and co-author of a new paper outlining the results. “Depending on the application, people might feel use of their data is more or less invasive.”

The study is based on an experiment the researchers conducted in multiple countries using a newly developed game that elicits public valuations of data privacy relating to different topics and domains of life.

“We show that values attributed to data are combinatorial, situational, transactional, and contextual,” the researchers write.

The open-access paper, “Data Slots: tradeoffs between privacy concerns and benefits of data-driven solutions,” is published today in Nature: Humanities and Social Sciences Communications. The authors are Martina Mazzarello, a postdoc in the Senseable City Lab; Duarte; Simone Mora, a research scientist at Senseable City Lab; Cate Heine PhD ’24 of University College London; and Carlo Ratti, director of the Senseable City Lab.

The study is based around a card game with poker-type chips the researchers created to study the issue, called Data Slots. In it, players hold hands of cards with 12 types of data — such as a personal profile, health data, vehicle location information, and more — that relate to three types of domains where data are collected: home life, work, and public spaces. After exchanging cards, the players generate ideas for data uses, then assess and invest in some of those concepts. The game has been played in-person in 18 different countries, with people from another 74 countries playing it online; over 2,000 individual player-rounds were included in the study…(More)”.
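
To make the instrument’s structure easier to picture, here is a rough Python sketch of a deck built from data types crossed with domains, plus one simulated player-round. Everything beyond the three data types and three domains named above is an assumed placeholder, and the exchange and investment step is only loosely modelled; this is not the researchers’ actual game or scoring.

```python
import random
from dataclasses import dataclass

# Illustrative stand-ins: the article names three of the twelve data types;
# the remaining nine are generic placeholders here, not the real cards.
DATA_TYPES = ["personal profile", "health data", "vehicle location"] + [
    f"data type {i}" for i in range(4, 13)
]
DOMAINS = ["home life", "work", "public spaces"]

@dataclass(frozen=True)
class Card:
    data_type: str
    domain: str

def deal_hands(num_players: int, hand_size: int = 3) -> list[list[Card]]:
    """Deal random hands from the full data-type x domain deck (36 cards)."""
    deck = [Card(dt, dom) for dt in DATA_TYPES for dom in DOMAINS]
    random.shuffle(deck)
    return [deck[i * hand_size:(i + 1) * hand_size] for i in range(num_players)]

# One toy "player-round": two players swap a card, then each stakes chips on
# an idea built from their hand; the study's actual mechanics are richer.
hands = deal_hands(num_players=4)
hands[0][0], hands[1][0] = hands[1][0], hands[0][0]        # card exchange
chips_invested = {f"player_{i}": random.randint(0, 5) for i in range(4)}
print(chips_invested)
```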

Activating citizens: the contribution of the Capability Approach to critical citizenship studies and to understanding the enablers of engaged citizenship


Paper by Anna Colom and Agnes Czajka: “The paper argues that the Capability Approach can make a significant contribution to understanding the enablers of engaged citizenship. Using insights from critical citizenship studies and original empirical research on young people’s civic and political involvement in western Kenya, we argue that it is useful to think of the process of engaged citizenship as comprised of two distinct yet interrelated parts: activation and performance. We suggest that the Capability Approach (CA) can help us understand what resources and processes are needed for people to not only become activated but to also effectively perform their citizenship. Although the CA is rarely brought into conversation with critical citizenship studies literatures, we argue that it can be useful in both operationalising the insights of critical citizenship studies on citizenship engagement and illustrating how activation and performance can be effectively supported or catalysed….(More)”

The Right to AI


Paper by Rashid Mushkani, Hugo Berard, Allison Cohen, Shin Koeski: “This paper proposes a Right to AI, which asserts that individuals and communities should meaningfully participate in the development and governance of the AI systems that shape their lives. Motivated by the increasing deployment of AI in critical domains and inspired by Henri Lefebvre’s concept of the Right to the City, we reconceptualize AI as a societal infrastructure, rather than merely a product of expert design. In this paper, we critically evaluate how generative agents, large-scale data extraction, and diverse cultural values bring new complexities to AI oversight. The paper proposes that grassroots participatory methodologies can mitigate biased outcomes and enhance social responsiveness. It asserts that data is socially produced and should be managed and owned collectively. Drawing on Sherry Arnstein’s Ladder of Citizen Participation and analyzing nine case studies, the paper develops a four-tier model for the Right to AI that situates the current paradigm and envisions an aspirational future. It proposes recommendations for inclusive data ownership, transparent design processes, and stakeholder-driven oversight. We also discuss market-led and state-centric alternatives and argue that participatory approaches offer a better balance between technical efficiency and democratic legitimacy…(More)”.

Societal and technological progress as sewing an ever-growing, ever-changing, patchy, and polychrome quilt


Paper by Joel Z. Leibo et al: “Artificial Intelligence (AI) systems are increasingly placed in positions where their decisions have real consequences, e.g., moderating online spaces, conducting research, and advising on policy. Ensuring they operate in a safe and ethically acceptable fashion is thus critical. However, most solutions have been a form of one-size-fits-all “alignment”. We are worried that such systems, which overlook enduring moral diversity, will spark resistance, erode trust, and destabilize our institutions. This paper traces the underlying problem to an often-unstated Axiom of Rational Convergence: the idea that under ideal conditions, rational agents will converge in the limit of conversation on a single ethics. Treating that premise as both optional and doubtful, we propose what we call the appropriateness framework: an alternative approach grounded in conflict theory, cultural evolution, multi-agent systems, and institutional economics. The appropriateness framework treats persistent disagreement as the normal case and designs for it by applying four principles: (1) contextual grounding, (2) community customization, (3) continual adaptation, and (4) polycentric governance. We argue here that adopting these design principles is a good way to shift the main alignment metaphor from moral unification to a more productive metaphor of conflict management, and that taking this step is both desirable and urgent…(More)”.

Co-Designing AI Systems with Value-Sensitive Citizen Science


Paper by Sachit Mahajan and Dirk Helbing: “As artificial intelligence (AI) systems increasingly shape everyday life, integrating diverse community values into their development becomes both an ethical imperative and a practical necessity. This paper introduces Value Sensitive Citizen Science (VSCS), a systematic framework combining Value Sensitive Design (VSD) principles with citizen science methods to foster meaningful public participation in AI. Addressing critical gaps in existing approaches, VSCS integrates culturally grounded participatory methods and structured cognitive scaffolding through the Participatory Value-Cognition Taxonomy (PVCT). Through iterative value-sensitive participation cycles guided by an extended scenario logic (What-if, If-then, Then-what, What-now), community members act as genuine co-researchers: identifying, translating, and operationalizing local values into concrete technical requirements. The framework also institutionalizes governance structures for ongoing oversight, adaptability, and accountability across the AI lifecycle. By explicitly bridging participatory design with algorithmic accountability, VSCS ensures that AI systems reflect evolving community priorities rather than reinforcing top-down or monocultural perspectives. Critical discussions highlight VSCS’s practical implications, addressing challenges such as power dynamics, scalability, and epistemic justice. The paper concludes by outlining actionable strategies for policymakers and practitioners, alongside future research directions aimed at advancing participatory, value-driven AI development across diverse technical and sociocultural contexts…(More)”.
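
As a loose illustration of the extended scenario logic, the snippet below traces a single community-raised value through the four prompts and collapses it into a candidate requirement. The field names and example strings are invented for illustration and are not part of the published framework.

```python
from dataclasses import dataclass

# The four prompts of the extended scenario logic named in the paper.
SCENARIO_STAGES = ("What-if", "If-then", "Then-what", "What-now")

@dataclass
class ValueTrace:
    value: str                    # a locally identified value
    responses: dict[str, str]     # one community response per stage

    def to_requirement(self) -> str:
        """Collapse the cycle into a candidate technical requirement (toy step)."""
        return f"Requirement derived from '{self.value}': {self.responses['What-now']}"

# Hypothetical example of one participation cycle.
trace = ValueTrace(
    value="neighbourhood privacy",
    responses={
        "What-if": "sensors counted footfall without storing images",
        "If-then": "then raw footage never leaves the device",
        "Then-what": "residents could audit the published counts",
        "What-now": "adopt on-device aggregation with a public audit interface",
    },
)
assert set(trace.responses) == set(SCENARIO_STAGES)  # every stage answered
print(trace.to_requirement())
```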


The RRI Citizen Review Panel: a public engagement method for supporting responsible territorial policymaking


Paper by Maya Vestergaard Bidstrup et al: “Responsible Territorial Policymaking incorporates the main principles of Responsible Research and Innovation (RRI) into the policymaking process, making it well-suited for guiding the development of sustainable and resilient territorial policies that prioritise societal needs. As a cornerstone of RRI, public engagement plays a central role in this process, underscoring the importance of involving all societal actors to align outcomes with the needs, expectations, and values of society. In the absence of existing methods to sufficiently and effectively gather citizens’ reviews of multiple policies at a territorial level, the RRI Citizen Review Panel is a new public engagement method developed to facilitate citizens’ review and validation of territorial policies. By using RRI as an analytical framework, this paper examines whether the RRI Citizen Review Panel can support Responsible Territorial Policymaking, not only by incorporating citizens’ perspectives into territorial policymaking, but also by making policies more responsible. The paper demonstrates that in the review of territorial policies, citizens are adding elements of RRI to a wide range of policies within different policy areas, contributing to making policies more responsible. Consequently, the RRI Citizen Review Panel emerges as a valuable tool for policymakers, enabling them to gather citizen perspectives and imbue policies with a heightened sense of responsibility…(More)”.

Playing for science: Designing science games


Paper by Claudio M Radaelli: “How can science have more impact on policy decisions? The P-Cube Project has approached this question by creating five pedagogical computer games based on missions given to a policy entrepreneur (the player) advocating for science-informed policy decisions. The player explores simplified strategies for policy change rooted in a small number of variables, thus making it possible to learn without a prior background in political science or public administration. The games evolved from the intuition that, instead of making additional efforts to explain science to decision-makers, we should directly empower would-be scientists (our primary audience for the games), post-graduates in public policy and administration, and activists for science. The two design principles of the games revolve around learning about how policy decisions are made (a learning-about-content principle) and reflection. Indeed, the presence of science in the policy process raises ethical and normative questions, especially when we consider controversial strategies like civil disobedience and alliances with industry. To be on the side of science does not mean to be outside society and politics. I show the motivation, principles, scripts and pilots of the science games, reflecting on how they can be used and for what reasons…(More)”

Updating purpose limitation for AI: a normative approach from law and philosophy 


Paper by Rainer Mühlhoff and Hannah Ruschemeier: “The purpose limitation principle goes beyond the protection of the individual data subjects: it aims to ensure transparency, fairness and its exception for privileged purposes. However, in the current reality of powerful AI models, purpose limitation is often impossible to enforce and is thus structurally undermined. This paper addresses a critical regulatory gap in EU digital legislation: the risk of secondary use of trained models and anonymised training datasets. Anonymised training data, as well as AI models trained from this data, pose the threat of being freely reused in potentially harmful contexts such as insurance risk scoring and automated job applicant screening. We propose shifting the focus of purpose limitation from data processing to AI model regulation. This approach mandates that those training AI models define the intended purpose and restrict the use of the model solely to this stated purpose…(More)”.
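
As a minimal sketch of what shifting purpose limitation to the model level might look like in code, the wrapper below binds a trained model to a declared purpose and refuses uses that state a different one. The class, field names, and purpose strings are hypothetical illustrations of the idea, not a mechanism specified by the authors or by EU legislation, where the binding would be legal rather than technical.

```python
from dataclasses import dataclass, field

@dataclass
class PurposeBoundModel:
    """Toy wrapper: the trained model carries its declared purpose as metadata,
    and every use must state a purpose matching that declaration."""
    model: object                 # the trained model artifact
    declared_purpose: str         # e.g. "medical research" (hypothetical label)
    audit_log: list[str] = field(default_factory=list)

    def predict(self, inputs, stated_purpose: str):
        self.audit_log.append(stated_purpose)
        if stated_purpose != self.declared_purpose:
            # Secondary uses such as insurance risk scoring or applicant
            # screening are refused outright.
            raise PermissionError(
                f"model is bound to '{self.declared_purpose}', "
                f"not '{stated_purpose}'"
            )
        return self.model.predict(inputs)   # delegate to the underlying model
```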

Rebooting the global consensus: Norm entrepreneurship, data governance and the inalienability of digital bodies


Paper by Siddharth Peter de Souza and Linnet Taylor: “The establishment of norms among states is a common way of governing international actions. This article analyses the potential of norm-building for governing data and artificial intelligence technologies’ collective effects. Rather than focusing on state actors’ ability to establish and enforce norms, however, we identify a contrasting process taking place among civil society organisations in response to the international neoliberal consensus on the commodification of data. The norm we identify – ‘nothing about us without us’ – asserts civil society’s agency, and specifically the right of those represented in datasets to give or refuse permission through structures of democratic representation. We argue that this represents a form of norm-building that should be taken as seriously as that of states, and analyse how it is constructing the political power, relations, and resources to engage in governing technology at scale. We first outline how this counter-norming is anchored in data’s connections to bodies, land, community, and labour. We explore the history of formal international norm-making and the current norm-making work being done by civil society organisations internationally, and argue that these, although very different in their configurations and strategies, are comparable in scale and scope. Based on this, we make two assertions: first, that a norm-making lens is a useful way for both civil society and research to frame challenges to the primacy of market logics in law and governance, and second, that the conceptual exclusion of civil society actors as norm-makers is an obstacle to the recognition of counter-power in those spheres…(More)”.

Mapping local knowledge supports science and stewardship


Paper by Sarah C. Risley, Melissa L. Britsch, Joshua S. Stoll & Heather M. Leslie: “Coastal marine social–ecological systems are experiencing rapid change. Yet, many coastal communities are challenged by incomplete data to inform collaborative research and stewardship. We investigated the role of participatory mapping of local knowledge in addressing these challenges. We used participatory mapping and semi-structured interviews to document local knowledge in two focal social–ecological systems in Maine, USA. By co-producing fine-scale characterizations of coastal marine social–ecological systems, highlighting local questions and needs, and generating locally relevant hypotheses on system change, our research demonstrates how participatory mapping and local knowledge can enhance decision-making capacity in collaborative research and stewardship. The results of this study directly informed a collaborative research project to document changes in multiple shellfish species, shellfish predators, and shellfish harvester behavior and other human activities. This research demonstrates that local knowledge can be a keystone component of collaborative social–ecological systems research and community-led environmental stewardship…(More)”.
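
As a generic illustration only (not the authors’ workflow), participatory mapping outputs are often aggregated by counting how many participants marked each part of the study area; the short sketch below does this with hypothetical polygons using the shapely library.

```python
from shapely.geometry import box

# Hypothetical participant-drawn areas (e.g. reported shellfish beds),
# each as a simple rectangle in local coordinates.
participant_areas = [
    box(0, 0, 2, 2),   # participant A
    box(1, 1, 3, 3),   # participant B
    box(1, 0, 2, 3),   # participant C
]

# A coarse 3 x 3 analysis grid over the study area.
grid_cells = [box(x, y, x + 1, y + 1) for x in range(3) for y in range(3)]

# Count, per grid cell, how many participants mapped that cell: a simple
# "local knowledge density" layer that can flag areas for follow-up fieldwork.
density = [sum(area.intersects(cell) for area in participant_areas) for cell in grid_cells]
print(density)
```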