Situating Digital Self-Determination (DSD): A Comparison with Existing and Emerging Digital and Data Governance Approaches


Paper by Sara Marcucci and Stefaan Verhulst: “In today’s increasingly complex digital landscape, traditional data governance models, such as consent-based, ownership-based, and sovereignty-based approaches, are proving insufficient to address the evolving ethical, social, and political dimensions of data use. These frameworks, often grounded in static and individualistic notions of control, struggle to keep pace with the fluidity and relational nature of contemporary data ecosystems. This paper proposes Digital Self-Determination (DSD) as a complementary and necessary evolution of existing models, offering a more participatory, adaptive, and ethically grounded approach to data governance. Centering ongoing agency, collective participation, and contextual responsiveness, DSD builds on foundational principles of consent and control while addressing their limitations. Drawing on comparisons with a range of governance models, including risk-based, compliance-oriented, principles-driven, and justice-centered frameworks, this paper highlights DSD’s unique contribution: its capacity to enable individuals and communities to actively shape how data about them is used, shared, and governed over time. In doing so, it reimagines data governance as a living, co-constructed practice grounded in trust, accountability, and care. Through this lens, the paper offers a framework for comparing different governance approaches and embedding DSD into existing paradigms, inviting policymakers and practitioners to consider how more inclusive and responsive forms of digital governance might be realized…(More)”.

AI Liability Along the Value Chain


Report by Beatriz Botero Arcila: “…explores how liability law can help solve the “problem of many hands” in AI: that is, determining who is responsible for harm arising in a value chain in which a variety of different companies and actors may contribute to the development of any given AI system. This is aggravated by the fact that AI systems are both opaque and technically complex, making their behavior hard to predict.

Why AI Liability Matters

To find meaningful solutions to this problem, different kinds of experts have to come together. This resource is designed for a wide audience, but we indicate how specific audiences can best make use of different sections, overviews, and case studies.

Specifically, the report:

  • Proposes a 3-step analysis to consider how liability should be allocated along the value chain: 1) the choice of liability regime, 2) how liability should be shared among actors along the value chain, and 3) whether and how information asymmetries will be addressed.
  • Argues that where ex-ante AI regulation is already in place, policymakers should consider how liability rules will interact with these rules.
  • Proposes a baseline liability regime where actors along the AI value chain share responsibility if fault can be demonstrated, paired with measures to alleviate or shift the burden of proof and to enable better access to evidence — which would incentivize companies to act with sufficient care and address information asymmetries between claimants and companies.
  • Argues that in some cases, courts and regulators should extend a stricter regime, such as product liability or strict liability.
  • Analyzes liability rules in the EU based on this framework…(More)”.

Digital Technologies and Participatory Governance in Local Settings: Comparing Digital Civic Engagement Initiatives During the COVID-19 Outbreak


Chapter by Nathalie Colasanti, Chiara Fantauzzi, Rocco Frondizi & Noemi Rossi: “Governance paradigms have undergone a deep transformation during the COVID-19 pandemic, necessitating agile, inclusive, and responsive mechanisms to address evolving challenges. Participatory governance has emerged as a guiding principle, emphasizing inclusive decision-making processes and collaboration among diverse stakeholders. In the outbreak context, digital technologies have played a crucial role in enabling participatory governance to flourish, democratizing participation, and facilitating the rapid dissemination of accurate information. These technologies have also empowered grassroots initiatives, such as civic hacking, to address societal challenges and mobilize communities for collective action. This study delves into the realm of bottom-up participatory initiatives at the local level, focusing on two emblematic cases of civic hacking experiences launched during the pandemic, the first in Wuhan, China, and the second in Italy. Through a comparative lens, drawing upon secondary sources, the aim is to analyze the dynamics, efficacy, and implications of these initiatives, shedding light on the evolving landscape of participatory governance in times of crisis. Findings underline the transformative potential of civic hacking and participatory governance in crisis response, highlighting the importance of collaboration, transparency, and inclusivity…(More)”.

Beyond data egoism: let’s embrace data altruism


Blog by Frank Hamerlinck: “When it comes to data sharing, there’s often a gap between ambition and reality. Many organizations recognize the potential of data collaboration, yet when it comes down to sharing their own data, hesitation kicks in. The concern? Costs, risks, and unclear returns. At the same time, there’s strong enthusiasm for accessing data.

This is the paradox we need to break. Because if data egoism rules, real innovation is out of reach, making the need for data altruism more urgent than ever.

…More and more leaders recognize that unlocking data is essential to staying competitive on a global scale, and they understand that we must do so while upholding our European values. However, the real challenge lies in translating this growing willingness into concrete action. Many acknowledge its importance in principle, but few are ready to take the first step. And that’s a challenge we need to address – not just as organizations but as a society…

To break down barriers and accelerate data-driven innovation, we’re launching the FTI Data Catalog – a step toward making data sharing easier, more transparent, and more impactful.

The catalog provides a structured, accessible overview of available datasets, from location data and financial data to well-being data. It allows organizations to discover, understand, and responsibly leverage data with ease. Whether you’re looking for insights to fuel innovation, enhance decision-making, drive new partnerships or unlock new value from your own data, the catalog is built to support open and secure data exchange.

By making data more accessible, we’re laying the foundation for a culture of collaboration. The road to data altruism is long, but it’s one worth walking. The future belongs to those who dare to share!…(More)”.

Paths to Social Licence for Tracking-data Analytics


Paper by Joshua P. White: “While tracking-data analytics can be a goldmine for institutions and companies, the inherent privacy concerns also form a legal, ethical and social minefield. We present a study that seeks to understand the extent and circumstances under which tracking-data analytics is undertaken with social licence — that is, with broad community acceptance beyond formal compliance with legal requirements. Taking a University campus environment as a case, we enquire about the social licence for Wi-Fi-based tracking-data analytics. Staff and student participants answered a questionnaire presenting hypothetical scenarios involving Wi-Fi tracking for university research and services. We model acceptability judgements with a Bayesian logistic mixed-effects regression, using participant ratings on 11 privacy dimensions as predictors. Results show widespread acceptance of tracking-data analytics on campus and suggest that trust, individual benefit, data sensitivity, risk of harm and institutional respect for privacy are the most predictive factors determining this acceptance judgement…(More)”.
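To make the model structure concrete, here is a minimal sketch of the kind of logistic mixed-effects specification the abstract describes: each participant contributes a random intercept, and each privacy dimension a fixed effect. All coefficient values, dimension names, and numbers below are illustrative assumptions, not the paper’s actual estimates, and only five of the eleven dimensions are shown.

```python
import math

# Hypothetical sketch: P(accept) = logistic(intercept + u_j + sum_d beta_d * rating_d),
# where u_j is participant j's random intercept and beta_d the fixed effect
# of privacy dimension d. Names and values are illustrative, not the paper's.

def acceptance_probability(ratings, betas, intercept=0.0, participant_effect=0.0):
    """Predicted probability that a scenario is judged acceptable."""
    linear = intercept + participant_effect
    linear += sum(betas[d] * ratings[d] for d in ratings)
    return 1.0 / (1.0 + math.exp(-linear))

# Example: a scenario rated high on trust and benefit, low on risk of harm.
betas = {"trust": 0.9, "individual_benefit": 0.7, "data_sensitivity": -0.4,
         "risk_of_harm": -0.8, "institutional_respect": 0.5}
ratings = {"trust": 1.0, "individual_benefit": 0.5, "data_sensitivity": -0.2,
           "risk_of_harm": -0.5, "institutional_respect": 0.8}
p = acceptance_probability(ratings, betas, intercept=-0.3, participant_effect=0.1)
# p is well above 0.5 here: favourable ratings push acceptance up.
```

In the actual Bayesian setting, the betas and random intercepts would be posterior distributions rather than point values; the sketch only shows how the predictors combine.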

The Measure of Progress: Counting What Really Matters


Book by Diane Coyle: “The ways that statisticians and governments measure the economy were developed in the 1940s, when the urgent economic problems were entirely different from those of today. In The Measure of Progress, Diane Coyle argues that the framework underpinning today’s economic statistics is so outdated that it functions as a distorting lens, or even a set of blinkers. When policymakers rely on such an antiquated conceptual tool, how can they measure, understand, and respond with any precision to what is happening in today’s digital economy? Coyle makes the case for a new framework, one that takes into consideration current economic realities.

Coyle explains why economic statistics matter. They are essential for guiding better economic policies; they involve questions of freedom, justice, life, and death. Governments use statistics that affect people’s lives in ways large and small. The metrics for economic growth were developed when a lack of physical rather than natural capital was the binding constraint on growth, intangible value was less important, and the pressing economic policy challenge was managing demand rather than supply. Today’s challenges are different. Growth in living standards in rich economies has slowed, despite remarkable innovation, particularly in digital technologies. As a result, politics is contentious and democracy strained.

Coyle argues that to understand the current economy, we need different data collected in a different framework of categories and definitions, and she offers some suggestions about what this would entail. Only with a new approach to measurement will we be able to achieve the right kind of growth for the benefit of all…(More)”.

Data Collaborations How-to-Guide


Resource by The Data for Children Collaborative: “… excited to share a consolidation of 5 years of learning in one handy how-to-guide. Our ethos is to openly share tools, approaches and frameworks that may benefit others working in the Data for Good space. We have developed this guide specifically to support organisations working on complex challenges that may have data-driven solutions. The How-to-Guide provides advice and examples of how to plan and execute collaboration on a data project effectively…  

Data collaboration can provide organisations with high-quality, evidence-based insights that drive policy and practice while bringing together diverse perspectives to solve problems. It also fosters innovation, builds networks for future collaboration, and ensures effective implementation of solutions on the ground…(More)”.

Harnessing Mission Governance to Achieve National Climate Targets


OECD Report: “To achieve ambitious climate targets under the Paris Agreement, countries need more than political will – they need effective governance. This report examines how a mission-oriented approach can transform climate action. Analysing 15 countries’ climate council assessments, the report reveals that, while many nations are incorporating elements of mission governance, significant gaps remain. It highlights promising examples of whole-of-government approaches, while identifying key challenges, such as limited societal engagement, weak co-ordination, and a lack of focus on experimentation and ecosystem mobilisation. The report argues that national climate commitments effectively function as overarching missions, and thus, can greatly benefit from applying mission governance principles. It recommends integrating missions into climate mitigation efforts, applying these principles to policy design and implementation, and deploying targeted missions to address specific climate challenges. By embracing a holistic, mission-driven strategy, countries can enhance their climate action and achieve their ambitious targets…(More)”.

How crawlers impact the operations of the Wikimedia projects


Article by the Wikimedia Foundation: “Since the beginning of 2024, the demand for the content created by the Wikimedia volunteer community – especially for the 144 million images, videos, and other files on Wikimedia Commons – has grown significantly. In this post, we’ll discuss the reasons for this trend and its impact.

The Wikimedia projects are the largest collection of open knowledge in the world. Our sites are an invaluable destination for humans searching for information, and for all kinds of businesses that access our content automatically as a core input to their products. Most notably, the content has been a critical component of search engine results, which in turn has brought users back to our sites. But with the rise of AI, the dynamic is changing: We are observing a significant increase in request volume, with most of this traffic being driven by scraping bots collecting training data for large language models (LLMs) and other use cases. Automated requests for our content have grown exponentially, alongside the broader technology economy, via mechanisms including scraping, APIs, and bulk downloads. This expansion happened largely without sufficient attribution, which is key to driving new users to participate in the movement, and is causing a significant load on the underlying infrastructure that keeps our sites available for everyone. 

When Jimmy Carter died in December 2024, his page on English Wikipedia saw more than 2.8 million views over the course of a day. This was relatively high, but manageable. At the same time, quite a few users played a 1.5-hour-long video of Carter’s 1980 presidential debate with Ronald Reagan. This caused a surge in network traffic, doubling its normal rate. As a consequence, for about one hour a small number of Wikimedia’s connections to the Internet filled up entirely, causing slow page load times for some users. The sudden traffic surge alerted our Site Reliability team, who were swiftly able to address this by changing the paths our internet connections go through to reduce the congestion. But still, this should not have caused any issues, as the Foundation is well equipped to handle high traffic spikes during exceptional events. So what happened?…

Since January 2024, we have seen the bandwidth used for downloading multimedia content grow by 50%. This increase is not coming from human readers, but largely from automated programs that scrape the Wikimedia Commons image catalog of openly licensed images to feed images to AI models. Our infrastructure is built to sustain sudden traffic spikes from humans during high-interest events, but the amount of traffic generated by scraper bots is unprecedented and presents growing risks and costs…(More)”.
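The distinction the article draws between human and bot bandwidth can be illustrated with a toy first-pass heuristic: classify requests by user-agent markers and measure what share of bytes served goes to likely bots. This is a simplified illustration under assumed data, not Wikimedia’s actual detection logic, and the user-agent patterns below are hypothetical.

```python
# Toy sketch: separate likely-bot requests from human ones by user-agent
# substring, then compute the bot share of bandwidth. Patterns and sample
# data are hypothetical, not Wikimedia's real classification pipeline.

BOT_MARKERS = ("bot", "crawler", "spider", "scrapy", "python-requests")

def is_likely_bot(user_agent: str) -> bool:
    """Crude heuristic: does the user-agent contain a known bot marker?"""
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)

def bot_bandwidth_share(requests):
    """Fraction of total bytes served that went to likely bots."""
    bot_bytes = sum(r["bytes"] for r in requests if is_likely_bot(r["ua"]))
    total_bytes = sum(r["bytes"] for r in requests)
    return bot_bytes / total_bytes if total_bytes else 0.0

sample = [
    {"ua": "Mozilla/5.0 (Windows NT 10.0) Firefox/124.0", "bytes": 1_000},
    {"ua": "MyTrainingCrawler/2.1 (+http://example.com)", "bytes": 40_000},
    {"ua": "python-requests/2.31", "bytes": 9_000},
]
share = bot_bandwidth_share(sample)  # bots dominate the bytes in this toy sample
```

Real traffic analysis is far harder, since many scrapers spoof browser user-agents; the sketch only makes the bandwidth-share framing concrete.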

Prosocial Media


Paper by Glen Weyl et al: “Social media empower distributed content creation by algorithmically harnessing “the social fabric” (explicit and implicit signals of association) to serve this content. While this overcomes the bottlenecks and biases of traditional gatekeepers, many believe it has unsustainably eroded the very social fabric it depends on by maximizing engagement for advertising revenue. This paper participates in open and ongoing considerations to translate social and political values and conventions, specifically social cohesion, into platform design. We propose an alternative platform model that includes the social fabric as an explicit output as well as an input. Citizens are members of communities defined by explicit affiliation or clusters of shared attitudes. Both have internal divisions, as citizens are members of intersecting communities, which are themselves internally diverse. Each is understood to value content that bridges (viz. achieves consensus across) and balances (viz. represents fairly) this internal diversity, consistent with the principles of the Hutchins Commission (1947). Content is labeled with social provenance, indicating for which community or citizen it is bridging or balancing. Subscription payments allow citizens and communities to increase the algorithmic weight on the content they value in the content serving algorithm. Advertisers may, with consent of citizen or community counterparties, target them in exchange for payment or increase in that party’s algorithmic weight. Underserved and emerging communities and citizens are optimally subsidized/supported to develop into paying participants. Content creators and communities that curate content are rewarded for their contributions with algorithmic weight and/or revenue. We discuss applications to productivity (e.g. LinkedIn), political (e.g. X), and cultural (e.g. TikTok) platforms…(More)”.
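The bridging/balancing/weight interaction described above can be sketched as a simple scoring rule: a bridging score rewards consensus across a community’s internal factions, a balancing score rewards fair representation of them, and the community’s algorithmic weight scales the blend. The specific formulas (minimum approval as a consensus proxy, exposure-share gap as a fairness proxy) are illustrative assumptions, not the paper’s actual mechanism.

```python
# Hypothetical sketch of a prosocial content-serving score. All formulas
# are stand-in proxies, not the paper's: bridging = minimum approval across
# factions; balancing = 1 minus the gap between exposure shares.

def bridging_score(faction_approvals):
    """Consensus proxy: content only bridges if every faction approves."""
    return min(faction_approvals)

def balancing_score(faction_exposure_shares):
    """Fairness proxy: penalize unequal exposure across factions."""
    return 1.0 - (max(faction_exposure_shares) - min(faction_exposure_shares))

def content_score(faction_approvals, faction_exposure_shares,
                  weight=1.0, mix=0.5):
    """Blend bridging and balancing, scaled by the community's algorithmic
    weight (which subscriptions or subsidies could raise)."""
    return weight * (mix * bridging_score(faction_approvals)
                     + (1.0 - mix) * balancing_score(faction_exposure_shares))

# Two factions: approvals of 0.7 and 0.6, exposure shares of 0.55 and 0.45,
# for a community whose algorithmic weight is 1.2.
score = content_score([0.7, 0.6], [0.55, 0.45], weight=1.2, mix=0.5)
```

A ranking algorithm under this model would serve the highest-scoring content per community, so raising a community’s weight lifts the content it values, which is the subscription mechanism the abstract describes.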