Contractual Freedom and Fairness in EU Data Sharing Agreements


Paper by Thomas Margoni and Alain M. Strowel: “This chapter analyzes the evolving landscape of EU data-sharing agreements, particularly focusing on the balance between contractual freedom and fairness in the context of non-personal data. The discussion highlights the complexities introduced by recent EU legislation, such as the Data Act, Data Governance Act, and Open Data Directive, which collectively aim to regulate data markets and enhance data sharing. The chapter emphasizes how these laws impose obligations that limit contractual freedom to ensure fairness, particularly in business-to-business (B2B) and Internet of Things (IoT) data transactions. It also explores the tension between private ordering and public governance, suggesting that the EU’s approach marks a shift from property-based models to governance-based models in data regulation. This chapter underscores the significant impact these regulations will have on data contracts and the broader EU data economy…(More)”.

AI can help humans find common ground in democratic deliberation


Paper by Michael Henry Tessler et al: “We asked whether an AI system based on large language models (LLMs) could successfully capture the underlying shared perspectives of a group of human discussants by writing a “group statement” that the discussants would collectively endorse. Inspired by Jürgen Habermas’s theory of communicative action, we designed the “Habermas Machine” to iteratively generate group statements that were based on the personal opinions and critiques from individual users, with the goal of maximizing group approval ratings. Through successive rounds of human data collection, we used supervised fine-tuning and reward modeling to progressively enhance the Habermas Machine’s ability to capture shared perspectives. To evaluate the efficacy of AI-mediated deliberation, we conducted a series of experiments with over 5000 participants from the United Kingdom. These experiments investigated the impact of AI mediation on finding common ground, how the views of discussants changed across the process, the balance between minority and majority perspectives in group statements, and potential biases present in those statements. Lastly, we used the Habermas Machine for a virtual citizens’ assembly, assessing its ability to support deliberation on controversial issues within a demographically representative sample of UK residents…(More)”.
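The iterative mediation procedure the abstract describes (draft candidate group statements from individual opinions, rank them by predicted group approval, then revise in light of participants' critiques) can be sketched roughly as follows. This is a toy stand-in, not the paper's implementation: the function names are invented for illustration, and the real system uses fine-tuned LLMs and a learned reward model where this sketch uses deterministic stubs.

```python
# Illustrative sketch of an AI-mediated deliberation loop in the style of
# the "Habermas Machine". All function bodies are stand-in stubs.

def draft_statements(opinions, critiques, n_candidates=4):
    """Stub generator: a real system would prompt an LLM with the opinions
    and any prior critiques to produce candidate group statements."""
    return [f"Candidate {i}: synthesis of {len(opinions)} opinions, "
            f"{len(critiques)} critiques" for i in range(n_candidates)]

def predicted_approval(statement):
    """Stub reward model: a real system would predict how likely the group
    is to endorse the statement. Here, a deterministic toy score."""
    return (sum(map(ord, statement)) % 100) / 100

def mediate(opinions, collect_critiques, rounds=2):
    """Iteratively draft, rank, and revise a group statement."""
    critiques, best = [], None
    for _ in range(rounds):
        candidates = draft_statements(opinions, critiques)
        best = max(candidates, key=predicted_approval)
        critiques = collect_critiques(best)  # e.g. one per participant
    return best
```

In the paper, successive rounds of human data collection serve both roles at once: participants' ratings train the reward model, and their written critiques feed the next round of drafting.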

A shared destiny for public sector data


Blog post by Shona Nicol: “As a data professional, it can sometimes feel hard to get others interested in data. Perhaps like many in this profession, I can often express the importance and value of data for good in an overly technical way. However, when our biggest challenges in Scotland include eradicating child poverty, growing the economy and tackling the climate emergency, I would argue that we should all take an interest in data because it’s going to be foundational in helping us solve these problems.

Data is already intrinsic to shaping our society and how services are delivered. And public sector data is a vital component in making sure that services for the people of Scotland are being delivered efficiently and effectively. Despite an ever-growing awareness of the transformative power of data to improve the design and delivery of services, feedback from public sector staff shows that they can face difficulties when trying to influence colleagues and senior leaders around the need to invest in data.

A vision gap

In the Scottish Government’s data maturity programme and more widely, we regularly hear about the challenges data professionals encounter when trying to enact change. This community tells us that a long-term vision for public sector data for Scotland could help them by providing the context for what they are trying to achieve locally.

Earlier this year we started to scope how we might do this. We recognised that organisations are already working to deliver local and national strategies and policies that relate to data, so any vision had to be able to sit alongside those, be meaningful in different settings, agnostic of technology and relevant to any public sector organisation. We wanted to offer opportunities for alignment, not enforce an instruction manual…(More)”.

Emerging technologies in the humanitarian sector


Report and project by Rand: “Emerging technologies have often been explored in the humanitarian sector through small-scale pilot projects, testing their application in a specific context with limited opportunities to replicate the testing across various contexts. The level of familiarity and knowledge of technological development varies across the specific types of humanitarian activities undertaken and technology areas considered.

The study team identified five promising technology areas for the humanitarian sector that could be further explored out to 2030:

  • Advanced manufacturing systems are likely to offer humanitarians opportunities to produce resources and tools in an operating environment characterised by scarcity, the rise of simultaneous crises, and exposure to more intense and severe climate events.
  • Early warning systems are likely to support preparedness and response efforts across the humanitarian sector as multifactorial crises arise.
  • Camp monitoring systems are likely to support efforts not only to address security risks but also to support the planning and management of sites and the health and wellbeing of displaced populations.
  • Coordination platforms are likely to enhance data collection and information-sharing across various humanitarian stakeholders for the development of timely and bespoke crisis response.
  • Privacy-enhancing technologies (PETs) can support ongoing efforts to comply with increased data privacy and data protection requirements in a humanitarian operating environment in which data collection will remain necessary.

Beyond these five technology areas, the study team also considered three innovation journey opportunities:

  • The establishment of a technology horizon scanning coalition
  • Visioning for emerging technologies in crisis recovery
  • An emerging technology narrative initiative.

To accompany the deployment of specific technologies in the humanitarian sector, the study team also developed a four-step approach aimed at identifying specific guidance needs for end-users and humanitarian practitioners…(More)”.

External Researcher Access to Closed Foundation Models


Report by Esme Harrington and Dr. Mathias Vermeulen: “…addresses a pressing issue: independent researchers need better conditions for accessing and studying the AI models that big companies have developed. Foundation models — the core technology behind many AI applications — are controlled mainly by a few major players who decide who can study or use them.

What’s the problem with access?

  • Limited access: Companies like OpenAI, Google and others are the gatekeepers. They often restrict access to researchers whose work aligns with their priorities, which means independent, public-interest research can be left out in the cold.
  • High costs: Even when access is granted, it often comes with a hefty price tag that smaller or less-funded teams can’t afford.
  • Lack of transparency: These companies don’t always share how their models are updated or moderated, making it nearly impossible for researchers to replicate studies or fully understand the technology.
  • Legal risks: When researchers try to scrutinize these models, they sometimes face legal threats if their work uncovers flaws or vulnerabilities in the AI systems.

The research suggests that companies need to offer more affordable and transparent access to improve AI research. Additionally, governments should provide legal protections for researchers, especially when they are acting in the public interest by investigating potential risks…(More)”.

It is about time! Exploring the clashing timeframes of politics and public policy experiments


Paper by Ringa Raudla, Külli Sarapuu, Johanna Vallistu, and Nastassia Harbuzova: “Although existing studies on experimental policymaking have acknowledged the importance of the political setting in which policy experiments take place, we lack systematic knowledge on how various political dimensions affect experimental policymaking. In this article, we address a specific gap in the existing understanding of the politics of experimentation: how political timeframes influence experimental policymaking. Drawing on theoretical discussions on experimental policymaking, public policy, electoral politics, and mediatization of politics, we outline expectations about how electoral and problem cycles may influence the timing, design, and learning from policy experiments. We argue that electoral timeframes are likely to discourage politicians from undertaking large-scale policy experiments and that, if politicians do decide to launch experiments, they will prefer shorter designs. The electoral cycle may lead politicians to draw overly hasty conclusions or to ignore an experiment’s results altogether. We expect problem cycles to shorten politicians’ time horizons further, as there is pressure to solve problems quickly. We probe the plausibility of our theoretical expectations using interview data from two different country contexts: Estonia and Finland…(More)”.

Behavioural science: could supermarket loyalty cards nudge us to make healthier choices?


Article by Magda Osman: “Ken Murphy, CEO of the British multinational supermarket chain Tesco, recently said at a conference that Tesco “could use Clubcard data to nudge customers towards healthier choices”.

So how would this work, and do we want it? Our recent study, published in the Scientific Journal of Research and Reviews, provides an answer.

Loyalty schemes have been around since at least the 1980s, when airlines introduced their frequent flyer programmes.

Advancements in loyalty schemes have been huge, with some even using gamified approaches, such as leaderboards, trophies and treasure hunts, to keep us engaged. The loyalty principle relies on a form of social exchange, namely reciprocity.

The ongoing reciprocal relationship means that we use a good or service regularly because we trust the service provider, we are satisfied with the service, and we deem the rewards we get as reasonable – be they discounts, vouchers or gifts.

In exchange, we accept that, in many cases, loyalty schemes collect data on us. Our purchasing history, often tied to our demographics, generates improvements in the delivery of the service.

If we accept this, then we continue to benefit from reward schemes, such as promotional offers or other discounts. Their effectiveness depends not only on making attractive offers for things we are already interested in purchasing, but also on discounting items we hadn’t considered buying…(More)”

Ensuring citizens’ assemblies land


Article by Graham Smith: “…the evidence shows that while the recommendations of assemblies are well considered and could help shape more robust policy, too often they fail to land. Why is this?

The simple answer is that so much time, so many resources and so much energy are spent on organising the assembly itself – ensuring the best possible experience for citizens – that the relationship with the local authority and its decision-making processes is neglected.

First, the question asked of the assembly does not always relate to a specific set of decisions about to be made by an authority. Is the relevant policy process open and ready for input? On a number of occasions assemblies have taken place just after a new policy or strategy has been agreed. Disastrous timing.

This does not mean assemblies should only be run when they are tied to a particular decision-making process. Sometimes it is important to open up a policy area with a broad question. And sometimes it makes sense to empower citizens to set the agenda and focus on the issues they find most compelling.

The second element is the failure of authorities to prepare to receive recommendations from citizens.

In one case, the first a public official knew about an assembly was when its recommendations landed on their desk. They were not received in the best spirit.

Too often assemblies are commissioned by enthusiastic politicians and public officials who have not done the necessary work to ensure their colleagues are willing to give a considered response to the citizens’ recommendations. Too often an assembly will be organised by a department or ministry whose recommendations require others in the authority to respond – but those other politicians and officials feel no connection to the process.

And too often, an assembly ends, and it is not clear who within the public authority has the responsibility to take the recommendations forward to ensure they are given a fair hearing across the authority.

Making citizens’ assemblies effective requires political and administrative work well beyond organising the assembly itself. If this work is not done, it is not only a waste of resources: it can do serious damage to democracy and trust, as the citizens who have invested their time and energy in the process become disillusioned.

The authorities where citizens’ assemblies have had meaningful impacts are those that have invested not only in the assembly, but also in preparing the authority to receive the recommendations. Often this has meant continuing support and resourcing for assembly members after the process. They are the best advocates for their work…(More)”

Asserting the public interest in health data: On the ethics of data governance for biobanks and insurers


Paper by Kathryne Metcalf and Jathan Sadowski: “Recent reporting has revealed that the UK Biobank (UKB)—a large, publicly-funded research database containing highly-sensitive health records of over half a million participants—has shared its data with private insurance companies seeking to develop actuarial AI systems for analyzing risk and predicting health. While news reports have characterized this as a significant breach of public trust, the UKB contends that insurance research is “in the public interest,” and that all research participants are adequately protected from the possibility of insurance discrimination via data de-identification. Here, we contest both of these claims. Insurers use population data to identify novel categories of risk, which become fodder in the production of black-boxed actuarial algorithms. The deployment of these algorithms, as we argue, has the potential to increase inequality in health and decrease access to insurance. Importantly, these types of harms are not limited just to UKB participants: instead, they are likely to proliferate unevenly across various populations within global insurance markets via practices of profiling and sorting based on the synthesis of multiple data sources, alongside advances in data analysis capabilities, over space/time. This necessitates a significantly expanded understanding of the publics who must be involved in biobank governance and data-sharing decisions involving insurers…(More)”.

Harnessing digital footprint data for population health: a discussion on collaboration, challenges and opportunities in the UK


Paper by Romana Burgess et al: “Digital footprint data are inspiring a new era in population health and well-being research. Linking these novel data with other datasets is critical for future research wishing to use these data for the public good. In order to succeed, collaboration among industry, academia and policy-makers is vital. Therefore, we discuss the benefits and obstacles for these stakeholder groups in using digital footprint data for research in the UK. We advocate for policy-makers’ inclusion in research efforts, stress the exceptional potential of digital footprint research to impact policy-making and explore the role of industry as data providers, with a focus on shared value, commercial sensitivity, resource requirements and streamlined processes. We underscore the importance of multidisciplinary approaches, consumer trust and ethical considerations in navigating methodological challenges and further call for increased public engagement to enhance societal acceptability. Finally, we discuss how to overcome methodological challenges, such as reproducibility and sharing of learnings, in future collaborations. By adopting a multiperspective approach to outlining the challenges of working with digital footprint data, our contribution helps to ensure that future research can navigate these challenges effectively while remaining reproducible, ethical and impactful…(More)”