Towards a Holistic EU Data Governance


SITRA Publication: “The European Union’s ambitious data strategy aims to establish the EU as a leader in a data-driven society by creating a single market for data while fully respecting European policies on privacy, data protection, and competition law. To achieve the strategy’s bold aims, Europe needs more practical business cases where data flows across organisations.

Reliable data sharing requires new technical, governance and business solutions. Data spaces address these needs by providing soft infrastructure to enable trusted and easy data flows across organisational boundaries.

Striking the right balance between regulation and innovation will be critical to creating a supportive environment for data-sharing business cases to flourish. In this working paper, we take an in-depth look at the governance issues surrounding data sharing and data spaces.

Data sharing requires trust. Trust can be facilitated by effective governance, meaning the rules for data sharing. These rules come from different arenas. The European Commission is establishing new regulations related to data, and member states also have their laws and authorities that oversee data-sharing activities. Ultimately, data spaces need local rules to enable interoperability and foster trust between participants. The governance framework for data spaces is called a rulebook, which codifies legal, business, technical, and ethical rules for data sharing.

Extensive discussions and interviews with experts reveal confusion in the field. People developing data sharing in practice, or otherwise involved in data governance issues, struggle to know who does what and who decides what. Data spaces also struggle to create internal governance structures in line with the regulatory environment. The interviews conducted for this study indicate that coordination at the member state level could play a decisive role in aligning the EU-level strategy with concrete local data space initiatives.

The root cause of many of the pain points we identify is the problem of gaps, duplication and overlap of roles among the different actors at all levels. To address these challenges and cultivate effective governance, a holistic data governance framework is proposed. This framework combines the existing approach of rulebooks with a new tool called the rolebook, which serves as a register of the roles and bodies involved in data sharing. The rolebook aims to increase clarity and empower stakeholders at all levels to understand current data governance structures.

In conclusion, effective governance is crucial for the success of the EU data strategy and the development of data spaces. By implementing the proposed holistic data governance framework, the EU can promote trust, balanced regulation and innovation, and support the growth of data spaces across sectors…(More)”.

The emergence of non-personal data markets


Report by the Think Tank of the European Parliament: “The European Commission’s Data Strategy aims to create a single market for data, open to data from across the world, where personal and non-personal data, including sensitive business data, are secure. The EU Regulation on the free flow of non-personal data allows non-personal data to be stored and processed anywhere in the EU without unjustified restrictions, with limited exceptions based on grounds of public security. The creation of multiple common sector-specific European data spaces aims to ensure Europe’s global competitiveness and data sovereignty. The Data Act proposed by the Commission aims to remove barriers to data access for both consumers and businesses and to establish common rules to govern the sharing of data generated using connected products or related services.

The aim of the study is to provide an in-depth, comprehensive, and issue-specific analysis of the emergence of non-personal data markets in Europe. The study seeks to identify the potential value of the non-personal data market, potential challenges and solutions, and the legislative/policy measures necessary to facilitate the further development of non-personal data markets. The study also ranks the main non-personal data markets by size and growth rate and provides a sector-specific analysis for the mobility and transport, energy, and manufacturing sectors…(More)”.

Generative AI, Jobs, and Policy Response


Paper by the Global Partnership on AI: “Generative AI and the Future of Work remains notably absent from the global AI governance dialogue. Given the transformative potential of this technology in the workplace, this oversight suggests a significant gap, especially considering the substantial implications this technology has for workers, economies and society at large. As interest grows in the effects of Generative AI on occupations, debates centre around roles being replaced or enhanced by technology. Yet there is an incognita, the “Big Unknown”: a significant number of workers whose future depends on decisions yet to be made.
In this brief, recent articles about the topic are surveyed with special attention to the “Big Unknown”. It is not a marginal number: nearly 9% of the workforce, or 281 million workers worldwide, are in this category. Unlike previous AI developments which focused on automating narrow tasks, Generative AI models possess the scope, versatility, and economic viability to impact jobs across multiple industries and at varying skill levels. Their ability to produce human-like outputs in areas like language, content creation and customer interaction, combined with rapid advancement and low deployment costs, suggests potential near-term impacts that are much broader and more abrupt than prior waves of AI. Governments, companies, and social partners should aim to minimize any potential negative effects from Generative AI technology in the world of work, as well as harness potential opportunities to support productivity growth and decent work. This brief presents concrete policy recommendations at the global and local levels. These insights are intended to guide the discourse towards a balanced and fair integration of Generative AI in our professional landscape. To navigate this uncertain landscape and ensure that the benefits of Generative AI are equitably distributed, we recommend 10 policy actions that could serve as a starting point for discussion and implementation…(More)”.

Technology Foresight for Public Funding of Innovation: Methods and Best Practices


JRC Paper: “In times of growing uncertainties and complexities, anticipatory thinking is essential for policymakers. Technology foresight explores the longer-term futures of Science, Technology and Innovation. It can be used as a tool to create effective policy responses, including in technology and innovation policies, and to shape technological change. In this report we present six anticipatory and technology foresight methods that can contribute to anticipatory intelligence in terms of public funding of innovation: the Delphi survey, genius forecasting, technology roadmapping, large language models used in foresight, horizon scanning and scenario planning. Each chapter provides a brief overview of the method with case studies and recommendations. The insights from this report show that only by combining different anticipatory viewpoints and approaches to spotting, understanding and shaping emergent technologies, can public funders such as the European Innovation Council improve their proactive approaches to supporting ground-breaking technologies. In this way, they will help innovation ecosystems to develop…(More)”.

Open: A Pan-ideological Panacea, a Free Floating Signifier


Paper by Andrea Liu: “Open” is a word that originated from FOSS (the Free and Open Source Software movement) to mean a Commons-based, non-proprietary form of computer software development (Linux, Apache) based on a decentralized, poly-hierarchical, distributed labor model. But the word “open” has now acquired an unnerving over-elasticity, a word that means so many things that at times it appears meaningless. This essay is a rhetorical analysis (if not a deconstruction) of how the term “open” functions in digital culture, the promiscuity (if not gratuitousness) with which the term “open” is utilized in the wider society, and the sometimes blatantly contradictory ideologies indiscriminately lumped together under this word…(More)”

Data Sandboxes: Managing the Open Data Spectrum


Primer by Uma Kalkar, Sampriti Saxena, and Stefaan Verhulst: “Opening up data offers opportunities to enhance governance, elevate public and private services, empower individuals, and bolster public well-being. However, achieving the delicate balance between open data access and the responsible use of sensitive and valuable information presents complex challenges. Data sandboxes are an emerging approach to balancing these needs.

In this white paper, The GovLab seeks to answer the following questions surrounding data sandboxes: What are data sandboxes? How can data sandboxes empower decision-makers to unlock the potential of open data while maintaining the necessary safeguards for data privacy and security? Can data sandboxes help decision-makers overcome barriers to data access and promote purposeful, informed data (re-)use?

The six characteristics of a data sandbox. Image by The GovLab.

After evaluating a series of case studies, we identified the following key findings:

  • Data sandboxes present six unique characteristics that make them a strong tool for facilitating open data and data re-use. These six characteristics are: controlled; secure; multi-sectoral and collaborative; high-computing environments; temporal in nature; and adaptable and scalable.
  • Data sandboxes can be used for: pre-engagement assessment, data mesh enablement, rapid prototyping, familiarization, quality and privacy assurance, experimentation and ideation, white labeling and minimization, and maturing data insights.
  • There are many benefits to implementing data sandboxes. We identified ten value propositions, including decreasing the risk of accessing more sensitive data, enhancing data capacity, and fostering greater experimentation and innovation.
  • When looking to implement a data sandbox, decision-makers should consider how they will attract and obtain high-quality, relevant data, keep the data fresh for accurate re-use, manage risks of data (re-)use, and translate and scale up sandbox solutions in real markets.
  • Advances in the use of the Internet of Things and Privacy Enhancing Technologies could help improve the creation, preparation, analysis, and security of data in a data sandbox. The development of these technologies, in parallel with European legislative measures such as the Digital Markets Act, the Data Act and the Data Governance Act, can improve the way data is unlocked in a data sandbox, improving trust and encouraging data (re-)use initiatives…(More)” (FULL PRIMER)

Seven routes to experimentation in policymaking: a guide to applied behavioural science methods


OECD Resource: “…offers guidelines and a visual roadmap to help policymakers choose the most fit-for-purpose evidence collection method for their specific policy challenge.

Source: Elaboration of the authors: Varazzani, C., Emmerling, T., Brusoni, S., Fontanesi, L., and Tuomaila, H. (2023), “Seven routes to experimentation: A guide to applied behavioural science methods,” OECD Working Papers on Public Governance, OECD Publishing, Paris. Note: The authors elaborated the map based on a previous map ideated, researched, and designed by Laura Castro Soto, Judith Wagner, and Torben Emmerling (sevenroutes.com).

The seven applied behavioural science methods:

  • Randomised Controlled Trials (RCTs) are experiments that can demonstrate a causal relationship between an intervention and an outcome, by randomly assigning individuals to an intervention group and a control group.
  • A/B testing tests two or more manipulations (such as variants of a webpage) to assess which performs better in terms of a specific goal or metric.
  • Difference-in-Difference is an experimental method that estimates the causal effect of an intervention by comparing changes in outcomes between an intervention group and a control group before and after the intervention.
  • Before-After studies assess the impact of an intervention or event by comparing outcomes or measurements before and after its occurrence, without a control group.
  • Longitudinal studies collect data from the same individuals or groups over an extended period to assess trends over time.
  • Correlational studies help to investigate the relationship between two or more variables to determine if they vary together (without implying causation).
  • Qualitative studies explore the underlying meanings and nuances of a phenomenon through interviews, focus group sessions, or other exploratory methods based on conversations and observations…(More)”.
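The contrast between the Before-After and Difference-in-Differences designs above can be made concrete with a small numeric sketch. The outcome means below are invented for illustration only; they are not drawn from the OECD paper.

```python
# Hypothetical mean outcomes (e.g. a policy-relevant survey score)
# before and after an intervention, for illustration only.
treated_before, treated_after = 10.0, 16.0   # intervention group
control_before, control_after = 10.0, 12.0   # control group

# Before-After study: compares the intervention group only with itself,
# so any background trend is folded into the estimated "effect".
before_after_effect = treated_after - treated_before

# Difference-in-Differences: subtracts the control group's change over the
# same period, isolating the intervention's effect under the assumption
# that both groups would otherwise have followed parallel trends.
did_effect = (treated_after - treated_before) - (control_after - control_before)

print(before_after_effect)  # 6.0
print(did_effect)           # 4.0
```

Here the Before-After estimate (6.0) overstates the effect because the control group also improved by 2.0 without the intervention; Difference-in-Differences nets that trend out, leaving 4.0.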

Disaster preparedness: Will a “norm nudge” sink or swim?


Article by Jantsje Mol: “In these times of unprecedented climate change, one critical question persists: how do we motivate homeowners to protect their homes and loved ones from the ever-looming threat of flooding? This question led to a captivating behavioral science study, born from a research visit to the Wharton Risk Management and Decision Processes Center in 2019 (currently the Wharton Climate Center). Co-founded and co-directed by the late Howard Kunreuther, the Center has been at the forefront of understanding and mitigating the impact of natural disasters. In this study, we explored the potential of social norms to boost flood preparedness among homeowners. While the results may not align with initial expectations, they shed light on the complexities of human behavior, the significance of meticulous testing, and the enduring legacy of a visionary scholar.

The Power of Social Norms

Before we delve into the results, let’s take a moment to understand what social norms are and why they matter. Social norms dictate what is considered acceptable or expected in a given community. A popular behavioral intervention based on social norms is a norm-nudge: reading information about what others do (say, the energy-saving behavior of neighbors or the tax compliance rates of fellow citizens) may shift one’s own behavior closer to the norm. Norm-nudges are cheap, easy to implement and less prone to political resistance than traditional interventions such as taxes, but they might be ineffective or even backfire. Norm-nudges have been applied to health, finance and the environment, but not yet to the context of natural disaster risk-reduction…(More)”.

International Definitions of Artificial Intelligence


Report by IAPP: “Computer scientist John McCarthy coined the term artificial intelligence in 1955, defining it as “the science and engineering of making intelligent machines.” He organized the Dartmouth Summer Research Project on Artificial Intelligence a year later — an event that many consider the birthplace of the field.

In today’s world, the definition of AI has been in continuous evolution, its contours and constraints changing to align with current and perhaps future technological progress and cultural contexts. In fact, most papers and articles are quick to point out the lack of common consensus around the definition of AI. As a resource from British research organization the Ada Lovelace Institute states, “We recognise that the terminology in this area is contested. This is a fast-moving topic, and we expect that terminology will evolve quickly.” The difficulty in defining AI is illustrated by what AI historian Pamela McCorduck called the “odd paradox,” referring to the idea that, as computer scientists find new and innovative solutions, computational techniques once considered AI lose the title as they become common and repetitive.

The indeterminate nature of the term poses particular challenges in the regulatory space. Indeed, in 2017 a New York City Council task force downgraded its mission to regulate the city’s use of automated decision-making systems to just defining the types of systems subject to regulation because it could not agree on a workable, legal definition of AI.

With this understanding, the following chart provides a snapshot of some of the definitions of AI from various global and sectoral (government, civil society and industry) perspectives. The chart is not an exhaustive list. It allows for cross-contextual comparisons from key players in the AI ecosystem…(More)”

Governing the Digital Future


Report by the New America Foundation: “…The first part of this analysis was focused on five issue areas in digital technology that are driving conflict, human rights violations, and socioeconomic displacement: (1) AI and algorithmic decision-making, (2) digital access and divides, (3) data protection and data sovereignty, (4) digital identity and surveillance, and (5) transnational cybercrime...

From our dialogues, consultations, and analysis, a fundamental conclusion emerged: An over-concentration of power and severe power asymmetries are causing conflict, harm, and governance dysfunction in the digital domain. Whereas the internet began as a distributed enterprise that connected and empowered individuals worldwide, extreme concentrations of political, economic, and social power now characterize the digital domain. Power imbalances are especially acute between developing and wealthy nations, as a handful of rich-world tech companies and nation-states control the terms and trajectory of digitization…

On a more practical level, a few takeaways and first principles stood out as in need of urgent attention:

  1. We have a critical opportunity to get ahead of possible harms that will stem from AI; science and citizen-centric fora like the Pugwash Conferences on Science and World Affairs offer a model means of refocusing the digital governance ecosystem beyond the myopic logic of national sovereignty.
  2. Amid digital divides and increasing government control over the internet, multilateral and multi-stakeholder agencies should invest in fail-safes (alternative or redundant means of access) that can shift the stewardship of connectivity away from concentrated power centers.
  3. Regional standards that respect diverse local circumstances can help generate global cooperation on challenges such as cybercrime.
  4. To reduce global conflict in digital surveillance, democracies should practice what they preach and ban commercial spyware outright.
  5. Redistributing the value from big data can diminish corporate power and empower individuals…(More)”