Facilitating Data Flows through Data Collaboratives


A Practical Guide to Designing Valuable, Accessible, and Responsible Data Collaboratives, by Uma Kalkar, Natalia González Alarcón, Arturo Muente Kunigami and Stefaan Verhulst: “Data is an indispensable asset in today’s society, but its production and sharing are subject to well-known market failures. Among these: neither economic nor academic markets efficiently reward costly data collection and quality assurance efforts; data providers cannot easily supervise the appropriate use of their data; and, correspondingly, users have weak incentives to pay for, acknowledge, and protect data that they receive from providers. Data collaboratives are a potential non-market solution to this problem, bringing together data providers and users to address these market failures. The governance frameworks for these collaboratives are varied and complex and their details are not widely known. This guide proposes a methodology and a set of common elements that facilitate experimentation and creation of collaborative environments. It offers guidance to governments on implementing effective data collaboratives as a means to promote data flows in Latin America and the Caribbean, harnessing their potential to design more effective services and improve public policies…(More)”.

Artificial Intelligence and the Labor Force


Report by Tobias Sytsma and Éder M. Sousa: “The rapid development of artificial intelligence (AI) has the potential to revolutionize the labor force with new generative AI tools that are projected to contribute trillions of dollars to the global economy by 2040. However, this opportunity comes with concerns about the impact of AI on workers and labor markets. As AI technology continues to evolve, there is a growing need for research to understand the technology’s implications for workers, firms, and markets. This report addresses this pressing need by exploring the relationship between occupational exposure to AI-related technologies, wages, and employment.

Using natural language processing (NLP) to identify semantic similarities between job task descriptions and U.S. technology patents awarded between 1976 and 2020, the authors evaluate occupational exposure to all technology patents in the United States, as well as to specific AI technologies, including machine learning, NLP, speech recognition, planning control, AI hardware, computer vision, and evolutionary computation.
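The excerpt describes the authors’ NLP approach only at a high level. As a rough illustration of the underlying idea, the sketch below scores a job task description against patent texts by cosine similarity over bag-of-words vectors — a deliberate simplification of whatever pipeline the report actually uses, with all texts and names below being hypothetical stand-ins:

```python
import math
from collections import Counter

def vectorize(text):
    """Lowercase bag-of-words term counts for a short text."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two Counter term vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical inputs: one occupational task description and two patent abstracts,
# one per AI technology category named in the report.
task = "analyze customer speech and transcribe spoken requests into text"
patents = {
    "speech_recognition": "method for transcribing spoken audio into text using acoustic models",
    "planning_control": "apparatus for scheduling robotic arm movements on an assembly line",
}

# Higher scores indicate greater semantic overlap between the task and a patent.
scores = {name: cosine_similarity(vectorize(task), vectorize(text))
          for name, text in patents.items()}
```

In this toy example the task scores higher against the speech-recognition patent than the robotics one; at scale, production methods would typically use richer representations (e.g. TF-IDF weighting or embeddings) rather than raw term counts.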

The authors’ findings suggest that exposure to both general technology and AI technology patents is not uniform across occupational groups, over time, or across technology categories. They estimate that up to 15 percent of U.S. workers were highly exposed to AI technology patents by 2019 and find that the correlation between technology exposure and employment growth can depend on the routineness of the occupation. This report contributes to the growing literature on the labor market implications of AI and provides insights that can inform policy discussions around this emerging issue…(More)”

How Americans View Data Privacy


Pew Research: “…Americans – particularly Republicans – have grown more concerned about how the government uses their data. The share who say they are worried about government use of people’s data has increased from 64% in 2019 to 71% today. That reflects rising concern among Republicans (from 63% to 77%), while Democrats’ concern has held steady. (Each group includes those who lean toward the respective party.)

The public increasingly says they don’t understand what companies are doing with their data. Some 67% say they understand little to nothing about what companies are doing with their personal data, up from 59%.

Most believe they have little to no control over what companies or the government do with their data. While these shares have ticked down compared with 2019, vast majorities feel this way about data collected by companies (73%) and the government (79%).

We’ve studied Americans’ views on data privacy for years. The topic remains in the national spotlight today, and it’s particularly relevant given the policy debates ranging from regulating AI to protecting kids on social media. But these are far from abstract concepts. They play out in the day-to-day lives of Americans in the passwords they choose, the privacy policies they agree to and the tactics they take – or not – to secure their personal information. We surveyed 5,101 U.S. adults using Pew Research Center’s American Trends Panel to give voice to people’s views and experiences on these topics.

In addition to the key findings covered on this page, the three chapters of this report provide more detail on these topics…(More)”.

Towards a Considered Use of AI Technologies in Government 


Report by the Institute on Governance and Think Digital: “… undertook a case study-based research project, where 24 examples of AI technology projects and governance frameworks across a dozen jurisdictions were scanned. The purpose of this report is to provide policymakers and practitioners in government with an overview of controversial deployments of Artificial Intelligence (AI) technologies in the public sector, and to highlight some of the approaches being taken to govern the responsible use of these technologies in government. 

Two environmental scans make up the majority of the report. The first scan presents relevant use cases of public sector applications of AI technologies and automation, with special attention given to controversial projects and program/policy failures. The second scan surveys existing governance frameworks employed by international organizations and governments around the world. Each scan is then analyzed to determine common themes across use cases and governance frameworks respectively. The final section of the report provides risk considerations related to the use of AI by public sector institutions across use cases…(More)”.

FickleFormulas: The Political Economy of Macroeconomic Measurement


About: “Statistics about economic activities are critical to governance. Measurements of growth, unemployment and inflation rates, public debts – they all tell us ‘how our economies are doing’ and inform policy. Citizens punish politicians who fail to deliver on them.

FickleFormulas has integrated two research projects at the University of Amsterdam that ran from 2014 to 2020. Its researchers have studied the origins of the formulas behind these indicators: why do we measure our economies the way we do? After all, it is far from self-evident how to define and measure economic indicators. Our choices have deeply distributional consequences, producing winners and losers, and they shape our future, for example when GDP figures hide the cost of environmental destruction.

Criticisms of particular measures are hardly new. GDP in particular has been denounced as a deeply deficient measure of production at best and a fundamentally misleading guidepost for human development at worst. But measures of inflation, balances of payments and trade, unemployment, productivity and public debt likewise hide unsolved, perhaps insoluble, problems. In FickleFormulas we have asked: which social, political and economic factors shape the formulas used to calculate macroeconomic indicators?

In our quest for answers we have mobilized scholarship and expertise scattered across academic disciplines – a wealth of knowledge brought together for example here. We have reconstructed expert-deliberations of past decades, but mostly we wanted to learn from those who actually design macroeconomic indicators: statisticians at national statistical offices or organizations such as the OECD, the UN, the IMF, or the World Bank. For us, understanding macroeconomic indicators has been impossible without talking to the people who live and breathe them….(More)”.

Towards a Holistic EU Data Governance


SITRA Publication: “The European Union’s ambitious data strategy aims to establish the EU as a leader in a data-driven society by creating a single market for data while fully respecting European policies on privacy, data protection, and competition law. To achieve the strategy’s bold aims, Europe needs more practical business cases where data flows across organisations.

Reliable data sharing requires new technical, governance and business solutions. Data spaces address these needs by providing soft infrastructure to enable trusted and easy data flows across organisational boundaries.

Striking the right balance between regulation and innovation will be critical to creating a supportive environment for data-sharing business cases to flourish. In this working paper, we take an in-depth look at the governance issues surrounding data sharing and data spaces.

Data sharing requires trust. Trust can be facilitated by effective governance, meaning the rules for data sharing. These rules come from different arenas. The European Commission is establishing new regulations related to data, and member states also have their laws and authorities that oversee data-sharing activities. Ultimately, data spaces need local rules to enable interoperability and foster trust between participants. The governance framework for data spaces is called a rulebook, which codifies legal, business, technical, and ethical rules for data sharing.

The extensive discussions and interviews with experts reveal confusion in the field. People developing data sharing in practice or otherwise involved in data governance issues struggle to know who does what and who decides what. Data spaces also struggle to create internal governance structures in line with the regulatory environment. The interviews conducted for this study indicate that coordination at the member state level could play a decisive role in aligning the EU-level strategy with concrete local data space initiatives.

The root cause of many of the pain points we identify is the problem of gaps, duplication and overlapping of roles between the different actors at all levels. To address these challenges and cultivate effective governance, a holistic data governance framework is proposed. This framework combines the existing approach of rulebooks with a new tool called the rolebook, which serves as a register of roles and bodies involved in data sharing. The rolebook aims to increase clarity and empower stakeholders at all levels to understand the current data governance structures.

In conclusion, effective governance is crucial for the success of the EU data strategy and the development of data spaces. By implementing the proposed holistic data governance framework, the EU can promote trust, balanced regulation and innovation, and support the growth of data spaces across sectors…(More)”.

The emergence of non-personal data markets


Report by the Think Tank of the European Parliament: “The European Commission’s Data Strategy aims to create a single market for data, open to data from across the world, where personal and non-personal data, including sensitive business data, are secure. The EU Regulation on the free flow of non-personal data allows non-personal data to be stored and processed anywhere in the EU without unjustified restrictions, with limited exceptions based on grounds of public security. The creation of multiple common sector-specific European data spaces aims to ensure Europe’s global competitiveness and data sovereignty. The Data Act proposed by the Commission aims to remove barriers to data access for both consumers and businesses and to establish common rules to govern the sharing of data generated using connected products or related services.

The aim of the study is to provide an in-depth, comprehensive, and issue-specific analysis of the emergence of non-personal data markets in Europe. The study seeks to identify the potential value of the non-personal data market, potential challenges and solutions, and the legislative/policy measures necessary to facilitate the further development of non-personal data markets. The study also ranks the main non-personal data markets by size and growth rate and provides a sector-specific analysis for the mobility and transport, energy, and manufacturing sectors…(More)”.

Generative AI, Jobs, and Policy Response


Paper by the Global Partnership on AI: “Generative AI and the Future of Work remains notably absent from the global AI governance dialogue. Given the transformative potential of this technology in the workplace, this oversight suggests a significant gap, especially considering the substantial implications this technology has for workers, economies and society at large. As interest grows in the effects of Generative AI on occupations, debates centre around roles being replaced or enhanced by technology. Yet there is an incognita, the “Big Unknown”: an important number of workers whose future depends on decisions yet to be made.
In this brief, recent articles about the topic are surveyed with special attention to the “Big Unknown”. It is not a marginal number: nearly 9% of the workforce, or 281 million workers worldwide, are in this category. Unlike previous AI developments, which focused on automating narrow tasks, Generative AI models possess the scope, versatility, and economic viability to impact jobs across multiple industries and at varying skill levels. Their ability to produce human-like outputs in areas like language, content creation and customer interaction, combined with rapid advancement and low deployment costs, suggests potential near-term impacts that are much broader and more abrupt than prior waves of AI. Governments, companies, and social partners should aim to minimize any potential negative effects of Generative AI technology in the world of work, as well as harness potential opportunities to support productivity growth and decent work. This brief presents concrete policy recommendations at the global and local levels. These insights are aimed at guiding the discourse towards a balanced and fair integration of Generative AI in our professional landscape. To navigate this uncertain landscape and ensure that the benefits of Generative AI are equitably distributed, we recommend 10 policy actions that could serve as a starting point for discussion and implementation…(More)”.

Technology Foresight for Public Funding of Innovation: Methods and Best Practices


JRC Paper: “In times of growing uncertainties and complexities, anticipatory thinking is essential for policymakers. Technology foresight explores the longer-term futures of Science, Technology and Innovation. It can be used as a tool to create effective policy responses, including in technology and innovation policies, and to shape technological change. In this report we present six anticipatory and technology foresight methods that can contribute to anticipatory intelligence in terms of public funding of innovation: the Delphi survey, genius forecasting, technology roadmapping, large language models used in foresight, horizon scanning and scenario planning. Each chapter provides a brief overview of the method with case studies and recommendations. The insights from this report show that only by combining different anticipatory viewpoints and approaches to spotting, understanding and shaping emergent technologies, can public funders such as the European Innovation Council improve their proactive approaches to supporting ground-breaking technologies. In this way, they will help innovation ecosystems to develop…(More)”.

Open: A Pan-ideological Panacea, a Free Floating Signifier


Paper by Andrea Liu: “Open” is a word that originated from FOSS (the Free and Open Source Software movement) to mean a Commons-based, non-proprietary form of computer software development (Linux, Apache) based on a decentralized, poly-hierarchical, distributed labor model. But the word “open” has now acquired an unnerving over-elasticity, a word that means so many things that at times it appears meaningless. This essay is a rhetorical analysis (if not a deconstruction) of how the term “open” functions in digital culture, the promiscuity (if not gratuitousness) with which the term “open” is utilized in the wider society, and the sometimes blatantly contradictory ideologies indiscriminately lumped together under this word…(More)”