Mechanisms for Researcher Access to Online Platform Data


Status Report by the EU/USA: “Academic and civil society research on prominent online platforms has become a crucial way to understand the information environment and its impact on our societies. Scholars across the globe have leveraged application programming interfaces (APIs) and web crawlers to collect public user-generated content and advertising content on online platforms to study societal issues ranging from technology-facilitated gender-based violence, to the impact of media on mental health for children and youth. Yet, a changing landscape of platforms’ data access mechanisms and policies has created uncertainty and difficulty for critical research projects.


The United States and the European Union have a shared commitment to advance data access for researchers, in line with the high-level principles on access to data from online platforms for researchers announced at the EU-U.S. Trade and Technology Council (TTC) Ministerial Meeting in May 2023.1 Since the launch of the TTC, the EU Digital Services Act (DSA) has gone into effect, requiring providers of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to provide increased transparency into their services. The DSA includes provisions on transparency reports, terms and conditions, and explanations for content moderation decisions. Among those, two provisions provide important access to publicly available content on platforms:


• DSA Article 40.12 requires providers of VLOPs/VLOSEs to provide academic and civil society researchers with data that is “publicly accessible in their online interface.”
• DSA Article 39 requires providers of VLOPs/VLOSEs to maintain a public repository of advertisements.

The announcements related to new researcher access mechanisms mark an important development and opportunity to better understand the information environment. This status report summarizes a subset of mechanisms that are available to European and/or United States researchers today, following, in part, measures taken by VLOPs and VLOSEs to comply with the DSA. The report aims to showcase the existing access modalities and to encourage the use of these mechanisms to study the impact of online platforms’ design and decisions on society. The list of mechanisms reviewed is included in the Appendix…(More)”
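
As a concrete, if simplified, illustration of the kind of programmatic access these mechanisms offer, the sketch below queries a public advertisement repository of the sort DSA Article 39 requires. The endpoint, query parameters, and response fields are hypothetical placeholders for illustration only, not any platform's actual API.

```python
# Minimal sketch of querying a (hypothetical) public ad repository of the kind
# DSA Article 39 obliges VLOPs/VLOSEs to maintain. The endpoint, query
# parameters, and response fields are illustrative placeholders, not any
# platform's actual API.
import requests

AD_REPOSITORY_URL = "https://platform.example/api/ad-library"  # hypothetical endpoint


def fetch_public_ads(keyword: str, country: str = "DE", limit: int = 50) -> list[dict]:
    """Fetch publicly available ads matching a keyword (illustrative only)."""
    response = requests.get(
        AD_REPOSITORY_URL,
        params={"q": keyword, "country": country, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("ads", [])


if __name__ == "__main__":
    for ad in fetch_public_ads("climate"):
        # Field names are assumptions about what an ad repository might expose.
        print(ad.get("advertiser"), ad.get("first_shown"), ad.get("spend_range"))
```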

The Potential of Artificial Intelligence for the SDGs and Official Statistics


Report by Paris21: “Artificial Intelligence (AI) and its impact on people’s lives are growing rapidly. AI is already leading to significant developments from healthcare to education, which can contribute to the efficient monitoring and achievement of the Sustainable Development Goals (SDGs), a call to action to address the world’s greatest challenges. AI is also raising concerns because, if not addressed carefully, its risks may outweigh its benefits. As a result, AI is garnering increasing attention from National Statistical Offices (NSOs) and the official statistics community as they are challenged to produce more, comprehensive, timely, and high-quality data for decision-making with limited resources in a rapidly changing world of data and technologies and in light of complex and converging global issues from pandemics to climate change. This paper has been prepared as an input to the “Data and AI for Sustainable Development: Building a Smarter Future” Conference, organized in partnership with The Partnership in Statistics for Development in the 21st Century (PARIS21), the World Bank and the International Monetary Fund (IMF). Building on case studies that examine the use of AI by NSOs, the paper presents the benefits and risks of AI with a focus on NSO operations related to sustainable development. The objective is to spark discussions and to initiate a dialogue around how AI can be leveraged to inform decisions and take action to better monitor and achieve sustainable development, while mitigating its risks…(More)”.

Generative AI in Journalism


Report by Nicholas Diakopoulos et al: “The introduction of ChatGPT by OpenAI in late 2022 captured the imagination of the public—and the news industry—with the potential of generative AI to upend how people create and consume media. Generative AI is a type of artificial intelligence technology that can create new content, such as text, images, audio, video, or other media, based on the data it has been trained on and according to written prompts provided by users. ChatGPT is the chat-based user interface that made the power and potential of generative AI salient to a wide audience, reaching 100 million users within two months of its launch.

Although similar technology had been around, by late 2022 it was suddenly working, spurring its integration into various products and presenting not only a host of opportunities for productivity and new experiences but also some serious concerns about accuracy, provenance and attribution of source information, and the increased potential for creating misinformation.

This report serves as a snapshot of how the news industry has grappled with the initial promises and challenges of generative AI towards the end of 2023. The sample of participants reflects how some of the more savvy and experienced members of the profession are reacting to the technology.

Based on participants’ responses, the authors found that generative AI is already changing work structure and organization, even as it triggers ethical concerns around its use. Here are some key takeaways:

  • Applications in News Production. The most predominant current use cases for generative AI include various forms of textual content production, information gathering and sensemaking, multimedia content production, and business uses.
  • Changing Work Structure and Organization. There are a host of new roles emerging to grapple with the changes introduced by generative AI including for leadership, editorial, product, legal, and engineering positions.
  • Work Redesign. There is an unmet opportunity to design new interfaces to support journalistic work with generative AI, in particular to enable the human oversight needed for the efficient and confident checking and verification of outputs…(More)”
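
To make the last point about human oversight concrete, here is a minimal sketch of a newsroom workflow in which a model only drafts candidate headlines and nothing is marked publishable without an editor's decision. The `draft_headlines` stub stands in for whatever generative AI service a newsroom might call; it is not a specific vendor's API.

```python
# Minimal sketch of a human-in-the-loop workflow: the model only drafts
# candidate headlines, and nothing becomes publishable without an editor's
# explicit approval. draft_headlines() is a stub, not a real vendor API.
from dataclasses import dataclass


@dataclass
class Draft:
    text: str
    approved: bool = False


def draft_headlines(article_summary: str, n: int = 3) -> list[Draft]:
    """Stand-in for a generative AI call: in practice `prompt` would be sent to
    whichever model the newsroom uses and its output parsed into drafts."""
    prompt = f"Suggest {n} headlines for: {article_summary}"
    stub_output = [f"[model suggestion {i + 1} for: {prompt}]" for i in range(n)]
    return [Draft(text) for text in stub_output]


def editorial_review(drafts: list[Draft], approve_index: int | None) -> Draft | None:
    """Only a human decision (approve_index) can mark a draft as publishable."""
    if approve_index is None:
        return None
    drafts[approve_index].approved = True
    return drafts[approve_index]


if __name__ == "__main__":
    candidates = draft_headlines("City council passes new housing plan")
    chosen = editorial_review(candidates, approve_index=0)
    print(chosen)
```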

Regulatory experimentation: Moving ahead on the agile regulatory governance agenda


OECD Policy Paper: “This policy paper aims to help governments develop regulatory experimentation constructively and appropriately as part of their implementation of the 2021 OECD Recommendation for Agile Regulatory Governance to Harness Innovation. Regulatory experimentation can help promote adaptive learning and innovative and better-informed regulatory policies and practices. This policy paper examines key concepts, definitions and constitutive elements of regulatory experimentation. It outlines the rationale for using regulatory experimentation, discusses enabling factors and governance requirements, and presents a set of forward-looking conclusions…(More)”.

Data Authenticity, Consent, and Provenance for AI Are All Broken: What Will It Take to Fix Them?


Article by Shayne Longpre et al: “New AI capabilities are owed in large part to massive, widely sourced, and underdocumented training data collections. Dubious collection practices have spurred crises in data transparency, authenticity, consent, privacy, representation, bias, copyright infringement, and the overall development of ethical and trustworthy AI systems. In response, AI regulation is emphasizing the need for training data transparency to understand AI model limitations. Based on a large-scale analysis of the AI training data landscape and existing solutions, we identify the missing infrastructure to facilitate responsible AI development practices. We explain why existing tools for data authenticity, consent, and documentation alone are unable to solve the core problems facing the AI community, and outline how policymakers, developers, and data creators can facilitate responsible AI development, through universal data provenance standards…(More)”.
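
As one illustration of what machine-readable provenance could look like in practice, here is a minimal sketch of a per-source provenance record capturing origin, licensing, and consent signals. The field set is an assumption made for illustration, not the universal standard the article calls for.

```python
# Minimal sketch of a machine-readable provenance record for one training data
# source, capturing origin, licensing, and consent signals. The field set is an
# illustrative assumption, not the universal standard the article calls for.
import json
from dataclasses import asdict, dataclass, field


@dataclass
class ProvenanceRecord:
    source_url: str
    collection_date: str       # ISO 8601 date the data was collected
    license: str               # declared license of the source material
    creator_consent: str       # e.g. "granted", "denied", "unknown"
    robots_txt_allowed: bool   # whether automated collection was permitted
    modifications: list[str] = field(default_factory=list)  # cleaning/filtering applied


record = ProvenanceRecord(
    source_url="https://example.org/forum/thread/123",  # hypothetical source
    collection_date="2024-01-15",
    license="CC-BY-4.0",
    creator_consent="unknown",
    robots_txt_allowed=True,
    modifications=["deduplicated", "personal identifiers removed"],
)

print(json.dumps(asdict(record), indent=2))
```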

Sludge Toolkit


About: “Sludge audits are a way to identify, quantify and remove sludge (unnecessary frictions) from government services. Using the NSW Government sludge audit method, you can

  • understand where sludge is making your government service difficult to access
  • quantify the impact of sludge on the community
  • know where and how you can improve your service using behavioural science
  • measure the impact of your service improvements…(More)”.
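
As a back-of-the-envelope illustration of what quantifying sludge can involve, the sketch below estimates the total time a single friction costs service users each year. The formula and figures are illustrative assumptions, not the NSW Government's actual sludge audit calculation.

```python
# Back-of-the-envelope estimate of the time one unnecessary friction costs the
# community over a year. The formula and figures are illustrative assumptions,
# not the NSW Government sludge audit method itself.

def annual_sludge_hours(users_per_year: int, extra_minutes_per_user: float) -> float:
    """Total hours the community loses to a single friction in one year."""
    return users_per_year * extra_minutes_per_user / 60


# Hypothetical example: 200,000 applicants each spend an extra 12 minutes
# re-entering information the agency already holds; a redesign cuts that to 3.
before = annual_sludge_hours(users_per_year=200_000, extra_minutes_per_user=12)
after = annual_sludge_hours(users_per_year=200_000, extra_minutes_per_user=3)

print(f"Hours lost before redesign: {before:,.0f}")
print(f"Hours lost after redesign:  {after:,.0f}")
print(f"Hours saved by the improvement: {before - after:,.0f}")
```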

Creating an Integrated System of Data and Statistics on Household Income, Consumption, and Wealth: Time to Build


Report by the National Academies: “Many federal agencies provide data and statistics on inequality and related aspects of household income, consumption, and wealth (ICW). However, because the information provided by these agencies is often produced using different concepts, underlying data, and methods, the resulting estimates of poverty, inequality, mean and median household income, consumption, and wealth, as well as other statistics, do not always tell a consistent or easily interpretable story. Measures also differ in their accuracy, timeliness, and relevance so that it is difficult to address such questions as the effects of the Great Recession on household finances or of the Covid-19 pandemic and the ensuing relief efforts on household income and consumption. The presence of multiple, sometimes conflicting statistics at best muddies the waters of policy debates and, at worst, enables advocates with different policy perspectives to cherry-pick their preferred set of estimates. Achieving an integrated system of relevant, high-quality, and transparent household ICW data and statistics should go far to reduce disagreement about who has how much, and from what sources. Further, such data are essential to advance research on economic wellbeing and to ensure that policies are well targeted to achieve societal goals…(More)”.

Digital transformation of public services


Policy Brief by Interreg Europe: “In a world of digital advancements, the public sector must undergo a comprehensive digital transformation to enhance service delivery efficiency, improve governance, foster innovation and increase citizen satisfaction.

The European Union is playing a leading role and has been actively developing policy frameworks for the digitalisation of the public sector. This policy brief provides a general overview of the most relevant initiatives, regulations, and strategies of the European Union, which are shaping Europe’s digital future.

The European Union’s strategy for the digital transformation of public services is centred on enhancing accessibility, efficiency, and user-centricity. This strategy also promotes interoperability among Member States, fostering seamless cross-border interactions. Privacy and security measures are integral to building trust in digital public services, with a focus on data protection and cybersecurity. Ultimately, the goal is to create a cohesive, digitally advanced public service ecosystem throughout the EU, with the active participation of the private sector (GovTech).

This policy brief outlines key policy improvements, good practices and recommendations, stemming from the Interreg Europe projects BEST DIH, BETTER, ENAIBLER, Next2Met, Digital Regions, Digitourism, Inno Provement, ERUDITE, iBuy and Carpe Digem, to inform and guide policymakers to embark upon digital transformation processes successfully, as well as encouraging greater interregional cooperation…(More)”.

AI and the Future of Government: Unexpected Effects and Critical Challenges


Policy Brief by Tiago C. Peixoto, Otaviano Canuto, and Luke Jordan: “Based on observable facts, this policy paper explores some of the less-acknowledged yet critically important ways in which artificial intelligence (AI) may affect the public sector and its role. Our focus is on those areas where AI’s influence might be understated currently, but where it has substantial implications for future government policies and actions.

We identify four main areas of impact that could redefine the public sector role, require new answers from it, or both. These areas are the emergence of a new language-based digital divide, job displacement in public administration, disruptions in revenue mobilization, and declining government responsiveness.

This discussion not only identifies critical areas but also underscores the importance of transcending conventional approaches in tackling them. As we examine these challenges, we shed light on their significance, seeking to inform policymakers and stakeholders about the nuanced ways in which AI may quietly, yet profoundly, alter the public sector landscape…(More)”.

AI Accountability Policy Report


Report by NTIA: “Artificial intelligence (AI) systems are rapidly becoming part of the fabric of everyday American life. From customer service to image generation to manufacturing, AI systems are everywhere.

Alongside their transformative potential for good, AI systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security. Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm….


The AI Accountability Policy Report conceives of accountability as a chain of inputs linked to consequences. It focuses on how information flow (documentation, disclosures, and access) supports independent evaluations (including red-teaming and audits), which in turn feed into consequences (including liability and regulation) to create accountability. It concludes with recommendations for federal government action, some of which elaborate on themes in the AI EO, to encourage and possibly require accountability inputs…(More)”.

[Graphic: the AI Accountability Chain model]