What is a fair exchange for access to public data?


Blog and policy brief by Jeni Tennison: “The most obvious approach to get companies to share value back to the public sector in return for access to data is to charge them. However, there are a number of challenges with a “pay to access” approach: it’s hard to set the right price; it creates access barriers, particularly for cash-poor start-ups; and it creates a public perception that the government is willing to sell their data, and might be tempted to loosen privacy-protecting governance controls in exchange for cash.

Are there other options? The policy brief explores a range of other approaches and assesses these against five goals that a value-sharing framework should ideally meet, to:

  • Encourage use of public data, including by being easy for organisations to understand and administer.
  • Provide a return on investment for the public sector, offsetting at least some of the costs of supporting the NDL infrastructure and minimising administrative costs.
  • Promote equitable innovation and economic growth in the UK, which might mean particularly encouraging smaller, home-grown businesses.
  • Create social value, particularly towards this Government’s other missions, such as achieving Net Zero or unlocking opportunity for all.
  • Build public trust by being easily explainable, avoiding misaligned incentives that encourage the breaking of governance guardrails, and feeling like a fair exchange.

In brief, alternatives to a pay-to-access model that still provide direct financial returns include:

  • Discounts: the public sector could secure discounts on products and services created using public data. However, this could be difficult to administer and enforce.
  • Royalties: taking a percentage of charges for products and services created using public data might be similarly hard to administer and enforce, but applies to more companies.
  • Equity: taking equity in startups can provide long-term returns and align with public investment goals.
  • Levies: targeted taxes on businesses that use public data can provide predictable revenue and encourage data use.
  • General taxation: general taxation can fund data infrastructure, but it may lack the targeted approach and public visibility of other methods.

It’s also useful to consider non-financial conditions that could be put on organisations accessing public data…(More)”.

A crowd-sourced repository for valuable government data


About: “DataLumos is an ICPSR archive for valuable government data resources. ICPSR has a long commitment to safekeeping and disseminating US government and other social science data. DataLumos accepts deposits of public data resources from the community and recommendations of public data resources that ICPSR itself might add to DataLumos. Please consider making a monetary donation to sustain DataLumos…(More)”.

The Age of AI in the Life Sciences: Benefits and Biosecurity Considerations


Report by the National Academies of Sciences, Engineering, and Medicine: “Artificial intelligence (AI) applications in the life sciences have the potential to enable advances in biological discovery and design at a faster pace and efficiency than is possible with classical experimental approaches alone. At the same time, AI-enabled biological tools developed for beneficial applications could potentially be misused for harmful purposes. Although the creation of biological weapons is not a new concept or risk, the potential for AI-enabled biological tools to affect this risk has raised concerns during the past decade.

This report, as requested by the Department of Defense, assesses how AI-enabled biological tools could uniquely impact biosecurity risk, and how advancements in such tools could also be used to mitigate these risks. The Age of AI in the Life Sciences reviews the capabilities of AI-enabled biological tools and can be used in conjunction with the 2018 National Academies report, Biodefense in the Age of Synthetic Biology, which sets out a framework for identifying the different risk factors associated with synthetic biology capabilities…(More)”

Being heard: Shaping digital futures for and with children


Blog by Laura Betancourt Basallo, Kim R. Sylwander and Sonia Livingstone: “One in three internet users is a child. Digital technologies are shaping children’s present and future, yet most digital spaces are designed by adults, for adults. Despite this disconnect, digital platforms have emerged as important spaces for children’s participation in political and cultural life, partly because this is often limited in traditional spaces.

Children’s access to and participation in the digital environment is not just desirable: the UN Convention on the Rights of the Child applies equally online and offline. Article 12 outlines children’s right to be heard in ways that genuinely influence the decisions affecting their lives. In 2021, the Committee on the Rights of the Child published its General comment No. 25, the authoritative framework on how children’s rights should be applied in relation to the digital environment—this emphasises the importance of children’s right to be heard, and to participation in the digital sphere.

Core elements for meaningful participation

Creating meaningful and rights-respecting opportunities for child and youth participation in research, policymaking, and product design demands strategic planning and practical actions. As scholar Laura Lundy explains, these opportunities should guarantee to children:

  • SPACE: Children must be allowed to express their views.
  • VOICE: Children must be facilitated to express their views.
  • AUDIENCE: Their views must be listened to.
  • INFLUENCE: Their views must be acted upon as appropriate.

This rights-based approach emphasises the importance of not just collecting children’s views but actively listening to them and ensuring that their input is meaningfully acted upon, while avoiding the pitfalls of tokenism, manipulation or unsafe practices. Implementing such engagement requires careful consideration of safeguards regarding privacy, freedom of thought, and inclusive access for children with limited digital skills or access.

Here we provide a curated list of resources for conducting consultations with children, both through digital technologies and about the digital environment…(More)”.

Nudges and Nudging: A User’s Manual


Paper by Cass Sunstein: “Many policies take the form of nudges, defined as liberty-preserving approaches that steer people in particular directions, but that also allow them to go their own way. Some nudges attempt to correct self-control problems. Some nudges attempt to counteract unrealistic optimism. Some nudges attempt to correct present bias. Some nudges attempt to correct market failures, as when people are nudged not to emit air pollution. For every conventional market failure, there is a potential nudge. For every behavioral bias (optimistic bias, present bias, availability bias, limited attention), there is a responsive nudge. There are many misconceptions about nudges and nudging, and they are a diversion…(More)”.

Can Real-Time Metrics Fill China’s Data Gap?


Case-study by Danielle Goldfarb: “After Chinese authorities abruptly reversed the country’s zero-COVID policy in 2022, global policymakers needed a clear and timely picture of the economic and health fallout.

China’s economy is the world’s second largest and the country has deep global links, so an accurate picture of its trajectory mattered for global health, growth and inflation. Getting a solid read was a challenge, however, since official health and economic data not only were not timely, but were widely viewed as unreliable.

There are now vast amounts and varied types of digital data available, from satellite images to social media text to online payments; these, along with advances in artificial intelligence (AI), make it possible to collect and analyze digital data in ways previously impossible.

Could these new tools help governments and global institutions refute or confirm China’s official picture and gather more timely intelligence?…(More)”.

Launch: A Blueprint to Unlock New Data Commons for Artificial Intelligence (AI)


Blueprint by Hannah Chafetz, Andrew J. Zahuranec, and Stefaan Verhulst: “In today’s rapidly evolving AI landscape, it is critical to broaden access to diverse and high-quality data to ensure that AI applications can serve all communities equitably. Yet, we are on the brink of a potential “data winter,” where valuable data assets that could drive public good are increasingly locked away or inaccessible.

Data commons — collaboratively governed ecosystems that enable responsible sharing of diverse datasets across sectors — offer a promising solution. By pooling data under clear standards and shared governance, data commons can unlock the potential of AI for public benefit while ensuring that its development reflects the diversity of experiences and needs across society.

To accelerate the creation of data commons, the Open Data Policy Lab today releases “A Blueprint to Unlock New Data Commons for AI” — a guide on how to steward data to create data commons that enable public-interest AI use cases…the document is aimed at supporting libraries, universities, research centers, and other data holders (e.g. governments and nonprofits) through four modules:

  • Mapping the Demand and Supply: Understanding why AI systems need data, what data can be made available to train, adapt, or augment AI, and what a viable data commons prototype might look like that incorporates stakeholder needs and values;
  • Unlocking Participatory Governance: Co-designing key aspects of the data commons with key stakeholders and documenting these aspects within a formal agreement;
  • Building the Commons: Establishing the data commons from a practical perspective and ensuring all stakeholders are incentivized to implement it; and
  • Assessing and Iterating: Evaluating how the commons is working and iterating as needed.

These modules are further supported by two supplementary taxonomies. “The Taxonomy of Data Types” provides a list of data types that can be valuable for public-interest generative AI use cases. The “Taxonomy of Use Cases” outlines public-interest generative AI applications that can be developed using a data commons approach, along with possible outcomes and stakeholders involved.

A separate set of worksheets can be used to further guide organizations in deploying these tools…(More)”.

A Funder’s Guide to Citizens’ Assemblies


Democracy Funders Network: “For too many Americans, the prospect of engaging with lawmakers about the important issues in their lives is either logistically inaccessible or unsatisfactory in result. Exploring An Innovative Approach to Democratic Governance: A Funder’s Guide to Citizens’ Assemblies, produced by Democracy Funders Network and New America, explores the potential for citizens’ assemblies to transform and strengthen democratic processes in the U.S. The guide offers philanthropists an in-depth look at the potential opportunities and challenges citizens’ assemblies present for building civic power at the local level and fostering authentic civic engagement within communities.

Citizens’ assemblies belong in the broader field of collaborative governance, an umbrella term for public engagement that shifts governing power and builds trust by bringing together government officials and community members to collaborate on policy outcomes through shared decision-making…(More)”.

Vetted Researcher Data Access


Coimisiún na Meán: “Article 40 of the Digital Services Act (DSA) makes provision for researchers to access data from Very Large Online Platforms (VLOPs) or Very Large Online Search Engines (VLOSEs) for the purposes of studying systemic risk in the EU and assessing mitigation measures. There are two ways for researchers studying systemic risk in the EU to get access to data under Article 40 of the DSA.

  • Non-public data, known as “vetted researcher data access”, under Article 40(4)-(11). This is a process where a researcher, who has been vetted or assessed by a Digital Services Coordinator to have met the criteria as set out in DSA Article 40(8), can request access to non-public data held by a VLOP/VLOSE. The data must be limited in scope and deemed necessary and proportionate to the purpose of the research.
  • Public data under Article 40(12). This is a process where a researcher who meets the relevant criteria can apply for data access directly from a VLOP/VLOSE, for example, access to a content library or API of public posts…(More)”.

Funding the Future: Grantmakers Strategies in AI Investment


Report by Project Evident: “…looks at how philanthropic funders are approaching requests to fund the use of AI… there was common recognition of AI’s importance and the tension between the need to learn more and to act quickly to meet the pace of innovation, adoption, and use of AI tools.

This research builds on the work of a February 2024 Project Evident and Stanford Institute for Human-Centered Artificial Intelligence working paper, Inspiring Action: Identifying the Social Sector AI Opportunity Gap. That paper reported that more practitioners than funders (by over a third) claimed their organization utilized AI. 

“From our earlier research, as well as in conversations with funders and nonprofits, it’s clear there’s a mismatch in the understanding and desire for AI tools and the funding of AI tools,” said Sarah Di Troia, Managing Director of Project Evident’s OutcomesAI practice and author of the report. “Grantmakers have an opportunity to quickly upskill their understanding – to help nonprofits improve their efficiency and impact, of course, but especially to shape the role of AI in civil society.”

The report offers a number of recommendations to the philanthropic sector. For example, funders and practitioners should ensure that community voice is included in the implementation of new AI initiatives to build trust and help reduce bias. Grantmakers should consider funding that allows for flexibility and innovation so that the social and education sectors can experiment with approaches. Most importantly, funders should increase their capacity and confidence in assessing AI implementation requests along both technical and ethical criteria…(More)”.