Announcing SPARROW: A Breakthrough AI Tool to Measure and Protect Earth’s Biodiversity in the Most Remote Places


Blog by Juan Lavista Ferres: “The biodiversity of our planet is rapidly declining. We’ve likely reached a tipping point where it is crucial to use every tool at our disposal to help preserve what remains. That’s why I am pleased to announce SPARROW—Solar-Powered Acoustic and Remote Recording Observation Watch, developed by Microsoft’s AI for Good Lab. SPARROW is an AI-powered edge computing solution designed to operate autonomously in the most remote corners of the planet. Solar-powered and equipped with advanced sensors, it collects biodiversity data—from camera traps, acoustic monitors, and other environmental detectors—that are processed using our most advanced PyTorch-based wildlife AI models on low-energy edge GPUs. The resulting critical information is then transmitted via low-Earth orbit satellites directly to the cloud, allowing researchers to access fresh, actionable insights in real time, no matter where they are. 

Think of SPARROW as a network of Earth-bound satellites, quietly observing and reporting on the health of our ecosystems without disrupting them. By leveraging solar energy, these devices can operate autonomously for extended periods, minimizing their footprint and any potential harm to the environment…(More)”.

It Was the Best of Times, It Was the Worst of Times: The Dual Realities of Data Access in the Age of Generative AI


Article by Stefaan Verhulst: “It was the best of times, it was the worst of times… It was the spring of hope, it was the winter of despair.” –Charles Dickens, A Tale of Two Cities

Charles Dickens’s famous line captures the contradictions of the present moment in the world of data. On the one hand, data has become central to addressing humanity’s most pressing challenges — climate change, healthcare, economic development, public policy, and scientific discovery. On the other hand, despite the unprecedented quantity of data being generated, significant obstacles remain to accessing and reusing it. As our digital ecosystems evolve, including the rapid advances in artificial intelligence, we find ourselves both on the verge of a golden era of open data and at risk of slipping deeper into a restrictive “data winter.”

A Tale of Two Cities by Charles Dickens (1902)

The article examines these two concurrent realities: the challenges posed by growing restrictions on data reuse, and the countervailing potential brought by advancements in privacy-enhancing technologies (PETs), synthetic data, and data commons approaches. It argues that while current trends toward closed data ecosystems threaten innovation, new technologies and frameworks could lead to a “Fourth Wave of Open Data,” potentially ushering in a new era of data accessibility and collaboration…(More)” (First published in the Industry Data for Society Partnership’s (IDSP) 2024 Year in Review).

The Emergence of National Data Initiatives: Comparing proposals and initiatives in the United Kingdom, Germany, and the United States


Article by Stefaan Verhulst and Roshni Singh: “Governments are increasingly recognizing data as a pivotal asset for driving economic growth, enhancing public service delivery, and fostering research and innovation. This recognition has intensified as policymakers acknowledge that data serves as the foundational element of artificial intelligence (AI) and that advancing AI sovereignty necessitates a robust data ecosystem. However, substantial portions of generated data remain inaccessible or underutilized. In response, several nations are initiating or exploring the launch of comprehensive national data strategies designed to consolidate, manage, and utilize data more effectively and at scale. As these initiatives evolve, discernible patterns in their objectives, governance structures, data-sharing mechanisms, and stakeholder engagement frameworks reveal both shared principles and country-specific approaches.

This blog offers some initial research on the emergence of national data initiatives by examining three of them and exploring their strategic orientations and broader implications. They include:

Impact Inversion


Blog by Victor Zhenyi Wang: “The very first project I worked on when I transitioned from commercial data science to development was during the nadir between South Africa’s first two COVID waves. A large international foundation was interested in working with the South African government and a technology non-profit to build an early warning system for COVID. The non-profit operated a WhatsApp based health messaging service that served about 2 million people in South Africa. The platform had run a COVID symptoms questionnaire which the foundation hoped could help the government predict surges in cases.

This kind of data-based “nowcasting” proved a useful tool in a number of other places, e.g. some cities in the US. Yet in the context of South Africa, where the National Department of Health was mired in serious capacity constraints, government stakeholders were bearish about the usefulness of such a tool. Nonetheless, since the foundation was interested in funding this project, we went ahead with it anyway. The result was that we pitched this “early warning system” a handful of times to polite public health officials, but it was otherwise never used. A classic case of development practitioners rendering problems technical and generating non-solutions that primarily serve the strategic objectives of the funders.

The technology non-profit did however express interest in a different kind of service — what about a language model that helps users answer questions about COVID? The non-profit’s WhatsApp messaging service is menu-based and they thought that a natural language interface could provide a better experience for users by letting them engage with health content on their own terms. Since we had ample funding from the foundation for the early warning system, we decided to pursue the chatbot project.

The project has since expanded to multiple other services run by the same non-profit, including the largest digital health service in South Africa. The project has won multiple grants and partnerships, including with Google, and has spun out into its own open source library. In many ways, in terms of sheer number of lives affected, this is the most impactful project I have had the privilege of supporting in my career in development, and I am deeply grateful to have been part of the team involved in bringing it into existence.

Yet the truth is, the “impact” of this class of interventions remains unclear. Even though a large randomized controlled trial was done to assess the impact of the WhatsApp service, such an evaluation only captures the performance of the service on outcome variables determined by the non-profit, not on whether these outcomes are appropriate. It certainly does not tell us whether the service was the best means available to achieve the ultimate goal of improving the lives of those in communities underserved by health services.

This project, and many others that I have worked on as a data scientist in development, uses an implicit framework for impact which I describe as the design-to-impact pipeline. A technology is designed and developed, then its impact on the world is assessed. There is a strong emphasis on reform: on improving the design, development, and deployment of development technologies. Development practitioners have a broad range of techniques to make sure that the process of creation is ethical and responsible — in some sense, legitimate. With the broad adoption of data-based methods of program evaluation, e.g. randomized controlled trials, we might even make knowledge claims that an intervention truly ought to bring certain benefits to the communities in which it is placed. This view imagines that a technology, once this process is completed, is simply unleashed onto the world, and that its impact is simply what was assessed ex ante. An industry of monitoring and evaluation surrounds its subsequent deployment; the relative success of interventions depends on the performance of benchmark indicators…(More)”.

The Emergent Landscape of Data Commons: A Brief Survey and Comparison of Existing Initiatives


Article by Stefaan G. Verhulst and Hannah Chafetz: “With the increased attention on the need for data to advance AI, data commons initiatives around the world are redefining how data can be accessed and re-used for societal benefit. These initiatives focus on generating access to data from various sources for a public purpose and are governed by communities themselves. While diverse in focus, from health and mobility to language and environmental data, data commons are united by a common goal: democratizing access to data to fuel innovation and tackle global challenges.

This includes innovation in the context of artificial intelligence (AI). Data commons are providing the framework to make pools of diverse data available in machine-understandable formats for responsible AI development and deployment. By providing access to high-quality data sources with open licensing, data commons can help increase the quantity of training data in a less exploitative fashion, minimize AI providers’ reliance on data extracted from across the internet without an open license, and increase the quality of AI output (while reducing misinformation).

Over the last few months, the Open Data Policy Lab (a collaboration between The GovLab and Microsoft) has conducted various research initiatives to explore these topics further and understand:

(1) how the concept of a data commons is changing in the context of artificial intelligence, and

(2) current efforts to advance the next generation of data commons.

In what follows we provide a summary of our findings thus far. We hope it inspires more data commons use cases for responsible AI innovation in the public’s interest…(More)”.

Two Open Science Foundations: Data Commons and Stewardship as Pillars for Advancing the FAIR Principles and Tackling Planetary Challenges


Article by Stefaan Verhulst and Jean Claude Burgelman: “Today the world is facing three major planetary challenges: war and peace, steering Artificial Intelligence, and making the planet a healthy Anthropocene. As they are closely interrelated, they represent an era of “polycrisis”, to use the term Adam Tooze has popularized. There are no simple solutions or quick fixes to these (and other) challenges; their interdependencies demand a multi-stakeholder, interdisciplinary approach.

As world leaders and experts convene in Baku for The 29th session of the Conference of the Parties to the United Nations Framework Convention on Climate Change (COP29), the urgency of addressing these global crises has never been clearer. A crucial part of addressing these challenges lies in advancing science — particularly open science, underpinned by data made available leveraging the FAIR principles (Findable, Accessible, Interoperable, and Reusable). In this era of computation, the transformative potential of research depends on the seamless flow and reuse of high-quality data to unlock breakthrough insights and solutions. Ensuring data is available in reusable, interoperable formats not only accelerates the pace of scientific discovery but also expedites the search for solutions to global crises.

Image of the retreat of the Columbia glacier by Jesse Allen, using Landsat data from the U.S. Geological Survey. Free to re-use from NASA Visible Earth.

While FAIR principles provide a vital foundation for making data accessible, interoperable, and reusable, translating these principles into practice requires robust institutional approaches. Toward that end, we argue below that two foundational pillars must be strengthened:

  • Establishing Data Commons: The need for shared data ecosystems where resources can be pooled, accessed, and re-used collectively, breaking down silos and fostering cross-disciplinary collaboration.
  • Enabling Data Stewardship: Systematic and responsible data reuse requires more than access; it demands stewardship — equipping institutions and scientists with the capabilities to maximize the value of data while safeguarding its responsible use is essential…(More)”.

People-centred and participatory policymaking


Blog by the UK Policy Lab: “…Different policies can play out in radically different ways depending on circumstance and place. Accordingly, it is important for policy professionals to have access to a diverse suite of people-centred methods, from gentle and compassionate techniques that increase understanding with small groups of people to higher-profile, larger-scale engagements. The image below shows a spectrum of people-centred and participatory methods that can be used in policy, ranging from light-touch involvement (e.g. consultation), to structured deliberation (e.g. citizens’ assemblies) and deeper collaboration and empowerment (e.g. participatory budgeting). This spectrum of participation is speculatively mapped against stages of the policy cycle…(More)”.

How to evaluate statistical claims


Blog by Sean Trott: “…The goal of this post is to distill what I take to be the most important, immediately applicable, and generalizable insights from these classes. That means that readers should be able to apply those insights without a background in math or knowing how to, say, build a linear model in R. In that way, it’ll be similar to my previous post about “useful cognitive lenses to see through”, but with a greater focus on evaluating claims specifically.

Lesson #1: Consider the whole distribution, not just the central tendency.

If you spend much time reading news articles or social media posts, the odds are good you’ll encounter some descriptive statistics: numbers summarizing or describing a distribution (a set of numbers or values in a dataset). One of the most commonly used descriptive statistics is the arithmetic mean: the sum of every value in a distribution, divided by the number of values overall. The arithmetic mean is a measure of “central tendency”, which just means it’s a way to characterize the typical or expected value in that distribution.

The arithmetic mean is a really useful measure. But as many readers might already know, it’s not perfect. It’s strongly affected by outliers—values that are really different from the rest of the distribution—and by things like the skew of a distribution (see the image below for examples of skewed distributions).

Three different distributions. Leftmost is a roughly “normal” distribution; middle is a “right-skewed” distribution; and rightmost is a “left-skewed” distribution.

In particular, the mean is pulled in the direction of outliers or distribution skew. That’s the logic behind the joke about the average salary of people at a bar jumping up as soon as a billionaire walks in. It’s also why other measures of central tendency, such as the median, are often presented alongside (or instead of) the mean—especially for distributions that happen to be very skewed, such as income or wealth.
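The bar joke is easy to verify in a few lines of Python using the standard library’s `statistics` module (the patrons and their salaries below are, of course, made up for illustration):

```python
import statistics

# Hypothetical salaries of ten patrons at a bar, in dollars (made-up numbers)
salaries = [32_000, 35_000, 38_000, 41_000, 45_000,
            48_000, 52_000, 55_000, 60_000, 64_000]

print(statistics.mean(salaries))    # 47000
print(statistics.median(salaries))  # 46500.0

# A billionaire walks in...
salaries.append(1_000_000_000)

# The mean explodes; the median barely moves.
print(statistics.mean(salaries))    # roughly 90.9 million
print(statistics.median(salaries))  # 48000
```

The median shifts by only $1,500 while the mean leaps past $90 million, which is why the median is usually the more honest summary of a heavily skewed distribution.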

It’s not that one of these measures is more “correct”. As Stephen Jay Gould wrote in his article “The Median Isn’t the Message”, they’re just different perspectives on the same distribution:

A politician in power might say with pride, “The mean income of our citizens is $15,000 per year.” The leader of the opposition might retort, “But half our citizens make less than $10,000 per year.” Both are right, but neither cites a statistic with impassive objectivity. The first invokes a mean, the second a median. (Means are higher than medians in such cases because one millionaire may outweigh hundreds of poor people in setting a mean, but can balance only one mendicant in calculating a median.)…(More)”.

Access, Signal, Action: Data Stewardship Lessons from Valencia’s Floods


Article by Marta Poblet, Stefaan Verhulst, and Anna Colom: “Valencia has a rich history in water management, a legacy shaped by both triumphs and tragedies. This connection to water is embedded in the city’s identity, yet modern floods test its resilience in new ways.

During the recent floods, Valencians experienced a troubling paradox. In today’s connected world, digital information flows through traditional and social media, weather apps, and government alert systems designed to warn us of danger and guide rapid responses. Despite this abundance of data, a tragedy unfolded last month in Valencia. This raises a crucial question: how can we ensure access to the right data, filter it for critical signals, and transform those signals into timely, effective action?

Data stewardship becomes essential in this process.

In particular, the devastating floods in Valencia underscore the importance of:

  • having access to data to strengthen the signal (first mile challenges)
  • separating signal from noise
  • translating signal into action (last mile challenges)…(More)”.

Mini-publics and the public: challenges and opportunities


Conversation between Sarah Castell and Stephen Elstub: “…there’s a real problem here: the public are only going to get to know about a mini-public if it gets media coverage, but the media will only cover it if it makes an impact. But it’s more likely to make an impact if the public are aware of it. That’s a tension that mini-publics need to overcome, because it’s important that they reach out to the public. Ultimately it doesn’t matter how inclusive the recruitment is and how well it’s done. It doesn’t matter how well designed the process is. It is still a small number of people involved, so we want mini-publics to be able to influence public opinion and stimulate public debate. And if they can do that, then it’s more likely to affect elite opinion and debate as well, and possibly policy.

One more thing is that people in power aren’t in the habit of sharing power. And that’s why it’s very difficult. I think politicians are mainly motivated here because they hope it’s going to look good to the electorate and get them some votes, but they are also worried about low levels of trust in society and what the ramifications of that might be. But in general, people in power don’t give it away very easily…

Part of the problem is that a lot of the research around public views on deliberative processes was done through experiments. It is useful, but it doesn’t quite tell us what will happen when mini-publics are communicated to the public in the messy real public sphere. Previously, there just weren’t that many well-known cases that we could actually do field research on. But that is starting to change.

There’s also more interdisciplinary work needed in this area. We need to improve how communication strategies around citizens’ assemblies are done – there must be relevant work in political communication studies and other fields that have this kind of insight…(More)”.