Ethical implications related to processing of personal data and artificial intelligence in humanitarian crises: a scoping review


Paper by Tino Kreutzer et al.: “Humanitarian organizations are rapidly expanding their use of data in the pursuit of operational gains in effectiveness and efficiency. Ethical risks, particularly from artificial intelligence (AI) data processing, are increasingly recognized yet inadequately addressed by current humanitarian data protection guidelines. This study reports on a scoping review that maps the range of ethical issues that have been raised in the academic literature regarding data processing of people affected by humanitarian crises….

We identified 16,200 unique records and retained 218 relevant studies. Nearly one in three (n = 66) discussed technologies related to AI. Seventeen studies included an author from a lower-middle-income country, while four included an author from a low-income country. We identified 22 ethical issues, which were then grouped under the four ethical value categories of autonomy, beneficence, non-maleficence, and justice. Slightly over half of the included studies (n = 113) identified ethical issues based on real-world examples. The most-cited ethical issue (n = 134) was a concern for privacy in cases where personal or sensitive data might be inadvertently shared with third parties. Aside from AI, the technologies most frequently discussed in these studies included social media, crowdsourcing, and mapping tools.

Studies highlight significant concerns that data processing in humanitarian contexts can cause additional harm, may not provide direct benefits, may limit affected populations’ autonomy, and can lead to the unfair distribution of scarce resources. The increase in AI tool deployment for humanitarian assistance amplifies these concerns. Urgent development of specific, comprehensive guidelines, training, and auditing methods is required to address these ethical challenges. Moreover, empirical research from low- and middle-income countries, which are disproportionately affected by humanitarian crises, is vital to ensure inclusive and diverse perspectives. This research should focus on the ethical implications of both emerging AI systems and established humanitarian data management practices…(More)”.

Federated learning for children’s data


Article by Roy Saurabh: “Across the world, governments are prioritizing the protection of citizens’ data – especially that of children. New laws, dedicated data protection authorities, and digital infrastructure initiatives reflect a growing recognition that data is not just an asset, but a foundation for public trust. 

Yet a major challenge remains: how can governments use sensitive data to improve outcomes – such as in education – without undermining the very privacy protections they are committed to upholding?

One promising answer lies in federated, governance-aware approaches to data use. But realizing this potential requires more than new technology; it demands robust data governance frameworks designed from the outset.
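To make the federated idea concrete, here is a minimal sketch of federated averaging, in which each ministry trains a model on its own records and shares only the resulting weights, never the underlying child-level data. Everything here (data, model, names) is our illustration, not an implementation described in the article.

```python
# Minimal federated-averaging sketch: ministries train locally and share
# only model weights; raw records never leave their source systems.
# All names and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One ministry's local training step: gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three ministries (e.g. education, health, social protection), each holding
# private (features, outcome) data that stays on their own infrastructure.
ministries = [(rng.normal(size=(100, 3)), rng.normal(size=100)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(10):  # federation rounds
    local_ws = [local_update(global_w, X, y) for X, y in ministries]
    global_w = np.mean(local_ws, axis=0)  # the server only ever sees weights

print("aggregated model weights:", global_w)
```

In a real deployment, the governance framework the article calls for would sit around this loop, determining which ministries may participate, for what purposes, and with what auditing.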

Data governance: The missing link

In many countries, ministries of education, health, and social protection each hold pieces of the puzzle that together could provide a more complete picture of children’s learning and well-being. For example, a child’s school attendance, nutritional status, and family circumstances all shape their ability to thrive, yet these records are kept in separate systems.

Efforts to combine such data often run into legal and technical barriers. Centralized data lakes raise concerns about consent, security, and compliance with privacy laws. In fact, many international standards stress the principle of data minimization – the idea that personal information should not be gathered or combined unnecessarily. 

This is where the right data governance frameworks become essential. Effective governance defines clear rules about how data can be accessed, shared, and used – specifying who has the authority, what purposes are permitted, and how rights are protected. These frameworks make it possible to collaborate with data responsibly, especially when it comes to children…(More)”

Humanitarian aid depends on good data: what’s wrong with the way it’s collected


Article by Vicki Squire: “The defunding of the US Agency for International Development (USAID), along with reductions in aid from the UK and elsewhere, raises questions about the continued collection of data that helps inform humanitarian efforts.

Humanitarian response plans rely on accurate, accessible and up-to-date data. Aid organisations use this to review needs, monitor health and famine risks, and ensure security and access for humanitarian operations.

The reliance on data – and in particular large-scale digitalised data – has intensified in the humanitarian sector over the past few decades. Major donors all proclaim a commitment to evidence-based decision making. The International Organization for Migration’s Displacement Tracking Matrix and the REACH impact initiative are two examples designed to improve operational and strategic awareness of key needs and risks.

Humanitarian data streams have already been affected by USAID cuts. For example, the Famine Early Warning Systems Network was abruptly closed, while the Demographic and Health Surveys programme was “paused”. The latter informed global health policies in areas ranging from maternal health and domestic violence to anaemia and HIV prevalence.

The loss of reliable, accessible and up-to-date data threatens monitoring capacity and early warning systems, while reducing humanitarian access and rendering security failures more likely…(More)”.

Data Commons: The Missing Infrastructure for Public Interest Artificial Intelligence


Article by Stefaan Verhulst, Burton Davis and Andrew Schroeder: “Artificial intelligence is celebrated as the defining technology of our time. From ChatGPT to Copilot and beyond, generative AI systems are reshaping how we work, learn, and govern. But behind the headline-grabbing breakthroughs lies a fundamental problem: The data these systems depend on to produce useful results that serve the public interest is increasingly out of reach.

Without access to diverse, high-quality datasets, AI models risk reinforcing bias, deepening inequality, and returning less accurate, less reliable results. Yet, access to data remains fragmented, siloed, and increasingly enclosed. What was once open—government records, scientific research, public media—is now locked away by proprietary terms, outdated policies, or simple neglect. We are entering a data winter just as AI’s influence over public life is heating up.

This isn’t just a technical glitch. It’s a structural failure. What we urgently need is new infrastructure: data commons.

A data commons is a shared pool of data resources—responsibly governed, managed using participatory approaches, and made available for reuse in the public interest. Done correctly, commons can ensure that communities and other networks have a say in how their data is used, that public interest organizations can access the data they need, and that the benefits of AI can be applied to meet societal challenges.

Commons offer a practical response to the paradox of data scarcity amid abundance. By pooling datasets across organizations—governments, universities, libraries, and more—they match data supply with real-world demand, making it easier to build AI that responds to public needs.
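As a toy illustration of what machine-readable governance for a pooled dataset might look like, the sketch below attaches purpose-based access terms to each contributed dataset. The registry, institutions, and rules are invented for illustration and are not drawn from any of the initiatives named here.

```python
# Toy data-commons registry: each contributed dataset carries governance
# terms, and access is granted only for declared, permitted purposes.
from dataclasses import dataclass, field

@dataclass
class CommonsDataset:
    name: str
    steward: str                      # contributing institution
    permitted_purposes: set = field(default_factory=set)

    def grant_access(self, requester: str, purpose: str) -> bool:
        """Purpose-based access check -- one concrete form of 'responsibly governed'."""
        allowed = purpose in self.permitted_purposes
        print(f"{requester} -> {self.name} for '{purpose}': "
              f"{'granted' if allowed else 'denied'}")
        return allowed

registry = [
    CommonsDataset("flood-sensor-readings", "City Water Authority",
                   {"public-interest-research", "early-warning"}),
    CommonsDataset("library-circulation", "Public Library Network",
                   {"public-interest-research"}),
]

for ds in registry:
    ds.grant_access("university-lab", "early-warning")
```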

We’re already seeing early signs of what this future might look like. Projects like Common Corpus, MLCommons, and Harvard’s Institutional Data Initiative show how diverse institutions can collaborate to make data both accessible and accountable. These initiatives emphasize open standards, participatory governance, and responsible reuse. They challenge the idea that data must be either locked up or left unprotected, offering a third way rooted in shared value and public purpose.

But the pace of progress isn’t matching the urgency of the moment. While policymakers debate AI regulation, they often ignore the infrastructure that makes public interest applications possible in the first place. Without better access to high-quality, responsibly governed data, AI for the common good will remain more aspiration than reality.

That’s why we’re launching The New Commons Challenge—a call to action for universities, libraries, civil society, and technologists to build data ecosystems that fuel public-interest AI…(More)”.

Global data-driven prediction of fire activity


Paper by Francesca Di Giuseppe, Joe McNorton, Anna Lombardi & Fredrik Wetterhall: “Recent advancements in machine learning (ML) have expanded its potential use across scientific applications, including weather and hazard forecasting. The ability of these methods to extract information from diverse and novel data types enables the transition from forecasting fire weather to predicting actual fire activity. In this study we demonstrate that this shift is also feasible within an operational context. Traditional fire forecasting methods tend to overpredict high fire danger, particularly in fuel-limited biomes, often resulting in false alarms. By using data on fuel characteristics, ignitions and observed fire activity, data-driven predictions reduce the false-alarm rate of high-danger forecasts, enhancing their accuracy. This is made possible by high-quality global datasets of fuel evolution and fire detection. We find that the quality of input data is more important for improving forecasts than the complexity of the ML architecture. While the focus on ML advancements is often justified, our findings highlight the importance of investing in high-quality data and, where necessary, creating it through physical models. Neglecting this aspect would undermine the potential gains from ML-based approaches, emphasizing that data quality is essential to achieve meaningful progress in fire activity forecasting…(More)”.
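The paper’s central point – that predicting fire activity requires fuel and ignition data, not just fire weather – can be illustrated with a small, hedged sketch. The synthetic data, feature choices, and model below are our assumptions, not the authors’ operational pipeline.

```python
# Illustrative sketch: predicting fire activity from fuel, moisture, weather,
# and ignition features with a standard tabular ML model (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.uniform(0, 1, n),  # fuel load (available dry biomass)
    rng.uniform(0, 1, n),  # fuel moisture
    rng.uniform(0, 1, n),  # fire-weather index
    rng.poisson(2, n),     # recent ignition counts (lightning, human)
])
# Fires need dangerous weather AND available, dry fuel -- which is why
# weather-only indices overpredict danger in fuel-limited biomes.
y = ((X[:, 2] > 0.7) & (X[:, 0] > 0.5) & (X[:, 1] < 0.4)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("precision of high-danger predictions:",
      precision_score(y_te, model.predict(X_te), zero_division=0))
```

In this toy setup a weather-only model would flag many fuel-limited false alarms; the fuel columns are what let the classifier suppress them, mirroring the paper’s finding that input data matters more than model complexity.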

New AI Collaboratives to take action on wildfires and food insecurity


Google: “…last September we introduced AI Collaboratives, a new funding approach designed to unite public, private and nonprofit organizations, and researchers, to create AI-powered solutions to help people around the world.

Today, we’re sharing more about our first two focus areas for AI Collaboratives: Wildfires and Food Security.

Wildfires are a global crisis, claiming more than 300,000 lives due to smoke exposure annually and causing billions of dollars in economic damage. …Google.org has convened more than 15 organizations, including Earth Fire Alliance and Moore Foundation, to help in this important effort. By coordinating funding and integrating cutting-edge science, emerging technology and on-the-ground applications, we can provide collaborators with the tools they need to identify and track wildfires in near real time; quantify wildfire risk; shift more acreage to beneficial fires; and ultimately reduce the damage caused by catastrophic wildfires.

Nearly one-third of the world’s population faces moderate or severe food insecurity due to extreme weather, conflict and economic shocks. The AI Collaborative: Food Security will strengthen the resilience of global food systems and improve food security for the world’s most vulnerable populations through AI technologies, collaborative research, data-sharing and coordinated action. To date, 10 organizations have joined us in this effort, and we’ll share more updates soon…(More)”.

A US-run system alerts the world to famines. It’s gone dark after Trump slashed foreign aid


Article by Lauren Kent: “A vital, US-run monitoring system focused on spotting food crises before they turn into famines has gone dark after the Trump administration slashed foreign aid.

The Famine Early Warning Systems Network (FEWS NET) monitors drought, crop production, food prices and other indicators in order to forecast food insecurity in more than 30 countries…Now, its work to prevent hunger in Sudan, South Sudan, Somalia, Yemen, Ethiopia, Afghanistan and many other nations has been stopped amid the Trump administration’s effort to dismantle the US Agency for International Development (USAID).

“These are the most acutely food insecure countries around the globe,” said Tanya Boudreau, the former manager of the project.

Amid the aid freeze, FEWS NET has no funding to pay staff in Washington or those working on the ground. The website is down. And its treasure trove of data that underpinned global analysis on food security – used by researchers around the world – has been pulled offline.

FEWS NET is considered the gold standard in the sector, and it publishes more frequent updates than other global monitoring efforts. Those frequent reports and projections are key, experts say, because food crises evolve over time, meaning early interventions save lives and money…The team at the University of Colorado Boulder has built a model to forecast water demand in Kenya, which feeds some data into the FEWS NET project but also relies on FEWS NET data provided by other research teams.

The data is layered and complex. And scientists say pulling the data hosted by the US disrupts other research and famine-prevention work conducted by universities and governments across the globe.

“It compromises our models, and our ability to be able to provide accurate forecasts of ground water use,” Denis Muthike, a Kenyan scientist and assistant research professor at UC Boulder, told CNN, adding: “You cannot talk about food security without water security as well.”

“Imagine that that data is available to regions like Africa and has been utilized for years and years – decades – to help inform decisions that mitigate catastrophic impacts from weather and climate events, and you’re taking that away from the region,” Muthike said. He cautioned that it would take many years to build another monitoring service that could reach the same level…(More)”.

AI could supercharge human collective intelligence in everything from disaster relief to medical research


Article by Hao Cui and Taha Yasseri: “Imagine a large city recovering from a devastating hurricane. Roads are flooded, the power is down, and local authorities are overwhelmed. Emergency responders are doing their best, but the chaos is massive.

AI-controlled drones survey the damage from above, while intelligent systems process satellite images and data from sensors on the ground and in the air to identify which neighbourhoods are most vulnerable.

Meanwhile, AI-equipped robots are deployed to deliver food, water and medical supplies into areas that human responders can’t reach. Emergency teams, guided and coordinated by AI and the insights it produces, are able to prioritise their efforts, sending rescue squads where they’re needed most.

This is no longer the realm of science fiction. In a recent paper published in the journal Patterns, we argue that it’s an emerging and inevitable reality.

Collective intelligence is the shared intelligence of a group or groups of people working together. Groups of people with diverse skills – firefighters and drone operators, for instance – work together to generate better ideas and solutions. AI can enhance this human collective intelligence, and transform how we approach large-scale crises. It’s a form of what’s called hybrid collective intelligence.

Instead of simply relying on human intuition or traditional tools, experts can use AI to process vast amounts of data, identify patterns and make predictions. By enhancing human decision-making, AI systems offer faster and more accurate insights – whether in medical research, disaster response, or environmental protection.

AI can do this by, for example, processing large datasets and uncovering insights that would take humans much longer to identify. AI can also get involved in physical tasks. In manufacturing, AI-powered robots can automate assembly lines, helping improve efficiency and reduce downtime.
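As a purely illustrative example of that kind of pattern extraction – returning to the hurricane scenario above – the sketch below scores neighbourhoods from sensor-derived indicators so that human teams can prioritise. The indicators, weights, and data are all invented.

```python
# Toy triage scoring: rank flood-hit neighbourhoods by urgency so rescue
# squads can be sent where they are needed most. All values are synthetic.
import numpy as np

rng = np.random.default_rng(1)
neighbourhoods = [f"district-{i}" for i in range(8)]

flood_depth = rng.uniform(0, 3, 8)       # metres, e.g. from drone imagery
population = rng.integers(1000, 20000, 8)
clinic_access = rng.uniform(0, 1, 8)     # 0 = cut off, 1 = fully reachable

# Higher score = more urgent: deep water, many residents, poor clinic access.
urgency = flood_depth * np.log(population) * (1 - clinic_access)

for name, score in sorted(zip(neighbourhoods, urgency), key=lambda p: -p[1]):
    print(f"{name}: urgency {score:.1f}")
```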

Equally crucial is information exchange, where AI enhances the flow of information, helping human teams coordinate more effectively and make data-driven decisions faster. Finally, AI can act as a social catalyst, facilitating more effective collaboration within human teams or even helping build hybrid teams of humans and machines working alongside one another…(More)”.

When forecasting and foresight meet data and innovation: toward a taxonomy of anticipatory methods for migration policy


Paper by Sara Marcucci, Stefaan Verhulst and María Esther Cervantes: “The various global refugee and migration events of the last few years underscore the need for advancing anticipatory strategies in migration policy. The struggle to manage large inflows (or outflows) highlights the demand for proactive measures based on a sense of the future. Anticipatory methods, ranging from predictive models to foresight techniques, emerge as valuable tools for policymakers. These methods, now bolstered by advancements in technology and leveraging nontraditional data sources, can offer a pathway to develop more precise, responsive, and forward-thinking policies.

This paper seeks to map out the rapidly evolving domain of anticipatory methods in the realm of migration policy, capturing the trend toward integrating quantitative and qualitative methodologies and harnessing novel tools and data. It introduces a new taxonomy designed to organize these methods into three core categories: Experience-based, Exploration-based, and Expertise-based. This classification aims to guide policymakers in selecting the most suitable methods for specific contexts or questions, thereby enhancing migration policies…(More)”
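A rough, unofficial encoding of that taxonomy might look like the following; the example methods assigned to each category are our guesses, not the authors’ own assignments.

```python
# Illustrative encoding of the paper's three anticipatory-method categories.
from enum import Enum

class AnticipatoryCategory(Enum):
    EXPERIENCE_BASED = "grounded in historical data and past events"
    EXPLORATION_BASED = "scenario building and horizon scanning"
    EXPERTISE_BASED = "structured elicitation of expert judgment"

examples = {  # hypothetical assignments for illustration
    "predictive migration-flow model": AnticipatoryCategory.EXPERIENCE_BASED,
    "scenario-planning workshop": AnticipatoryCategory.EXPLORATION_BASED,
    "Delphi panel of migration experts": AnticipatoryCategory.EXPERTISE_BASED,
}

for method, category in examples.items():
    print(f"{method}: {category.name.lower()} ({category.value})")
```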

Combine AI with citizen science to fight poverty


Nature Editorial: “Of the myriad applications of artificial intelligence (AI), its use in humanitarian assistance is underappreciated. In 2020, during the COVID-19 pandemic, Togo’s government used AI tools to identify tens of thousands of households that needed money to buy food, as Nature reports in a News Feature this week. Typically, potential recipients of such payments would be identified when they apply for welfare schemes, or through household surveys of income and expenditure. But such surveys were not possible during the pandemic, and the authorities needed to find alternative means to help those in need. Researchers used machine learning to comb through satellite imagery of low-income areas and combined that knowledge with data from mobile-phone networks to find eligible recipients, who then received a regular payment through their phones. Using AI tools in this way was a game-changer for the country.
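A heavily simplified sketch of that targeting logic appears below: an area-level poverty score (as might come from a satellite-imagery model) is combined with phone-usage features to rank individuals for payments. The features, weights, and budget are invented for illustration and do not describe the Togo programme’s actual model.

```python
# Toy satellite-plus-phone targeting: rank individuals by estimated need
# and fund the top of the list. All data and weights are invented.
import numpy as np

rng = np.random.default_rng(7)
n_people = 1000

area_poverty = rng.uniform(0, 1, n_people)      # from an imagery model (1 = poorest area)
airtime_spend = rng.exponential(5.0, n_people)  # phone-metadata proxy for consumption
intl_calls = rng.poisson(0.5, n_people)         # international calls suggest better-off users

# Composite need score: poorer area, lower spend, fewer international calls.
need = area_poverty - 0.05 * airtime_spend - 0.2 * intl_calls

budget = 200  # payments the programme can fund
recipients = np.argsort(need)[::-1][:budget]

print(f"selected {budget} recipients; mean area poverty "
      f"{area_poverty[recipients].mean():.2f} vs population {area_poverty.mean():.2f}")
```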

Now, with the pandemic over, researchers and policymakers are continuing to see how AI methods can be used in poverty alleviation. This needs comprehensive and accurate data on the state of poverty in households. For example, to be able to help individual families, authorities need to know about the quality of their housing, their children’s diets, their education and whether families’ basic health and medical needs are being met. This information is typically obtained from in-person surveys. However, researchers have seen a fall in response rates when collecting these data.

Missing data

Gathering survey-based data can be especially challenging in low- and middle-income countries (LMICs). In-person surveys are costly to do and often miss some of the most vulnerable, such as refugees, people living in informal housing or those who earn a living in the cash economy. Some people are reluctant to participate out of fear that there could be harmful consequences — deportation in the case of undocumented migrants, for instance. But unless their needs are identified, it is difficult to help them.

Could AI offer a solution? The short answer is yes, although with caveats. The Togo example shows how AI-informed approaches helped communities by combining knowledge of geographical areas of need with more-individual data from mobile phones. It’s a good example of how AI tools work well with granular, household-level data. Researchers are now homing in on a relatively untapped source for such information: data collected by citizen scientists, also known as community scientists. This idea deserves more attention and more funding.

Thanks to technologies such as smartphones, Wi-Fi and 4G, there has been an explosion of people in cities, towns and villages collecting, storing and analysing their own social and environmental data. In Ghana, for example, volunteer researchers are collecting data on marine litter along the coastline and contributing this knowledge to their country’s official statistics…(More)”.