Solferino 21: Warfare, Civilians and Humanitarians in the Twenty-First Century


Book by Hugo Slim: “War is at a tipping point: we’re passing from the age of industrial warfare to a new era of computerised warfare, and a renewed risk of great-power conflict. Humanitarian response is also evolving fast—‘big aid’ demands more and more money, while aid workers try to digitalise, preparing to meet ever-broader needs in the long, big wars and climate crisis of the future. 

This book draws on the founding moment of the modern Red Cross movement—the 1859 Battle of Solferino, a moment of great change in the nature of conflict—to track the big shifts already underway, and still to come, in the wars and war aid of our century. Hugo Slim first surveys the current landscape: the tech, politics, law and strategy of warfare, and the long-term transformations ahead as conflict goes digital. He then explains how civilians both suffer and survive in today’s wars, and how their world is changing. Finally, he critiques today’s humanitarian system, citing the challenges of the 2020s.   

Inspired by Henri Dunant’s seminal humanitarian text, Solferino 21 alerts policymakers to the coming shakeup of the military and aid professions, illuminating key priorities for the new century. Humanitarians, he warns, must adapt or fail….(More)”.

Selected Readings on Digital Self-Determination for Migrants


By Uma Kalkar, Marine Ragnet, and Stefaan Verhulst

Digital self-determination (DSD) is a multidisciplinary concept that extends self-determination to the digital sphere. Self-determination places humans (and their ability to make ‘moral’ decisions) at the center of decision-making actions. While self-determination is considered a jus cogens rule (i.e., a global norm), the concept of digital self-determination only came to light in the early 2010s as a result of the increasing digitization of most aspects of society. 

While digitalization has opened up new opportunities for self-expression and communication for individuals across the globe, its reach and benefits have not been evenly distributed. Migrants and refugees, for instance, are particularly vulnerable to the deepening inequalities and power structures brought on by increased digitization and the subsequent datafication. Further, non-traditional data, such as social media and telecom data, hold great potential to improve our understanding of the migration experience and patterns of mobility, and thereby to inform more targeted migration policies and services. Yet they have also raised new concerns about migrants’ lack of agency in determining how their data are used and who shapes the migration narrative.

These selected readings look at DSD in light of the growing ubiquity of technology applications and specifically focus on their impacts on migrants. They were produced to inform the first studio on DSD and migration co-hosted by the Big Data for Migration Alliance and the International Digital Self Determination Network. The readings are listed in alphabetical order.

These readings serve as a primer offering base perspectives on DSD and its manifestations, as well as a better understanding of how migration data is managed today to advance or hinder life for those on the move. Please alert us to any other publications we should include moving forward.

Berens, Jos, Nathaniel Raymond, Gideon Shimshon, Stefaan Verhulst, and Lucy Bernholz. “The Humanitarian Data Ecosystem: the Case for Collective Responsibility.” Stanford Center on Philanthropy and Civil Society, 2017.

  • The authors explore the challenges to, and potential solutions for, the responsible use of digital data in the context of international humanitarian action. Data governance is related to DSD because it oversees how the information extracted from an individual—understood by DSD as an extension of oneself in the digital sphere—is handled.
  • They argue that in the digital age, the basic service provision activities of NGOs and aid organizations have become data collection processes. However, the ecosystem of actors is “uncoordinated,” creating inefficiencies and vulnerabilities in the humanitarian space.
  • The paper presents a new framework for responsible data use in the humanitarian domain. The authors advocate for data users to follow three steps: 
  1. “[L]ook beyond the role they take up in the ‘data-lifecycle’ and consider previous and following steps and roles;
  2. Develop sound data responsibility strategies not only to prevent harm to their own operations but also to other organizations in the ‘data-lifecycle;’ and, 
  3. Collaborate with and learn from other organizations, both in the humanitarian field and beyond, to establish broadly supported guidelines and standards for humanitarian data use.”

Currion, Paul. “The Refugee Identity.” Caribou Digital (via Medium), March 13, 2018.

  • Developed as part of a DFID-funded initiative, this essay outlines the Data Requirements for Service Delivery within Refugee Camps project that investigated current data standards and design of refugee identity systems.
  • Currion finds that since “the digitisation of aid has already begun…aid agencies must therefore pay more attention to the way in which identity systems affect the lives and livelihoods of the forcibly displaced, both positively and negatively.” He argues that an interoperable digital identity for refugees is essential to access financial, social, and material resources while on the move but also to tap into IoT services.
  • However, many refugees are wary of digital tracking and data collection services that could further marginalize them as they search for safety. At present, there are no sector-level data standards for how refugee identity data are collected, combined, and centralized. How can regulators balance data protection with the data governments and NGOs require to serve refugees in the ways they want, while upholding refugees’ DSD?
  • Currion argues that a Responsible Data approach, as opposed to a process defined by a Data Minimization principle, provides “useful guidelines” but notes that data responsibility “still needs to be translated into organizational policy, then into institutional processes, and finally into operational practice.” He further adds that “the digitization of aid, if approached from a position that empowers the individual as much as the institution, offers a chance to give refugees back their voices.”

Dekker, Rianne, Paul Koot, S. Ilker Birbil, and Mark van Embden Andres. “Co-designing algorithms for governance: Ensuring responsible and accountable algorithmic management of refugee camp supplies.” Big Data & Society, April 2022.

  • While recent literature has looked at the negative impacts of big data and algorithms in public governance, claiming they may reinforce existing biases and defy scrutiny by public officials, this paper argues that designing algorithms with relevant government and society stakeholders might be a way to make them more accountable and transparent. 
  • It presents a case study of the development of an algorithmic tool to estimate the populations of refugee camps to manage the delivery of emergency supplies. The algorithms included in this tool were co-designed with relevant stakeholders. 
  • This may provide a way to uphold DSD by contributing to the “accountability of the algorithm by making the estimations transparent and explicable to its users.”
  • The authors found that the co-design process enabled better accuracy and responsibility and fostered collaboration between partners, creating a suitable purpose for the tool and making the algorithm understandable to its users. This enabled algorithmic accountability. 
  • The authors note, however, that the beneficiaries of the tools were not included in the design process, limiting the legitimacy of the initiative. 

European Migration Network. “The Use of Digitalisation and Artificial Intelligence in Migration Management.” EMN-OECD Inform Series, February 2022.

  • This paper explores the role of new digital technologies in the management of migration and asylum, focusing specifically on where digital technologies, such as online portals, blockchain, and AI-powered speech and facial recognition systems, are being used across Europe in processes such as obtaining visas, claiming asylum, gaining citizenship, and managing border control.
  • Further, it points to friction between the GDPR and new technologies like blockchain—which by design does not allow for the right to be forgotten—and potential workarounds, such as two-step pseudonymisation (a hedged sketch of the idea follows this list).
  • As well, it highlights steps taken to oversee and open up data protection processes for immigration. Austria, Belgium, and France have begun to conduct Data Protection Impact Assessments; France has a portal that allows one to request the right to be forgotten; Ireland informs online service users about how their data can be shared with or used by third-party agencies; and Spain outlines which personal data are used in immigration as per the Registry Public Treatment Activities.
  • Lastly, the paper points out next steps for policy development that upholds DSD, including universal access and digital literacy, trust in digital systems, willingness for government digital transformations, and bias and risk reduction.
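
A hedged illustration of the two-step pseudonymisation idea mentioned above: personal identifiers are first replaced with a keyed pseudonym, and only the pseudonym is written to the immutable ledger, while the key and the pseudonym-to-person mapping are kept off-chain. Deleting the off-chain key and mapping then makes the on-chain record practically unlinkable, approximating the right to be forgotten. The Python sketch below shows one way such a scheme could work; it is not the design referenced in the EMN/OECD inform, and all names and identifiers are hypothetical.

```python
# Minimal sketch of two-step pseudonymisation (assumed design, for illustration).
import hashlib
import hmac
import secrets
from typing import Optional


class TwoStepPseudonymiser:
    def __init__(self) -> None:
        # Step 1 material: secret key and mapping, both kept OFF-chain.
        self._key: Optional[bytes] = secrets.token_bytes(32)
        self._mapping: dict[str, str] = {}  # pseudonym -> original identifier

    def pseudonymise(self, identifier: str) -> str:
        # Step 2: derive a keyed pseudonym; only this value is stored on-chain.
        pseudonym = hmac.new(self._key, identifier.encode(), hashlib.sha256).hexdigest()
        self._mapping[pseudonym] = identifier
        return pseudonym

    def resolve(self, pseudonym: str) -> Optional[str]:
        # Re-identification is only possible while the off-chain mapping exists.
        return self._mapping.get(pseudonym)

    def forget(self) -> None:
        # "Right to be forgotten": destroy the key and mapping so on-chain
        # pseudonyms can no longer be linked back to a person.
        self._key = None
        self._mapping.clear()


if __name__ == "__main__":
    p = TwoStepPseudonymiser()
    on_chain_value = p.pseudonymise("applicant-12345")  # hypothetical identifier
    print(on_chain_value, "->", p.resolve(on_chain_value))
    p.forget()
    print(p.resolve(on_chain_value))  # None: the link is gone
```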

Martin, Aaron, Gargi Sharma, Siddharth Peter de Souza, Linnet Taylor, Boudewijn van Eerd, Sean Martin McDonald, Massimo Marelli, Margie Cheesman, Stephan Scheel, and Huub Dijstelbloem. “Digitisation and Sovereignty in Humanitarian Space: Technologies, Territories and Tensions.” Geopolitics (2022): 1-36.

  • This paper explores how digitisation and datafication are reshaping sovereign authority, power, and control in humanitarian spaces.
  • Building on the notion that technology is political, Martin et al. discuss three cases in which digital tools, powered by partnerships between international organizations and NGOs on the one hand and private firms such as Palantir and Facebook on the other, have raised concerns that data could be “repurposed” to undermine national sovereignty and to distort humanitarian aims with for-profit motivations.
  • The authors draw attention to how cyber dependencies threaten international humanitarian organizations’ purported digital sovereignty. They touch on the tensions between national and digital sovereignty and self-governance.
  • The paper further argues that the rise of digital technologies in the governance of international mobility and migration policies “has all kinds of humanitarian and security consequences,” including (but not limited to) surveillance, privacy infringement, profiling, selection, inclusion/exclusion, and access barriers. Specifically, Scheel introduces the notion of function creep—the use of digital data beyond initially defined purposes—and emphasizes its common use in the context of migration as part “of the modus operandi of sovereign power.”

McAuliffe, Marie, Jenna Blower, and Ana Beduschi. “Digitalization and Artificial Intelligence in Migration and Mobility: Transnational Implications of the COVID-19 Pandemic.” Societies 11, no. 135 (2021): 1-13.

  • This paper critically examines the implications of intensifying digitalization and AI for migration and mobility systems in a post-COVID transnational context. 
  • The authors first situate digitalization and AI in migration by analyzing their uptake throughout the Migration Cycle, i.e., to verify identities and visas, enable “smart” border processing, and assess travelers’ adherence to legal frameworks. They then evaluate the current challenges and opportunities that deepening digitalization due to COVID-19 has brought to migrants and migration systems. For example, contact tracing, infection screening, and quarantining procedures generate increased data about individuals and are meant, by design, to track and trace people, which raises concerns about migrants’ safety, privacy, and autonomy.
  • This essay argues that recent changes show the need for further computational advances that incorporate human rights throughout the design and development stages, “to mitigate potential risks to migrants’ human rights.” AI remains severely flawed in decision-making around minority groups because of biased training data and could further marginalize vulnerable populations, while intrusive data collection for public health could erode the universal right to privacy. Leaving migrants at the mercy of black-box AI systems fails to uphold their right to DSD because it forces them to relinquish their agency and power to an opaque system.

Ponzanesi, Sandra. “Migration and Mobility in a Digital Age: (Re)Mapping Connectivity and Belonging.” Television & New Media 20, no. 6 (2019): 547-557.

  • This article explores the role of new media technologies in rethinking the dynamics of migration and globalization by focusing on the role of migrant users as “connected” and active participants, as well as “screened” and subject to biometric datafication, visualization, and surveillance.
  • Elaborating on concepts such as “migration” and “mobility,” the article analyzes the paradoxes of intermittent connectivity and troubled belonging, which are seen as relational definitions that are always fluid, negotiable, and porous.
  • It states that a city’s digital infrastructures are “complex sociotechnical systems” that have a functional side related to access and connectivity and a performative side where people engage with technology. Digital access and action represent areas of individual and collective manifestations of DSD. For migrants, gaining digital access and skills and “enacting citizenship” are important for resettlement. Ponzanesi advocates for further research both from the bottom up, drawing on migrants’ experiences of using technology to resettle and stay in contact with their homelands, and from the top down, examining datafication, surveillance, and digital/e-governance as part of the larger technology application ecosystem, in order to understand contemporary processes and problems of migration.

Remolina, Nydia, and Mark James Findlay. “The Paths to Digital Self-Determination — A Foundational Theoretical Framework.” SMU Centre for AI & Data Governance Research Paper No. 03 (2021): 1-34.

  • Remolina and Findlay stress that self-determination is the vehicle by which people “decide their own destiny in the international order.” The ability to make decisions empowers people to take control of their own lives and motivates them to pursue a chosen course of action. Collective action, or the ability to make decisions as part of a group—be it based on ethnicity, nationality, shared viewpoints, etc.—further reinforces this sense of agency.
  • The authors discuss how the European Union and European Court of Human Rights’ “principle of subsidiarity” aligns with self-determination because it advocates for power to be placed at the lowest level possible to preserve bottom-up agency with a “reasonable level of efficiency.” In practice, the results of subsidiarity have been disappointing.
  • The paper provides examples of indigenous populations’ fight for self-determination, offline and online. Here, digital self-determination refers to the challenges indigenous peoples face in accessing, and benefiting from, governments’ growing use of technology for innovative solutions, owing to a lack of physical infrastructure rooted in structural and social inequities between settler and indigenous communities.
  • Treating self-determination—and, by extension, digital self-determination—as a human right, the report investigates how autonomy, sovereignty, the legal definition of a ‘right,’ inclusion, agency, data governance, data ownership, data control, and data quality each bear on DSD.
  • Lastly, the paper presents a foundational theoretical framework that goes beyond just protecting personal data and privacy. Understanding that DSD “cannot be detached from duties for responsible data use,” the authors present a collective and an individual dimension to DSD. They extend the individual dimension of DSD to include both ‘my data’ and ‘data about me’ that can be used to influence a person’s actions through micro-targeting and nudge techniques. They update the collective dimension of DSD to include the views and influences of organizations, businesses, and communities online, and call for a better way of visualizing the ‘social self’ and its control over data.

Ziebart, Astrid, and Jessica Bither. “AI, Digital Identities, Biometrics, Blockchain: A Primer on the Use of Technology in Migration Management.” Migration Strategy Group on International Cooperation and Development, June 2020.

  • Ziebart and Bither note the implications of increasingly sophisticated use of technology and data collection by governments with respect to their citizens. They note that migrants and refugees “often are exposed to particular vulnerabilities” during these processes and underscore the need to bring migrants into data gathering and use policy conversations.  
  • The authors discuss the promise of technology—e.g., predicting migration through AI-powered analyses, deploying technologies that reduce friction in asylum-seeking processes, and providing digital identities for those on the move. However, they stress the need to combine these tools with informational self-determination that allows migrants to own and control what data they share and how and where the data are used.
  • The migration and refugee policy space faces issues of “tech evangelism,” where technologies are being employed just because they exist, rather than because they serve an actual policy need or provide an answer to a particular policy question. This supply-driven policy implementation signals the need for more migrant voices to inform policymakers on what tools are actually useful for the migratory experience. In order to advance the digital agency of migrants, the paper offers recommendations for some of the ethical challenges these technologies might pose and ultimately advocates for greater participation of migrants and refugees in devising technology-driven policy instruments for migration issues.

On-the-go interesting resources 

  • Empowering Digital Self-Determination, mediaX at Stanford University: This short video presents definitions of DSD, digital personhood, identity, and privacy, along with an overview of their applications across ethics, law, and the private sector.
  • Digital Self-Determination — A Living Syllabus: This syllabus and assorted materials were created and curated from the 2021 Research Sprint run by the Digital Asia Hub and the Berkman Klein Center for Internet & Society at Harvard University. It introduces learners to the fundamentals of DSD across a variety of industries to enrich understanding of its existing and potential applications.
  • Digital Self-Determination Wikipedia Page: This Wikipedia page was developed by the students who took part in the Berkman Klein Center research sprint on digital self-determination. It provides a comprehensive overview of DSD definitions and its key elements, which include human-centered design, robust privacy mandates and data governance, and control over data use to give data subjects the ability to choose how algorithms manipulate their data for autonomous decision-making.
  • Roger Dubach on Digital Self-Determination: This short video presents DSD in the public sector, arguing that the aim is not to create a ‘data-protected’ world but rather to understand how governments can use data efficiently while protecting privacy. Note: this video is part of the Living Syllabus course materials (Digital Self-Determination/Module 1: Beginning Inquiries).

Co-designing algorithms for governance: Ensuring responsible and accountable algorithmic management of refugee camp supplies


Paper by Rianne Dekker et al: “There is increasing criticism on the use of big data and algorithms in public governance. Studies revealed that algorithms may reinforce existing biases and defy scrutiny by public officials using them and citizens subject to algorithmic decisions and services. In response, scholars have called for more algorithmic transparency and regulation. These are useful, but ex post solutions in which the development of algorithms remains a rather autonomous process. This paper argues that co-design of algorithms with relevant stakeholders from government and society is another means to achieve responsible and accountable algorithms that is largely overlooked in the literature. We present a case study of the development of an algorithmic tool to estimate the populations of refugee camps to manage the delivery of emergency supplies. This case study demonstrates how in different stages of development of the tool—data selection and pre-processing, training of the algorithm and post-processing and adoption—inclusion of knowledge from the field led to changes to the algorithm. Co-design supported responsibility of the algorithm in the selection of big data sources and in preventing reinforcement of biases. It contributed to accountability of the algorithm by making the estimations transparent and explicable to its users. They were able to use the tool for fitting purposes and used their discretion in the interpretation of the results. It is yet unclear whether this eventually led to better servicing of refugee camps…(More)”.

Google is using AI to better detect searches from people in crisis


Article by James Vincent: “In a personal crisis, many people turn to an impersonal source of support: Google. Every day, the company fields searches on topics like suicide, sexual assault, and domestic abuse. But Google wants to do more to direct people to the information they need, and says new AI techniques that better parse the complexities of language are helping.

Specifically, Google is integrating its latest machine learning model, MUM, into its search engine to “more accurately detect a wider range of personal crisis searches.” The company unveiled MUM at its I/O conference last year, and has since used it to augment search with features that try to answer questions connected to the original search.

In this case, MUM will be able to spot search queries related to difficult personal situations that earlier search tools could not, says Anne Merritt, a Google product manager for health and information quality.

“MUM is able to help us understand longer or more complex queries like ‘why did he attack me when i said i dont love him,’” Merritt told The Verge. “It may be obvious to humans that this query is about domestic violence, but long, natural-language queries like these are difficult for our systems to understand without advanced AI.”

Other examples of queries that MUM can react to include “most common ways suicide is completed” (a search Merritt says earlier systems “may have previously understood as information seeking”) and “Sydney suicide hot spots” (where, again, earlier responses would have likely returned travel information — ignoring the mention of “suicide” in favor of the more popular query for “hot spots”). When Google detects such crisis searches, it responds with an information box telling users “Help is available,” usually accompanied by a phone number or website for a mental health charity like Samaritans.
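Google has not published the internals of its MUM-based classifier, but the underlying task the article describes (deciding whether a query signals a personal crisis before choosing what to display) can be illustrated with off-the-shelf tools. The sketch below uses the Hugging Face transformers zero-shot classification pipeline; the model choice, labels, and threshold are illustrative assumptions, not Google’s setup.

```python
# Illustrative only: a generic zero-shot classifier standing in for a crisis-
# query detector. Google's production system is far more sophisticated; this
# sketch just shows the shape of the task the article describes.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

CANDIDATE_LABELS = ["suicide risk", "domestic violence", "sexual assault",
                    "ordinary information seeking"]
CRISIS_LABELS = {"suicide risk", "domestic violence", "sexual assault"}


def route_query(query: str, threshold: float = 0.6) -> str:
    """Return 'show_help_box' when the top label is a crisis category."""
    result = classifier(query, candidate_labels=CANDIDATE_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_label in CRISIS_LABELS and top_score >= threshold:
        return "show_help_box"  # e.g. surface a hotline alongside results
    return "normal_results"


if __name__ == "__main__":
    print(route_query("why did he attack me when i said i dont love him"))
    print(route_query("best hot spots to visit in sydney"))
```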

In addition to using MUM to respond to personal crises, Google says it’s also using an older AI language model, BERT, to better identify searches looking for explicit content like pornography. By leveraging BERT, Google says it’s “reduced unexpected shocking results by 30%” year-on-year. However, the company was unable to share absolute figures for how many “shocking results” its users come across on average, so while this is a comparative improvement, it gives no indication of how big or small the problem actually is.

Google is keen to tell you that AI is helping the company improve its search products — especially at a time when there’s a building narrative that “Google search is dying.” But integrating this technology comes with its downsides, too.

Many AI experts warn that Google’s increasing use of machine learning language models could surface new problems for the company, like introducing biases and misinformation into search results. AI systems are also opaque, offering engineers restricted insight into how they come to certain conclusions…(More)”.

Machine learning and phone data can improve targeting of humanitarian aid


Paper by Emily Aiken, Suzanne Bellue, Dean Karlan, Chris Udry & Joshua E. Blumenstock: “The COVID-19 pandemic has devastated many low- and middle-income countries, causing widespread food insecurity and a sharp decline in living standards. In response to this crisis, governments and humanitarian organizations worldwide have distributed social assistance to more than 1.5 billion people. Targeting is a central challenge in administering these programmes: it remains a difficult task to rapidly identify those with the greatest need given available data. Here we show that data from mobile phone networks can improve the targeting of humanitarian assistance. Our approach uses traditional survey data to train machine-learning algorithms to recognize patterns of poverty in mobile phone data; the trained algorithms can then prioritize aid to the poorest mobile subscribers. We evaluate this approach by studying a flagship emergency cash transfer program in Togo, which used these algorithms to disburse millions of US dollars worth of COVID-19 relief aid. Our analysis compares outcomes—including exclusion errors, total social welfare and measures of fairness—under different targeting regimes. Relative to the geographic targeting options considered by the Government of Togo, the machine-learning approach reduces errors of exclusion by 4–21%. Relative to methods requiring a comprehensive social registry (a hypothetical exercise; no such registry exists in Togo), the machine-learning approach increases exclusion errors by 9–35%. These results highlight the potential for new data sources to complement traditional methods for targeting humanitarian assistance, particularly in crisis settings in which traditional data are missing or out of date…(More)”.
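The mechanism the paper describes (training a model on a small ground-truth survey so it can predict poverty from phone-derived features, then ranking subscribers and directing transfers to the predicted poorest) can be sketched in a few lines. This is a hedged illustration with synthetic data, made-up feature names, and a generic gradient-boosting model, not the authors’ actual pipeline, which draws on rich call-detail-record features from Togo.

```python
# Hedged sketch of survey-trained, phone-data-based aid targeting.
# All data and feature names are synthetic/hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000  # subscribers with both survey and phone data

# Hypothetical phone-usage features aggregated per subscriber.
phone_features = pd.DataFrame({
    "calls_per_day": rng.gamma(2.0, 2.0, n),
    "sms_per_day": rng.gamma(1.5, 3.0, n),
    "mobile_money_txns": rng.poisson(3, n),
    "nighttime_activity_share": rng.uniform(0, 1, n),
})
# Survey-measured welfare (e.g. daily consumption) for the same subscribers.
survey_consumption = (
    0.8 * phone_features["mobile_money_txns"]
    + 0.5 * phone_features["calls_per_day"]
    + rng.normal(0, 1.0, n)
)

# Train on the surveyed subsample; the held-out set stands in for the wider
# subscriber base, for whom only phone features are available.
X_train, X_rest, y_train, _ = train_test_split(
    phone_features, survey_consumption, test_size=0.5, random_state=0
)
model = GradientBoostingRegressor().fit(X_train, y_train)

# Rank the wider base by predicted welfare and flag the poorest share
# that the program budget can cover.
predicted_welfare = model.predict(X_rest)
coverage = 0.29  # illustrative budget share, not the Togo program's figure
cutoff = np.quantile(predicted_welfare, coverage)
eligible = predicted_welfare <= cutoff
print(f"Flagged {eligible.sum()} of {len(eligible)} subscribers for transfers.")
```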

Cities4Cities: new matchmaking platform launched to support Ukrainian local and regional authorities


Council of Europe: “A new matchmaking online platform, Cities4Cities, developed to help Ukrainian cities was launched in Strasbourg today. The platform is a free online exchange tool; it allows local authorities in Ukraine and in the rest of Europe to share their needs and offers related to local infrastructure and get in direct contact to receive practical help.

The platform was launched at the initiative of Bernd Vöhringer (Germany, EPP/CCE), President of the Chamber of Local Authorities of the Congress of Local and Regional Authorities and Mayor of the city of Sindelfingen, with the support of the Congress of Local and Regional Authorities of the Council of Europe.

Bernd Vöhringer explained that the need to co-ordinate support action coming from the local level became very clear to him after a visit at the end of March to Chełm, the Polish twin city of Sindelfingen, situated near the Ukrainian border, where he saw first-hand the “urgent need for material, financial and human resources support”. “The platform will be a place to match the demands/needs of Ukrainian cities with the capacity, know-how and supply of other European cities,” he noted. “It will enable faster and more efficient support to our Ukrainian friends and partners”.

Secretary General of the Congress, Andreas Kiefer, said that the Congress “welcomes the efforts of local and regional authorities of the member States of the Council of Europe and their associations in support for their Ukrainian counterparts and citizens”, and the Cities4Cities initiative is an example of such result-oriented solidarity action at the local level. “In the recently adopted Declaration the Congress stressed that democracy, multilevel governance and human rights are stronger than war, and reiterated its firm stand by Ukraine and its people”, Kiefer concluded.

Ambassador Borys Tarasyuk, Permanent Representative of Ukraine to the Council of Europe, stressed that the initiative will serve well the purpose of providing practical assistance to the most vulnerable, amidst the immense human tragedy and challenges, and will complement the political support and solidarity expressed by the Congress of Local and Regional Authorities and the Council of Europe as a whole…(More)”.

Open Data for Social Impact Framework


Framework by Microsoft: “The global pandemic has shown us the important role of data in understanding, assessing, and taking action to solve the challenges created by COVID-19. However, nearly all organizations, large and small, still struggle to make data relevant to their work. Despite the value data provides, many organizations fail to harness its power to improve outcomes.

Part of this struggle stems from the “data divide” – the gap that exists between countries and organizations that have effective access to data to help them innovate and solve problems and those that do not. To close this divide, Microsoft launched the Open Data Campaign in 2020 to help realize the promise of more open data and data collaborations that drive innovation.

One of the key lessons we’ve learned from the Campaign and the work we’ve been doing with our partners, the Open Data Institute and The GovLab, is that the ability to access and use data to improve outcomes involves much more than technological tools and the data itself. It is also important to be able to leverage and share the experiences and practices that promote effective data collaboration and decision-making. This is especially true when it comes to working with governments, multi-lateral organizations, nonprofits, research institutions, and others who seek to open and reuse data to address important social issues, particularly those faced by developing countries.

Put another way, just having access to data and technology does not magically create value and improve outcomes. Making the most of open data and data collaboration requires thinking about how an organization’s leadership can commit to making data useful towards its mission, defining the questions it wants to answer with data, identifying the skills its team needs to use data, and determining how best to develop and establish trust among collaborators and communities served to derive more insight and benefit from data.

The Open Data for Social Impact Framework is a tool leaders can use to put data to work to solve the challenges most important to them. Recognizing that not all data can be made publicly accessible, we see the tremendous benefits that can come from advancing more open data, whether that takes shape as trusted data collaborations or truly open and public data. We use the phrase ‘social impact’ to mean a positive change towards addressing a societal problem, such as reducing carbon emissions, closing the broadband gap, building skills for jobs, and advancing accessibility and inclusion.

We believe in the limitless opportunities that opening, sharing, and collaborating around data can create to draw out new insights, make better decisions, and improve efficiencies when tackling some of the world’s most pressing challenges….(More)”.

Digitisation and Sovereignty in Humanitarian Space: Technologies, Territories and Tensions


Paper by Aaron Martin: “Debates are ongoing on the limits of – and possibilities for – sovereignty in the digital era. While most observers spotlight the implications of the Internet, cryptocurrencies, artificial intelligence/machine learning and advanced data analytics for the sovereignty of nation states, a critical yet under-examined question concerns what digital innovations mean for authority, power and control in the humanitarian sphere in which different rules, values and expectations are thought to apply. This forum brings together practitioners and scholars to explore both conceptually and empirically how digitisation and datafication in aid are (re)shaping notions of sovereign power in humanitarian space. The forum’s contributors challenge established understandings of sovereignty in new forms of digital humanitarian action. Among other focus areas, the forum draws attention to how cyber dependencies threaten international humanitarian organisations’ purported digital sovereignty. It also contests the potential of technologies like blockchain to revolutionise notions of sovereignty in humanitarian assistance and hypothesises about the ineluctable parasitic qualities of humanitarian technology. The forum concludes by proposing that digital technologies deployed in migration contexts might be understood as ‘sovereignty experiments’. We invite readers from scholarly, policy and practitioner communities alike to engage closely with these critical perspectives on digitisation and sovereignty in humanitarian space….(More)”.

‘It’s like the wild west’: Data security in frontline aid


A Q&A on how aid workers handle sensitive data by Irwin Loy: “The cyber-attack on the International Committee of the Red Cross, discovered in January, was the latest high-profile breach to connect the dots between humanitarian data risks and real-world harms. Personal information belonging to more than 515,000 people was exposed in what the ICRC said was a “highly sophisticated” hack using tools employed mainly by states or state-backed groups.

But there are countless other examples of how the reams of data collected from some of the world’s most vulnerable communities can be compromised, misused, and mishandled.

“The biggest frontier in the humanitarian sector is the weaponisation of humanitarian data,” said Olivia Williams, a former aid worker who now specialises in information security at Apache iX, a UK-based defence consultancy.

She recently completed research – including surveys and interviews with more than 180 aid workers from 28 countries – examining how data is handled, and what agencies and frontline staff say they do to protect it.

Sensitive data is often collected on personal devices, sent over hotel WiFi, scrawled on scraps of paper then photographed and sent to headquarters via WhatsApp, or simply emailed and widely shared with partner organisations, aid workers told her.

The organisational security and privacy policies meant to guide how data is stored and protected? Impractical, irrelevant, and often ignored, Williams said.

Some frontline staff are taking information security into their own hands, devising their own systems of coding, filing, and securing data. One respondent kept paper files locked in their bedroom.

Aid workers from dozens of major UN agencies, NGOs, Red Cross organisations, and civil society groups took part in the survey.

Williams’ findings echo her own misgivings about data security in her previous deployments to crisis zones from northern Iraq to Nepal and the Philippines. Aid workers are increasingly alarmed about how data is handled, she said, while their employers are largely “oblivious” to what actually happens on the ground.

Williams spoke to The New Humanitarian about the unspoken power imbalance in data collection, why there’s so much data, and what aid workers can do to better protect it….(More)”.

New and updated building footprints


Bing Blogs: “…The Microsoft Maps Team has been leveraging that investment to identify map features at scale and produce high-quality building footprint data sets with the overall goal to add to the OpenStreetMap and MissingMaps humanitarian efforts.

As of this post, the following locations are available and Microsoft offers access to this data under the Open Data Commons Open Database License (ODbL).

  • United States of America: 129.6 million buildings
  • Nigeria and Kenya: 50.5 million buildings
  • South America: 44.5 million buildings
  • Uganda and Tanzania: 17.9 million buildings
  • Canada: 11.8 million buildings
  • Australia: 11.3 million buildings

As you might expect, the vintage of the footprints depends on the collection date of the underlying imagery. Bing Maps Imagery is a composite of multiple sources with different capture dates (ranging 2012 to 2021). To ensure we are setting the right expectation for that building, each footprint has a capture date tag associated if we could deduce the vintage of imagery used…(More)”
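The post does not show how to consume the releases, but since the footprints are distributed as (zipped) GeoJSON under the ODbL, a short hedged sketch of loading one file with GeoPandas may be useful. The file name and the capture-date column name below are assumptions for illustration, not the dataset’s confirmed schema; check the release documentation for the real layout.

```python
# Hedged sketch: inspecting one of the building-footprint releases with GeoPandas.
import geopandas as gpd

# The releases are typically distributed as (zipped) GeoJSON, which GeoPandas
# can read directly; the file name here is hypothetical.
footprints = gpd.read_file("Alabama.geojson")

print(f"{len(footprints):,} building polygons loaded")
print(footprints.geometry.head())

# If a capture-date tag is present, it appears as an attribute column;
# the column name below is illustrative only.
if "capture_dates_range" in footprints.columns:
    print(footprints["capture_dates_range"].value_counts().head())
```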