Digitisation and Sovereignty in Humanitarian Space: Technologies, Territories and Tensions


Paper by Aaron Martin: “Debates are ongoing on the limits of – and possibilities for – sovereignty in the digital era. While most observers spotlight the implications of the Internet, cryptocurrencies, artificial intelligence/machine learning and advanced data analytics for the sovereignty of nation states, a critical yet under-examined question concerns what digital innovations mean for authority, power and control in the humanitarian sphere in which different rules, values and expectations are thought to apply. This forum brings together practitioners and scholars to explore both conceptually and empirically how digitisation and datafication in aid are (re)shaping notions of sovereign power in humanitarian space. The forum’s contributors challenge established understandings of sovereignty in new forms of digital humanitarian action. Among other focus areas, the forum draws attention to how cyber dependencies threaten international humanitarian organisations’ purported digital sovereignty. It also contests the potential of technologies like blockchain to revolutionise notions of sovereignty in humanitarian assistance and hypothesises about the ineluctable parasitic qualities of humanitarian technology. The forum concludes by proposing that digital technologies deployed in migration contexts might be understood as ‘sovereignty experiments’. We invite readers from scholarly, policy and practitioner communities alike to engage closely with these critical perspectives on digitisation and sovereignty in humanitarian space….(More)”.

‘It’s like the wild west’: Data security in frontline aid


A Q&A on how aid workers handle sensitive data by Irwin Loy: “The cyber-attack on the International Committee of the Red Cross, discovered in January, was the latest high-profile breach to connect the dots between humanitarian data risks and real-world harms. Personal information belonging to more than 515,000 people was exposed in what the ICRC said was a “highly sophisticated” hack using tools employed mainly by states or state-backed groups.

But there are countless other examples of how the reams of data collected from some of the world’s most vulnerable communities can be compromised, misused, and mishandled.

“The biggest frontier in the humanitarian sector is the weaponisation of humanitarian data,” said Olivia Williams, a former aid worker who now specialises in information security at Apache iX, a UK-based defence consultancy.

She recently completed research – including surveys and interviews with more than 180 aid workers from 28 countries – examining how data is handled, and what agencies and frontline staff say they do to protect it.

Sensitive data is often collected on personal devices, sent over hotel WiFi, scrawled on scraps of paper then photographed and sent to headquarters via WhatsApp, or simply emailed and widely shared with partner organisations, aid workers told her.

The organisational security and privacy policies meant to guide how data is stored and protected? Impractical, irrelevant, and often ignored, Williams said.

Some frontline staff are taking information security into their own hands, devising their own systems of coding, filing, and securing data. One respondent kept paper files locked in their bedroom.

Aid workers from dozens of major UN agencies, NGOs, Red Cross organisations, and civil society groups took part in the survey.

Williams’ findings echo her own misgivings about data security in her previous deployments to crisis zones from northern Iraq to Nepal and the Philippines. Aid workers are increasingly alarmed about how data is handled, she said, while their employers are largely “oblivious” to what actually happens on the ground.

Williams spoke to The New Humanitarian about the unspoken power imbalance in data collection, why there’s so much data, and what aid workers can do to better protect it….(More)”.

New and updated building footprints


Bing Blogs: “…The Microsoft Maps Team has been leveraging that investment to identify map features at scale and produce high-quality building footprint data sets with the overall goal to add to the OpenStreetMap and MissingMaps humanitarian efforts.

As of this post, the following locations are available and Microsoft offers access to this data under the Open Data Commons Open Database License (ODbL).

Country/Region              Million buildings
United States of America    129.6
Nigeria and Kenya           50.5
South America               44.5
Uganda and Tanzania         17.9
Canada                      11.8
Australia                   11.3

As you might expect, the vintage of the footprints depends on the collection date of the underlying imagery. Bing Maps Imagery is a composite of multiple sources with different capture dates (ranging from 2012 to 2021). To set the right expectations, each footprint has a capture-date tag associated with it whenever we could deduce the vintage of the imagery used…(More)”
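The released footprint files are line-delimited GeoJSON. Below is a minimal Python sketch of loading one file and inspecting the capture-date tags described above; the file name and the "capture_dates_range" property name are assumptions that may differ by release.

```python
import json

import geopandas as gpd
from shapely.geometry import shape

# Hypothetical file name; releases are line-delimited GeoJSON (one feature per line).
PATH = "Australia.geojsonl"

records = []
with open(PATH, encoding="utf-8") as f:
    for line in f:
        feature = json.loads(line)
        props = feature.get("properties", {})
        records.append({
            "geometry": shape(feature["geometry"]),
            # Assumed property name for the capture-date tag mentioned in the post.
            "capture_dates": props.get("capture_dates_range"),
        })

gdf = gpd.GeoDataFrame(records, crs="EPSG:4326")
print(len(gdf), "footprints loaded")
print(gdf["capture_dates"].value_counts(dropna=False).head())
```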

Data Re-Use and Collaboration for Development


Stefaan G. Verhulst at Data & Policy: “It is often pointed out that we live in an era of unprecedented data, and that data holds great promise for development. Yet equally often overlooked is the fact that, as in so many domains, there exist tremendous inequalities and asymmetries in where this data is generated, and how it is accessed. The gap that separates high-income from low-income countries is among the most important (or at least most persistent) of these asymmetries…

Data collaboratives are an emerging form of public-private partnership that, when designed responsibly, can offer a potentially innovative solution to this problem. Data collaboratives offer at least three key benefits for developing countries:

1. Cost Efficiencies: Data and data analytic capacity are often hugely expensive and beyond the limited capacities of many low-income countries. Data reuse, facilitated by data collaboratives, can bring down the cost of data initiatives for development projects.

2. Fresh insights for better policy: Combining data from various sources by breaking down silos has the potential to lead to new and innovative insights that can help policy makers make better decisions. Digital data can also be triangulated with existing, more traditional sources of information (e.g., census data) to generate new insights and help verify the accuracy of information (see the sketch after this list).

3. Overcoming inequalities and asymmetries: Social and economic inequalities, both within and among countries, are often mapped onto data inequalities. Data collaboratives can help ease some of these inequalities and asymmetries, for example by allowing costs and analytical tools and techniques to be pooled. Cloud computing, which allows information and technical tools to be easily shared and accessed, is an important example: it can play a vital role in enabling the transfer of skills and technologies between low-income and high-income countries…(More)”. See also: Reusing data responsibly to achieve development goals (OECD Report).
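The triangulation mentioned in point 2 can be sketched in a few lines of Python. Everything here is illustrative: the file names, the columns, and the premise that the digital estimate comes from mobile-phone activity are all assumptions.

```python
import pandas as pd

# Hypothetical inputs: a novel digital estimate of population per district
# (e.g., derived from mobile-phone activity) and a traditional census count.
digital = pd.read_csv("mobile_estimates.csv")  # columns: district, est_pop
census = pd.read_csv("census_2020.csv")        # columns: district, census_pop

merged = digital.merge(census, on="district", how="inner")

# High correlation lends confidence to the novel source; large per-district
# gaps flag areas where one source (or both) should be investigated.
corr = merged["est_pop"].corr(merged["census_pop"])
merged["rel_gap"] = (merged["est_pop"] - merged["census_pop"]).abs() / merged["census_pop"]

print(f"Pearson correlation: {corr:.2f}")
print(merged.sort_values("rel_gap", ascending=False).head())
```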

Making data for good better


Article by Caroline Buckee, Satchit Balsari, and Andrew Schroeder: “…Despite the long-standing excitement about the potential for digital tools, Big Data and AI to transform our lives, these innovations – with some exceptions – have so far had little impact on the greatest public health emergency of our time.

Attempts to use digital data streams to rapidly produce public health insights that were not only relevant for local contexts in cities and countries around the world, but also available to the decision makers who needed them, exposed enormous gaps across the translational pipeline. The insights from novel data streams that could help drive precise, impactful health programs and bring effective aid to communities found limited use among public health and emergency response systems. We share here our experience from the COVID-19 Mobility Data Network (CMDN), now Crisis Ready (crisisready.io), a global collaboration of researchers, mostly infectious disease epidemiologists and data scientists, who served as trusted intermediaries between technology companies willing to share vast amounts of digital data and policy makers struggling to incorporate insights from these novel data streams into their decision making. Through our experience with the Network, and using human mobility data as an illustrative example, we recognize three sets of barriers to the successful application of large digital datasets for public good.

First, in the absence of pre-established working relationships with technology companies and data brokers, the data remain primarily confined within private circuits of ownership and control. During the pandemic, data sharing agreements between large technology companies and researchers were hastily cobbled together, often without the right kind of domain expertise in the mix. Second, the lack of standardization, interoperability, and information on the uncertainty and biases associated with these data necessitated complex analytical processing by highly specialized domain experts. And finally, local public health departments, understandably unfamiliar with these novel data streams, had neither the bandwidth nor the expertise to sift noise from signal. Ultimately, most efforts did not yield consistently useful information for decision making, particularly in low-resource settings, where capacity limitations in the public sector are most acute…(More)”.
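To give a concrete flavour of the "complex analytical processing" the authors describe, the sketch below derives a baseline-normalised change in movement from a daily mobility table, a typical first step with such data. The file name and columns are hypothetical; real feeds arrive in many incompatible schemas, which is exactly the standardisation problem noted above.

```python
import pandas as pd

# Hypothetical daily mobility table: one row per (region, date) with the
# number of trips observed that day.
df = pd.read_csv("mobility_daily.csv", parse_dates=["date"])  # region, date, trips

# Baseline: median daily trips per region over a pre-crisis window.
pre_crisis = df[df["date"] < "2020-03-01"]
baseline = (
    pre_crisis.groupby("region")["trips"]
    .median()
    .rename("baseline_trips")
    .reset_index()
)
df = df.merge(baseline, on="region")

# Percent change vs. baseline: the derived indicator decision makers need,
# rather than the raw feed.
df["pct_change"] = 100 * (df["trips"] - df["baseline_trips"]) / df["baseline_trips"]

# Regions with few observed devices are noisy and biased; flag them rather
# than report a spuriously precise number.
df["low_confidence"] = df["baseline_trips"] < 50

print(df.tail())
```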

Nonprofit Websites Are Riddled With Ad Trackers


Article by Alfred Ng and Maddy Varner: “Last year, nearly 200 million people visited the website of Planned Parenthood, a nonprofit that many people turn to for very private matters like sex education, access to contraceptives, and access to abortions. What those visitors may not have known is that as soon as they opened plannedparenthood.org, some two dozen ad trackers embedded in the site alerted a slew of companies whose business is not reproductive freedom but gathering, selling, and using browsing data.

The Markup ran Planned Parenthood’s website through our Blacklight tool and found 28 ad trackers and 40 third-party cookies tracking visitors, in addition to so-called “session recorders” that could be capturing the mouse movements and keystrokes of people visiting the homepage in search of things like information on contraceptives and abortions. The site also contained trackers that tell Facebook and Google if users visited the site.

The Markup’s scan found Planned Parenthood’s site communicating with companies like Oracle, Verizon, LiveRamp, TowerData, and Quantcast—some of which have made a business of assembling and selling access to masses of digital data about people’s habits.

Katie Skibinski, vice president for digital products at Planned Parenthood, said the data collected on its website is “used only for internal purposes by Planned Parenthood and our affiliates,” and the company doesn’t “sell” data to third parties.

“While we aim to use data to learn how we can be most impactful, at Planned Parenthood, data-driven learning is always thoughtfully executed with respect for patient and user privacy,” Skibinski said. “This means using analytics platforms to collect aggregate data to gather insights and identify trends that help us improve our digital programs.”

Skibinski did not dispute that the organization shares data with third parties, including data brokers.

A Blacklight scan of Planned Parenthood Gulf Coast—a localized website specifically for people in the Gulf region, including Texas, where abortion has been essentially outlawed—churned up similar results.

Planned Parenthood is not alone when it comes to nonprofits, some operating in sensitive areas like mental health and addiction, gathering and sharing data on website visitors.

Using our Blacklight tool, The Markup scanned more than 23,000 websites of nonprofit organizations, including those belonging to abortion providers and nonprofit addiction treatment centers. The Markup used the IRS’s nonprofit master file to identify nonprofits that have filed a tax return since 2019 and that the agency categorizes as focusing on areas like mental health and crisis intervention, civil rights, and medical research. We then examined each nonprofit’s website as publicly listed in GuideStar. We found that about 86 percent of them had third-party cookies or tracking network requests. By comparison, when The Markup did a survey of the top 80,000 websites in 2020, we found 87 percent used some type of third-party tracking.
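Blacklight itself drives a full headless browser, but the core of a tracking-network scan can be sketched statically: fetch a page, collect the domains of the third-party resources it references, and match them against a known-tracker list. The list below is a tiny illustrative sample, and a static scan like this misses scripts injected at runtime.

```python
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

# Tiny illustrative sample; real scanners match against large curated lists.
TRACKER_DOMAINS = {"doubleclick.net", "facebook.net", "google-analytics.com"}

def third_party_trackers(url: str) -> set[str]:
    """Return known tracker domains referenced in a page's static HTML."""
    page_host = urlparse(url).netloc
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    found = set()
    for tag in soup.find_all(["script", "img", "iframe"], src=True):
        host = urlparse(tag["src"]).netloc
        if host and host != page_host:
            for tracker in TRACKER_DOMAINS:
                # Match the tracker domain or any of its subdomains.
                if host == tracker or host.endswith("." + tracker):
                    found.add(tracker)
    return found

print(third_party_trackers("https://example.org"))
```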

About 11 percent of the 23,856 nonprofit websites we scanned had a Facebook pixel embedded, while 18 percent used the Google Analytics “Remarketing Audiences” feature.

The Markup found that 439 of the nonprofit websites loaded scripts called session recorders, which can monitor visitors’ clicks and keystrokes. Eighty-nine of those were for websites that belonged to nonprofits that the IRS categorizes as primarily focusing on mental health and crisis intervention issues…(More)”.

Data Science for Social Good: Philanthropy and Social Impact in a Complex World


Book edited by Ciro Cattuto and Massimo Lapucci: “This book is a collection of insights by thought leaders at first-mover organizations in the emerging field of “Data Science for Social Good”. It examines the application of knowledge from computer science, complex systems, and computational social science to challenges such as humanitarian response, public health, and sustainable development. The book provides an overview of scientific approaches to social impact – identifying a social need, targeting an intervention, measuring impact – and the complementary perspective of funders and philanthropies pushing forward this new sector.

TABLE OF CONTENTS


Introduction; By Massimo Lapucci

The Value of Data and Data Collaboratives for Good: A Roadmap for Philanthropies to Facilitate Systems Change Through Data; By Stefaan G. Verhulst

UN Global Pulse: A UN Innovation Initiative with a Multiplier Effect; By Dr. Paula Hidalgo-Sanchis

Building the Field of Data for Good; By Claudia Juech

When Philanthropy Meets Data Science: A Framework for Governance to Achieve Data-Driven Decision-Making for Public Good; By Nuria Oliver

Data for Good: Unlocking Privately-Held Data to the Benefit of the Many; By Alberto Alemanno

Building a Funding Data Ecosystem: Grantmaking in the UK; By Rachel Rank

A Reflection on the Role of Data for Health: COVID-19 and Beyond; By Stefan E. Germann and Ursula Jasper….(More)”

Impact Evidence and Beyond: Using Evidence to Drive Adoption of Humanitarian Innovations


Learning paper by DevLearn: “…provides guidance to humanitarian innovators on how to use evidence to enable and drive adoption of innovation.

Innovation literature and practice show time and time again that it is difficult to scale innovations. Even when an innovation is demonstrably impactful, better than the existing solution and good value for money, it does not automatically get adopted or used in mainstream humanitarian programming.

Why do evidence-based innovations face difficulties in scaling and how can innovators best position their innovation to scale?

This learning paper is for innovators who want to effectively use evidence to support and enable their journey to scale. It explores the underlying social, organisational and behavioural factors that stifle uptake of innovations.

It also provides guidance on how to use, prioritise and communicate evidence to overcome these barriers. The paper aims to help innovators generate and present their evidence in more tailored and nuanced ways to improve adoption and scaling of their innovations….(More)”.

Where Is Everyone? The Importance of Population Density Data


Data Artefact Study by Aditi Ramesh, Stefaan Verhulst, Andrew Young and Andrew Zahuranec: “In this paper, we explore new and traditional approaches to measuring population density, and ways in which density information has frequently been used by humanitarian, private-sector and government actors to advance a range of private and public goals. We explain how new innovations are leading to fresh ways of collecting data—and fresh forms of data—and how this may open up new avenues for using density information in a variety of contexts. Section III examines one particular example: Facebook’s High-Resolution Population Density Maps (also referred to as HRSL, or high resolution settlement layer). This recent initiative, created in collaboration with a number of external organizations, shows not only the potential of mapping innovations but also the potential benefits of inter-sectoral partnerships and sharing. We examine three particular use cases of HRSL, and we follow with an assessment and some lessons learned. These lessons are applicable to HRSL in particular, but also more broadly. We conclude with some thoughts on avenues for future research….(More)”.
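HRSL is distributed as GeoTIFF rasters in which each roughly 30-metre cell holds an estimated population count. As a minimal sketch of putting the layer to work (the file name and bounding box are illustrative), one might total the population inside an area of interest with rasterio:

```python
import rasterio
from rasterio.windows import from_bounds

# Hypothetical file name; HRSL tiles are GeoTIFFs whose cell values are
# estimated population counts for ~30 m grid cells (WGS84).
PATH = "hrsl_population.tif"

# Illustrative bounding box (lon/lat).
west, south, east, north = 36.0, -1.4, 37.0, -1.2

with rasterio.open(PATH) as src:
    window = from_bounds(west, south, east, north, transform=src.transform)
    data = src.read(1, window=window, masked=True)  # masked=True drops nodata cells

print(f"Estimated population in box: {data.sum():,.0f}")
```

This is the kind of query (how many people live inside this flood zone or evacuation radius?) that makes density layers useful to the humanitarian, private-sector and government actors the paper discusses.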

Introducing collective crisis intelligence


Blogpost by Annemarie Poorterman et al: “…It has been estimated that over 600,000 Syrians have been killed since the start of the civil war, including tens of thousands of civilians killed in airstrike attacks. Predicting where and when strikes will occur and issuing time-critical warnings enabling civilians to seek safety is an ongoing challenge. It was this problem that motivated the development of Sentry Syria, an early warning system that alerts citizens to a possible airstrike. Sentry uses acoustic sensor data, reports from on-the-ground volunteers, and open media ‘scraping’ to detect warplanes in flight. It uses historical data and AI to validate the information from these different data sources and then issues warnings to civilians 5-10 minutes in advance of a strike via social media, TV, radio and sirens. These extra minutes can be the difference between life and death.
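The article does not detail Sentry's internals, but the pattern it describes, cross-validating independent detection sources before issuing a warning, can be sketched as a simple fusion rule: weight each source by its historical reliability and alert only when the combined confidence clears a threshold. All weights and thresholds below are invented for illustration.

```python
from dataclasses import dataclass

# Invented reliability weights per source type; a real system would estimate
# these from historical true/false detection rates.
SOURCE_WEIGHTS = {"acoustic_sensor": 0.6, "volunteer_report": 0.5, "media_scrape": 0.3}
ALERT_THRESHOLD = 0.8  # invented; tuned against tolerance for false alarms

@dataclass
class Detection:
    source: str  # key into SOURCE_WEIGHTS
    region: str  # where the warplane was observed

def combined_confidence(detections: list[Detection], region: str) -> float:
    """Noisy-OR fusion: independent corroborating sources raise confidence."""
    p_no_event = 1.0
    for d in detections:
        if d.region == region:
            p_no_event *= 1.0 - SOURCE_WEIGHTS[d.source]
    return 1.0 - p_no_event

detections = [Detection("acoustic_sensor", "north"), Detection("volunteer_report", "north")]
conf = combined_confidence(detections, "north")
if conf >= ALERT_THRESHOLD:
    # In a real system this would fan out to sirens, radio, and social media.
    print(f"ALERT north: confidence {conf:.2f}")
```

Here one acoustic detection (weight 0.6) corroborated by a volunteer report (0.5) yields a combined confidence of 1 - 0.4 * 0.5 = 0.8, just enough to trigger an alert.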

Sentry Syria is just one example of an emerging approach in humanitarian response that we call collective crisis intelligence (CCI). CCI methods combine the collective intelligence (CI) of local community actors (e.g. volunteer plane spotters in the case of Sentry) with a wide range of additional data sources, artificial intelligence (AI) and predictive analytics to support crisis management and reduce the devastating impacts of humanitarian emergencies….(More)”