Digital contact tracing and surveillance during COVID-19

Report on General and Child-specific Ethical Issues by Gabrielle Berman, Karen Carter, Manuel García-Herranz and Vedran Sekara: “The last few years have seen a proliferation of means and approaches being used to collect sensitive or identifiable data on children. Technologies such as facial recognition and other biometrics, increased processing capacity for ‘big data’ analysis and data linkage, and the roll-out of mobile and internet services and access have substantially changed the nature of data collection, analysis, and use.

Real-time data are essential to support decision-makers in government, development and humanitarian agencies such as UNICEF to better understand the issues facing children, plan appropriate action, monitor progress and ensure that no one is left behind. But the collation and use of personally identifiable data may also pose significant risks to children’s rights.

UNICEF has undertaken substantial work to provide a foundation to understand and balance the potential benefits and risks to children of data collection. This work includes the Industry Toolkit on Children’s Online Privacy and Freedom of Expression and a partnership with GovLab on Responsible Data for Children (RD4C) – which promotes good practice principles and has developed practical tools to assist field offices, partners and governments to make responsible data management decisions.

Balancing the need to collect data to support good decision-making against the need to protect children from harm created through the collection of the data has never been more challenging than in the context of the global COVID-19 pandemic. The response to the pandemic has seen an unprecedented and rapid scaling up of technologies to support digital contact tracing and surveillance. The initial approach has included:

  • tracking using mobile phones and other digital devices (tablet computers, the Internet of Things, etc.)
  • surveillance to support movement restrictions, including through the use of location monitoring and facial recognition
  • a shift from in-person service provision and routine data collection to the use of remote or online platforms (including new processes for identity verification)
  • an increased focus on big data analysis and predictive modelling to fill data gaps…(More)”.

An introduction to human rights for the mobile sector

Report by the GSMA: “Human rights risks are present throughout mobile operators’ value chains. These range from the treatment and conditions of people working in the supply chain to how operators’ own employees are treated and how the human rights of customers are respected online.

This summary provides a high-level introduction to the most salient human rights issues for mobile operators. The aim is to explain why the issues are relevant for operators and to share initial practical guidance for companies beginning to focus on and respond to human rights issues….(More)”.

How Humanitarian Blockchain Can Deliver Fair Labor to Global Supply Chains

Paper by Ashley Mehra and John G. Dale: “Blockchain technology in global supply chains has proven most useful as a tool for storing and keeping records of information or facilitating payments with increased efficiency. The use of blockchain to improve supply chains for humanitarian projects has mushroomed over the last five years; this increased popularity is in large part due to the potential for transparency and security that the design of the technology proposes to offer. Yet, we want to ask an important but largely unexplored question in the academic literature about the human rights of the workers who produce these “humanitarian blockchain” solutions: “How can blockchain help eliminate extensive labor exploitation issues embedded within our global supply chains?”

To begin to answer this question, we suggest that proposed humanitarian blockchain solutions must (1) re-purpose the technical affordances of blockchain to address relations of power that, sometimes unwittingly, exploit and prevent workers from collectively exercising their voice; (2) include legally or socially enforceable mechanisms that enable workers to meaningfully voice their knowledge of working conditions without fear of retaliation; and (3) re-frame our current understanding of human rights issues in the context of supply chains to include the labor exploitation within supply chains that produce and sustain the blockchain itself….(More)”.

Apparent Algorithmic Bias and Algorithmic Learning

Paper by Anja Lambrecht and Catherine E. Tucker: “It is worrying to think that algorithms might discriminate against minority groups and reinforce existing inequality. Typically, such concerns have focused on the idea that the algorithm’s code could reflect bias, or the data that feeds the algorithm might lead the algorithm to produce uneven outcomes.

In this paper, we highlight another reason why algorithms might appear biased against minority groups: the length of time algorithms need to learn. If an algorithm has access to less data for particular groups, or accesses these data at differential speeds, it will produce differential outcomes, potentially disadvantaging minority groups.

Specifically, we revisit a classic study which documents that searches on Google for black names were more likely to return ads that highlighted the need for a criminal background check than searches for white names. We show that at least a partial explanation for this finding is that if consumer demand for a piece of information is low, an algorithm accumulates information more slowly and thus takes longer to learn about consumer preferences. Since black names are less common, the algorithm learns about the quality of the underlying ad more slowly, and as a result an ad is more likely to persist for searches next to black names even if the algorithm judges the ad to be of low quality. Therefore, the algorithm may show an ad — including an undesirable ad — in the context of searches for a disadvantaged group for a longer period of time.
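The learning-speed mechanism described here can be illustrated with a toy simulation (the ad names, click rates, and traffic volumes below are invented for illustration, not taken from the paper): a Thompson-sampling ad selector facing heavy search traffic abandons a low-quality ad quickly, while the same selector on a rarely issued query keeps showing it for a larger share of impressions.

```python
import random

def simulate(searches, seed=0):
    """Thompson-sampling ad selection between a low- and a high-quality ad.

    Returns the share of impressions the low-quality ad received."""
    true_ctr = {"bad_ad": 0.01, "good_ad": 0.05}  # hypothetical click rates
    rng = random.Random(seed)
    # Beta(1, 1) priors per ad: [alpha, beta] = [clicks + 1, non-clicks + 1]
    stats = {ad: [1, 1] for ad in true_ctr}
    bad_shown = 0
    for _ in range(searches):
        # sample a plausible CTR for each ad, show the ad with the best sample
        samples = {ad: rng.betavariate(a, b) for ad, (a, b) in stats.items()}
        ad = max(samples, key=samples.get)
        if ad == "bad_ad":
            bad_shown += 1
        clicked = rng.random() < true_ctr[ad]
        stats[ad][0 if clicked else 1] += 1
    return bad_shown / searches

# A frequently searched name generates far more feedback, so the algorithm
# identifies and drops the low-quality ad sooner; on a rare query the same
# bad ad lingers for a larger fraction of the (fewer) impressions.
common = simulate(searches=20000)
rare = simulate(searches=200)
print(f"bad-ad share, common query: {common:.2%}")
print(f"bad-ad share, rare query:   {rare:.2%}")
```

Under these assumptions the rare query typically shows the low-quality ad for a noticeably larger share of its impressions, mirroring the paper's argument that slower accumulation of feedback, not biased code or data alone, can produce differential outcomes.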

We replicate this result using the context of religious affiliations and present evidence that ads targeted towards searches for religious groups persist for longer for groups that are less searched for. This suggests that the process of algorithmic learning can lead to differential outcomes across those whose characteristics are more common and those who are rarer in society….(More)”.

Responsible Data Toolkit

Andrew Young at The GovLab: “The GovLab and UNICEF, as part of the Responsible Data for Children initiative (RD4C), are pleased to share a set of user-friendly tools to support organizations and practitioners seeking to operationalize the RD4C Principles. These principles—Purpose-Driven, People-Centric, Participatory, Protective of Children’s Rights, Proportional, Professionally Accountable, and Prevention of Harms Across the Data Lifecycle—are especially important in the current moment, as actors around the world are taking a data-driven approach to the fight against COVID-19.

The initial components of the RD4C Toolkit are:

The RD4C Data Ecosystem Mapping Tool is intended to help users identify the systems generating data about children and the key components of those systems. After using this tool, users will be positioned to understand the breadth of data they generate and hold about children; assess data systems’ redundancies or gaps; identify opportunities for responsible data use; and achieve other insights.

The RD4C Decision Provenance Mapping methodology provides a way for actors designing or assessing data investments for children to identify key decision points and determine which internal and external parties influence those decision points. This distillation can help users to pinpoint any gaps and develop strategies for improving decision-making processes and advancing more professionally accountable data practices.

The RD4C Opportunity and Risk Diagnostic provides organizations with a way to take stock of the RD4C principles and how they might be realized as an organization reviews a data project or system. The high-level questions and prompts below are intended to help users identify areas in need of attention and to strategize next steps for ensuring more responsible handling of data for and about children across their organization.

Finally, the Data for Children Collaborative with UNICEF developed an Ethical Assessment that “forms part of [their] safe data ecosystem, alongside data management and data protection policies and practices.” The tool reflects the RD4C Principles and aims to “provide an opportunity for project teams to reflect on the material consequences of their actions, and how their work will have real impacts on children’s lives.”

RD4C launched in October 2019 with the release of the RD4C Synthesis Report, Selected Readings, and the RD4C Principles. Last month we published the RD4C Case Studies, which analyze data systems deployed in diverse country environments, with a focus on their alignment with the RD4C Principles. The case studies are: Romania’s The Aurora Project, Childline Kenya, and Afghanistan’s Nutrition Online Database.

To learn more about Responsible Data for Children, visit or contact rd4c [at] To join the RD4C conversation and be alerted to future releases, subscribe at this link.”

Can We Track COVID-19 and Protect Privacy at the Same Time?

Sue Halpern at the New Yorker: “…Location data are the bread and butter of “ad tech.” They let marketers know you recently shopped for running shoes, are trying to lose weight, and have an abiding affection for kettle corn. Apps on cell phones emit a constant trail of longitude and latitude readings, making it possible to follow consumers through time and space. Location data are often triangulated with other, seemingly innocuous slivers of personal information—so many, in fact, that a number of data brokers claim to have around five thousand data points on almost every American. It’s a lucrative business—by at least one estimate, the data-brokerage industry is worth two hundred billion dollars. Though the data are often anonymized, a number of studies have shown that they can be easily unmasked to reveal identities—names, addresses, phone numbers, and any number of intimacies.
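The unmasking point can be illustrated with a toy example in the spirit of the uniqueness studies Halpern alludes to (the pseudonyms, cell IDs, and times below are invented): even after names are stripped, matching a trace against a couple of externally known location–time points, such as where someone is in the morning and at midday, can single out one individual.

```python
# Toy "anonymized" dataset: pseudonymous users with coarse location pings,
# each ping a (cell-tower, time-of-day) pair.
traces = {
    "user_a": {("cell_12", "08:00"), ("cell_40", "13:00"), ("cell_12", "22:00")},
    "user_b": {("cell_12", "08:00"), ("cell_77", "13:00"), ("cell_12", "22:00")},
    "user_c": {("cell_55", "08:00"), ("cell_40", "13:00"), ("cell_55", "22:00")},
}

def candidates(known_points):
    """Pseudonyms whose trace contains every externally known point."""
    return [u for u, trace in traces.items() if known_points <= trace]

# One known point narrows the field; two suffice to single someone out here.
print(candidates({("cell_12", "08:00")}))                        # two matches
print(candidates({("cell_12", "08:00"), ("cell_40", "13:00")}))  # unique match
```

This is the intuition behind findings that a handful of spatio-temporal points uniquely identify most people in large mobility datasets: coarse pings that look innocuous individually become a fingerprint in combination.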

As Buckee knew, public-health surveillance, which serves the community at large, has always bumped up against privacy, which protects the individual. But, in the past, public-health surveillance was typically conducted by contact tracing, with health-care workers privately interviewing individuals to determine their health status and trace their movements. It was labor-intensive, painstaking, memory-dependent work, and, because of that, it was inherently limited in scope and often incomplete or inefficient. (At the start of the pandemic, there were only twenty-two hundred contact tracers in the country.)

Digital technologies, which work at scale, instantly provide detailed information culled from security cameras, license-plate readers, biometric scans, drones, G.P.S. devices, cell-phone towers, Internet searches, and commercial transactions. They can be useful for public-health surveillance in the same way that they facilitate all kinds of spying by governments, businesses, and malign actors. South Korea, which reported its first COVID-19 case a month after the United States, has achieved dramatically lower rates of infection and mortality by tracking citizens with the virus via their phones, car G.P.S. systems, credit-card transactions, and public cameras, in addition to a robust disease-testing program. Israel enlisted Shin Bet, its secret police, to repurpose its terrorist-tracking protocols. China programmed government-installed cameras to point at infected people’s doorways to monitor their movements….(More)”.

‘For good measure’: data gaps in a big data world

Paper by Sarah Giest & Annemarie Samuels: “Policy and data scientists have paid ample attention to the amount of data being collected and the challenge policymakers face in using it. However, far less attention has been paid to the quality and coverage of these data, specifically as they pertain to minority groups. The paper makes the argument that while there is seemingly more data for policymakers to draw on, the quality of the data in combination with potential known or unknown data gaps limits government’s ability to create inclusive policies. In this context, the paper defines primary, secondary, and unknown data gaps that cover scenarios of knowingly or unknowingly missing data and how that is potentially compensated for through alternative measures.

Based on the review of the literature from various fields and a variety of examples highlighted throughout the paper, we conclude that the big data movement combined with more sophisticated methods in recent years has opened up new opportunities for government to use existing data in different ways as well as fill data gaps through innovative techniques. Focusing specifically on the representativeness of such data, however, shows that data gaps affect the economic opportunities, social mobility, and democratic participation of marginalized groups. The big data movement in policy may thus create new forms of inequality that are harder to detect and whose impact is more difficult to predict….(More)”.

National AI Strategies from a human rights perspective

Report by Global Partners Digital: “…looks at existing strategies adopted by governments and regional organisations since 2017. It assesses the extent to which human rights considerations have been incorporated and makes a series of recommendations to policymakers looking to develop or revise AI strategies in the future….

Our report found that while the majority of National AI Strategies mention human rights, very few contain a deep human rights-based analysis or concrete assessment of how various AI applications impact human rights. In all but a few cases, they also lacked depth or specificity on how human rights should be protected in the context of AI, which was in contrast to the level of specificity on other issues such as economic competitiveness or innovation advantage. 

The report provides recommendations to help governments develop human rights-based national AI strategies. These recommendations fall under six broad themes:

  • Include human rights explicitly and throughout the strategy: Thinking about the impact of AI on human rights, and how to mitigate the risks associated with those impacts, should be core to a national strategy. Each section should consider the risks and opportunities AI presents in relation to human rights, with a specific focus on at-risk, vulnerable and marginalized communities.
  • Outline specific steps to be taken to ensure human rights are protected: As strategies engage with human rights, they should include specific goals, commitments or actions to ensure that human rights are protected.
  • Build in incentives or specific requirements to ensure rights-respecting practice: Governments should take steps within their strategies to incentivize human rights-respecting practices and actions across all sectors, as well as to ensure that their goals with regards to the protection of human rights are fulfilled.
  • Set out grievance and remediation processes for human rights violations: A National AI Strategy should look at the existing grievance and remedial processes available for victims of human rights violations relating to AI. The strategy should assess whether the process needs revision in light of the particular nature of AI as a technology or in the capacity-building of those involved so that they are able to receive complaints concerning AI.
  • Recognize the regional and international dimensions to AI policy: National strategies should clearly identify relevant regional and global fora and processes relating to AI, and the means by which the government will promote human rights-respecting approaches and outcomes at them through proactive engagement.
  • Include human rights experts and other stakeholders in the drafting of National AI Strategies: When drafting a national strategy, the government should ensure that experts on human rights and the impact of AI on human rights are a core part of the drafting process….(More)”.

The imperative of interpretable machines

Julia Stoyanovich, Jay J. Van Bavel & Tessa V. West at Nature: “As artificial intelligence becomes prevalent in society, a framework is needed to connect interpretability and trust in algorithm-assisted decisions, for a range of stakeholders.

We are in the midst of a global trend to regulate the use of algorithms, artificial intelligence (AI) and automated decision systems (ADS). As reported by the One Hundred Year Study on Artificial Intelligence: “AI technologies already pervade our lives. As they become a central force in society, the field is shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.” Major cities, states and national governments are establishing task forces, passing laws and issuing guidelines about responsible development and use of technology, often starting with its use in government itself, where there is, at least in theory, less friction between organizational goals and societal values.

In the United States, New York City has made a public commitment to opening the black box of the government’s use of technology: in 2018, an ADS task force was convened, the first of its kind in the nation, and charged with providing recommendations to New York City’s government agencies for how to become transparent and accountable in their use of ADS. In a 2019 report, the task force recommended using ADS where they are beneficial, reduce potential harm and promote fairness, equity, accountability and transparency. Can these principles become policy in the face of the apparent lack of trust in the government’s ability to manage AI in the interest of the public? We argue that overcoming this mistrust hinges on our ability to engage in substantive multi-stakeholder conversations around ADS, bringing with it the imperative of interpretability — allowing humans to understand and, if necessary, contest the computational process and its outcomes.

Remarkably little is known about how humans perceive and evaluate algorithms and their outputs, what makes a human trust or mistrust an algorithm, and how we can empower humans to exercise agency — to adopt or challenge an algorithmic decision. Consider, for example, scoring and ranking — data-driven algorithms that prioritize entities such as individuals, schools, or products and services. These algorithms may be used to determine creditworthiness and desirability for college admissions or employment. Scoring and ranking are as ubiquitous and powerful as they are opaque. Despite their importance, members of the public often know little about why one person is ranked higher than another by a résumé screening or a credit scoring tool, how the ranking process is designed and whether its results can be trusted.
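One minimal form of the interpretability the authors call for is to expose a score's per-feature contributions rather than the total alone, so that an applicant can see, and potentially contest, what drove a ranking. The features and weights in this sketch are invented for illustration and are not drawn from any real screening tool:

```python
# Hypothetical linear résumé-screening score. Reporting each feature's
# contribution alongside the total is a simple, actionable explanation.
weights = {"years_experience": 2.0, "degree_level": 1.5, "referral": 3.0}

def score_with_explanation(applicant):
    """Return (total score, per-feature contributions) for one applicant."""
    contributions = {f: weights[f] * applicant.get(f, 0) for f in weights}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"years_experience": 4, "degree_level": 2, "referral": 0}
)
print(total)  # 11.0
for feature, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {contribution:+.1f}")
```

Even this trivial decomposition answers two of the questions the authors pose: what is being explained (the score's composition) and to whom (the person being ranked); real systems, of course, are rarely linear, which is precisely why a framework for effective explanation is needed.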

As an interdisciplinary team of scientists in computer science and social psychology, we propose a framework that forms connections between interpretability and trust, and develops actionable explanations for a diversity of stakeholders, recognizing their unique perspectives and needs. We focus on three questions (Box 1) about making machines interpretable: (1) what are we explaining, (2) to whom are we explaining and for what purpose, and (3) how do we know that an explanation is effective? By asking — and charting the path towards answering — these questions, we can promote greater trust in algorithms, and improve fairness and efficiency of algorithm-assisted decision making…(More)”.

Data & Policy

Data & Policy, an open-access journal exploring the potential of data science for governance and public decision-making, published its first cluster of peer-reviewed articles last week.

The articles include three contributions specifically concerned with data protection by design:

  • Gefion Thuermer and colleagues (University of Southampton) distinguish between data trusts and other data sharing mechanisms and discuss the need for workflows with data protection at their core;
  • Swee Leng Harris (King’s College London) explores Data Protection Impact Assessments as a framework for helping us know whether government use of data is legal, transparent and upholds human rights;
  • Giorgia Bincoletto’s (University of Bologna) study investigates data protection concerns arising from cross-border interoperability of Electronic Health Record systems in the European Union.

Also published: research by Jacqueline Lam and colleagues (University of Cambridge; Hong Kong University) on how fine-grained data from satellites and other sources can help us understand environmental inequality and socio-economic disparities in China, which also reflects on the importance of safeguarding data privacy and security. See also the blogs this week on the potential of Data Collaboratives for COVID-19 by Editor-in-Chief Stefaan Verhulst (The GovLab) and on how COVID-19 exposes a widening data divide for the Global South, by Stefania Milan (University of Amsterdam) and Emiliano Treré (Cardiff University).

Data & Policy is an open-access, peer-reviewed venue for contributions that consider how systems of policy and data relate to one another. Read the 5 ways you can contribute to Data & Policy and contact with any questions….(More)”.