Can We Track COVID-19 and Protect Privacy at the Same Time?


Sue Halpern at the New Yorker: “…Location data are the bread and butter of “ad tech.” They let marketers know you recently shopped for running shoes, are trying to lose weight, and have an abiding affection for kettle corn. Apps on cell phones emit a constant trail of longitude and latitude readings, making it possible to follow consumers through time and space. Location data are often triangulated with other, seemingly innocuous slivers of personal information—so many, in fact, that a number of data brokers claim to have around five thousand data points on almost every American. It’s a lucrative business—by at least one estimate, the data-brokerage industry is worth two hundred billion dollars. Though the data are often anonymized, a number of studies have shown that they can be easily unmasked to reveal identities—names, addresses, phone numbers, and any number of intimacies.
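
The "easily unmasked" claim is well quantified: de Montjoye and colleagues' "Unique in the Crowd" study (2013) found that as few as four spatio-temporal points uniquely identify 95% of individuals in a dataset of 1.5 million mobility traces. The uniqueness test behind that finding is simple enough to sketch; the following minimal Python version uses invented users, towers, and hours purely for illustration:

```python
import random

# Toy location traces: user -> set of (cell_tower_id, hour) observations.
# All users, towers, and hours are invented for illustration.
traces = {
    "user_a": {("tower_12", 8), ("tower_34", 12), ("tower_12", 18), ("tower_7", 22)},
    "user_b": {("tower_12", 8), ("tower_90", 13), ("tower_55", 19), ("tower_7", 22)},
    "user_c": {("tower_34", 9), ("tower_34", 12), ("tower_55", 19), ("tower_3", 23)},
}

def is_unique(user, k, traces):
    """True if k randomly drawn points from `user`'s trace match no one else."""
    points = random.sample(sorted(traces[user]), k)
    matches = [u for u, trace in traces.items() if all(p in trace for p in points)]
    return matches == [user]

# Fraction of users pinned down by k points of their own "anonymized" trace.
for k in (1, 2, 3):
    unique = sum(is_unique(u, k, traces) for u in traces)
    print(f"{k} point(s): {unique}/{len(traces)} users uniquely identified")
```

Run against billions of real ad-tech location pings, the same logic is what turns "anonymized" latitude-longitude trails back into names and addresses.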

As Buckee knew, public-health surveillance, which serves the community at large, has always bumped up against privacy, which protects the individual. But, in the past, public-health surveillance was typically conducted by contact tracing, with health-care workers privately interviewing individuals to determine their health status and trace their movements. It was labor-intensive, painstaking, memory-dependent work, and, because of that, it was inherently limited in scope and often incomplete or inefficient. (At the start of the pandemic, there were only twenty-two hundred contact tracers in the country.)

Digital technologies, which work at scale, instantly provide detailed information culled from security cameras, license-plate readers, biometric scans, drones, G.P.S. devices, cell-phone towers, Internet searches, and commercial transactions. They can be useful for public-health surveillance in the same way that they facilitate all kinds of spying by governments, businesses, and malign actors. South Korea, which reported its first COVID-19 case a month after the United States, has achieved dramatically lower rates of infection and mortality by tracking citizens with the virus via their phones, car G.P.S. systems, credit-card transactions, and public cameras, in addition to a robust disease-testing program. Israel enlisted Shin Bet, its secret police, to repurpose its terrorist-tracking protocols. China programmed government-installed cameras to point at infected people’s doorways to monitor their movements….(More)”.

‘For good measure’: data gaps in a big data world


Paper by Sarah Giest & Annemarie Samuels: “Policy and data scientists have paid ample attention to the amount of data being collected and the challenge for policymakers to use it effectively. However, far less attention has been paid to the quality and coverage of this data, specifically as it pertains to minority groups. The paper argues that while there is seemingly more data for policymakers to draw on, the quality of the data, combined with potential known or unknown data gaps, limits government’s ability to create inclusive policies. In this context, the paper defines primary, secondary, and unknown data gaps that cover scenarios of knowingly or unknowingly missing data and how that absence is potentially compensated for through alternative measures.

Based on the review of the literature from various fields and a variety of examples highlighted throughout the paper, we conclude that the big data movement combined with more sophisticated methods in recent years has opened up new opportunities for government to use existing data in different ways as well as fill data gaps through innovative techniques. Focusing specifically on the representativeness of such data, however, shows that data gaps affect the economic opportunities, social mobility, and democratic participation of marginalized groups. The big data movement in policy may thus create new forms of inequality that are harder to detect and whose impact is more difficult to predict….(More)“.

National AI Strategies from a human rights perspective


Report by Global Partners Digital: “…looks at existing strategies adopted by governments and regional organisations since 2017. It assesses the extent to which human rights considerations have been incorporated and makes a series of recommendations to policymakers looking to develop or revise AI strategies in the future….

Our report found that while the majority of National AI Strategies mention human rights, very few contain a deep human rights-based analysis or concrete assessment of how various AI applications impact human rights. In all but a few cases, they also lacked depth or specificity on how human rights should be protected in the context of AI, in stark contrast to the level of specificity on other issues such as economic competitiveness or innovation advantage.

The report provides recommendations to help governments develop human rights-based national AI strategies. These recommendations fall under six broad themes:

  • Include human rights explicitly and throughout the strategy: Thinking about the impact of AI on human rights, and how to mitigate the risks associated with those impacts, should be core to a national strategy. Each section should consider the risks and opportunities AI provides as related to human rights, with a specific focus on at-risk, vulnerable and marginalized communities.
  • Outline specific steps to be taken to ensure human rights are protected: As strategies engage with human rights, they should include specific goals, commitments or actions to ensure that human rights are protected.
  • Build in incentives or specific requirements to ensure rights-respecting practice: Governments should take steps within their strategies to incentivize human rights-respecting practices and actions across all sectors, as well as to ensure that their goals with regard to the protection of human rights are fulfilled.
  • Set out grievance and remediation processes for human rights violations: A National AI Strategy should look at the existing grievance and remedial processes available to victims of human rights violations relating to AI. The strategy should assess whether those processes need revision in light of the particular nature of AI as a technology, or whether those handling complaints need capacity-building to deal with cases concerning AI.
  • Recognize the regional and international dimensions to AI policy: National strategies should clearly identify relevant regional and global fora and processes relating to AI, and the means by which the government will promote human rights-respecting approaches and outcomes in them through proactive engagement.
  • Include human rights experts and other stakeholders in the drafting of National AI Strategies: When drafting a national strategy, the government should ensure that experts on human rights and the impact of AI on human rights are a core part of the drafting process….(More)”.

The imperative of interpretable machines


Julia Stoyanovich, Jay J. Van Bavel & Tessa V. West at Nature: “As artificial intelligence becomes prevalent in society, a framework is needed to connect interpretability and trust in algorithm-assisted decisions, for a range of stakeholders.

We are in the midst of a global trend to regulate the use of algorithms, artificial intelligence (AI) and automated decision systems (ADS). As reported by the One Hundred Year Study on Artificial Intelligence: “AI technologies already pervade our lives. As they become a central force in society, the field is shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.” Major cities, states and national governments are establishing task forces, passing laws and issuing guidelines about responsible development and use of technology, often starting with its use in government itself, where there is, at least in theory, less friction between organizational goals and societal values.

In the United States, New York City has made a public commitment to opening the black box of the government’s use of technology: in 2018, an ADS task force was convened, the first of its kind in the nation, and charged with providing recommendations to New York City’s government agencies for how to become transparent and accountable in their use of ADS. In a 2019 report, the task force recommended using ADS where they are beneficial, reduce potential harm and promote fairness, equity, accountability and transparency [2]. Can these principles become policy in the face of the apparent lack of trust in the government’s ability to manage AI in the interest of the public? We argue that overcoming this mistrust hinges on our ability to engage in substantive multi-stakeholder conversations around ADS, bringing with it the imperative of interpretability — allowing humans to understand and, if necessary, contest the computational process and its outcomes.

Remarkably little is known about how humans perceive and evaluate algorithms and their outputs, what makes a human trust or mistrust an algorithm [3], and how we can empower humans to exercise agency — to adopt or challenge an algorithmic decision. Consider, for example, scoring and ranking — data-driven algorithms that prioritize entities such as individuals, schools, or products and services. These algorithms may be used to determine creditworthiness and desirability for college admissions or employment. Scoring and ranking are as ubiquitous and powerful as they are opaque. Despite their importance, members of the public often know little about why one person is ranked higher than another by a résumé screening or a credit scoring tool, how the ranking process is designed and whether its results can be trusted.
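
To make the opacity concrete, here is a minimal, hypothetical sketch of such a tool in Python, with invented features, weights, and applicants. The weights encode the designer's value judgments, which the ranked candidates never see; the explain function shows one form a per-feature, actionable explanation could take:

```python
# A hypothetical résumé-screening scorer: a weighted sum over normalized
# features. The weights encode the designer's value judgments, which stay
# invisible to the people being ranked.
WEIGHTS = {"years_experience": 0.5, "degree_level": 0.3, "referral": 0.2}

candidates = {
    "applicant_1": {"years_experience": 0.9, "degree_level": 0.6, "referral": 0.0},
    "applicant_2": {"years_experience": 0.4, "degree_level": 1.0, "referral": 1.0},
}

def score(features):
    """Total score: often the only number the ranked person ever sees, if that."""
    return sum(WEIGHTS[f] * v for f, v in features.items())

def explain(features):
    """Per-feature contributions: one form an actionable explanation could take."""
    return {f: round(WEIGHTS[f] * v, 3) for f, v in features.items()}

ranking = sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)
for name in ranking:
    print(name, round(score(candidates[name]), 3), explain(candidates[name]))
```

Even in this toy setting, nudging a single weight reorders the ranking, and nothing in the published ranking alone would reveal why.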

As an interdisciplinary team of scientists in computer science and social psychology, we propose a framework that forms connections between interpretability and trust, and develops actionable explanations for a diversity of stakeholders, recognizing their unique perspectives and needs. We focus on three questions (Box 1) about making machines interpretable: (1) what are we explaining, (2) to whom are we explaining and for what purpose, and (3) how do we know that an explanation is effective? By asking — and charting the path towards answering — these questions, we can promote greater trust in algorithms, and improve fairness and efficiency of algorithm-assisted decision making…(More)”.

Data & Policy


Data & Policy, an open-access journal exploring the potential of data science for governance and public decision-making, published its first cluster of peer-reviewed articles last week.

The articles include three contributions specifically concerned with data protection by design:

  • Gefion Thuermer and colleagues (University of Southampton) distinguish between data trusts and other data-sharing mechanisms and discuss the need for workflows with data protection at their core;
  • Swee Leng Harris (King’s College London) explores Data Protection Impact Assessments as a framework for helping us know whether government use of data is legal, transparent and upholds human rights;
  • Giorgia Bincoletto’s (University of Bologna) study investigates data protection concerns arising from cross-border interoperability of Electronic Health Record systems in the European Union.

Also published: research by Jacqueline Lam and colleagues (University of Cambridge; Hong Kong University) on how fine-grained data from satellites and other sources can help us understand environmental inequality and socio-economic disparities in China, which also reflects on the importance of safeguarding data privacy and security. See also this week’s blogs on the potential of Data Collaboratives for COVID-19, by Editor-in-Chief Stefaan Verhulst (the GovLab), and on how COVID-19 exposes a widening data divide for the Global South, by Stefania Milan (University of Amsterdam) and Emiliano Treré (Cardiff University).

Data & Policy is an open access, peer-reviewed venue for contributions that consider how systems of policy and data relate to one another. Read the 5 ways you can contribute to Data & Policy and contact dataandpolicy@cambridge.org with any questions….(More)”.

Why we need responsible data for children


Andrew Young and Stefaan Verhulst at The Conversation: “…Without question, the increased use of data poses unique risks for, and responsibilities to, children. While practitioners may have well-intentioned purposes in leveraging data for and about children, the data systems used are often designed with (consenting) adults in mind, without a focus on the unique needs and vulnerabilities of children. This can lead to the collection of inaccurate and unreliable data, as well as the inappropriate and potentially harmful use of data for and about children….

Research undertaken in the context of the RD4C initiative uncovered the following trends and realities. These issues make clear why we need a dedicated data responsibility approach for children.

  • Today’s children are the first generation growing up at a time of rapid datafication, where almost all aspects of their lives, both online and offline, are turned into data points. An entire generation of young people is being datafied, often starting even before birth. A child born this year will have more data collected about them over their lifetime than a similar child born any year prior. The potential uses of such large volumes of data and their impact on children’s lives are unpredictable, and the data could be used against them.
  • Children typically do not have full agency to make decisions about their participation in programs or services that may generate and record personal data. Children may also lack the understanding to assess a decision’s purported risks and benefits. Privacy terms and conditions are often barely understood by educated adults, let alone children. As a result, there is a higher duty of care for children’s data.
  • Disaggregating data according to socio-demographic characteristics can improve service delivery and assist with policy development. However, it also creates risks for group privacy: children can be identified, exposing them to possible harms. Disaggregated data on groups such as child-headed households or children experiencing gender-based violence can put vulnerable communities and children at risk. Data about children’s location is itself risky, especially if they have some additional vulnerability that could expose them to harm. (A minimal suppression check illustrating this trade-off is sketched after this list.)
  • Mishandling data can cause children to lose trust in institutions that deliver essential services, including vaccines, medicine, and nutrition supplies. For organizations dealing with child well-being, this loss of trust can have severe consequences. Distrust can cause families and children to refuse health, education, child protection and other public services. Such privacy-protective behavior can affect children throughout their lifetime, and potentially exacerbate existing inequities and vulnerabilities.
  • As volumes of collected and stored data increase, obligations and protections traditionally put in place for children may be difficult or impossible to uphold. The interests of children are not always prioritized when organizations define their legitimate interest in accessing or sharing children’s personal information. The immediate benefit of a service provided does not always justify the risk or harm it might cause in the future. Data analysis may be undertaken by people with no expertise in child rights, in contrast to traditional research, where practitioners are specifically educated in child-subject research. Similarly, service providers collecting children’s data are not always specially trained to handle it, as international standards recommend.
  • Recent events around the world reveal the promise and pitfalls of algorithmic decision-making. While it can expedite certain processes, algorithms and their inferences can carry biases that have adverse effects on people, for example those seeking medical care or applying for jobs. The danger posed by algorithmic bias is especially pronounced for children and other vulnerable populations, who often lack the awareness or resources necessary to respond to instances of bias or to rectify misconceptions or inaccuracies in their data.
  • Many of the children served by child welfare organizations have suffered trauma, whether physical, social, or emotional in nature. Repeatedly making such children register for services or provide confidential personal information can amount to revictimization, re-exposing them to trauma or instigating unwarranted feelings of shame and guilt.
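
On the disaggregation risk above: a standard first screen before publishing grouped statistics is minimum-cell-size (small-cell) suppression, a close relative of k-anonymity. A minimal sketch, with invented records and a hypothetical threshold:

```python
from collections import Counter

# Invented, illustrative records: (region, age_band, household_type).
records = [
    ("north", "10-14", "child_headed"),
    ("north", "10-14", "adult_headed"),
    ("north", "10-14", "adult_headed"),
    ("south", "5-9", "child_headed"),
    ("south", "5-9", "adult_headed"),
]

K = 2  # hypothetical minimum cell size; real publishers often require more

def publishable_cells(records, k):
    """Count each combination of characteristics; suppress cells below k."""
    counts = Counter(records)
    return {cell: (n if n >= k else "suppressed") for cell, n in counts.items()}

for cell, value in publishable_cells(records, K).items():
    print(cell, value)
```

Suppression alone is not sufficient (complementary cells and repeated releases can still leak small counts), but it captures the basic trade-off: the finer the disaggregation, the smaller the cells, and the greater the risk that individual children become identifiable.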

These trends and realities make clear the need for new approaches for maximizing the value of data to improve children’s lives, while mitigating the risks posed by our increasingly datafied society….(More)”.

The world after coronavirus


Yuval Noah Harari at the Financial Times: “Humankind is now facing a global crisis. Perhaps the biggest crisis of our generation. The decisions people and governments take in the next few weeks will probably shape the world for years to come. They will shape not just our healthcare systems but also our economy, politics and culture. We must act quickly and decisively. We should also take into account the long-term consequences of our actions.

When choosing between alternatives, we should ask ourselves not only how to overcome the immediate threat, but also what kind of world we will inhabit once the storm passes. Yes, the storm will pass, humankind will survive, most of us will still be alive — but we will inhabit a different world. Many short-term emergency measures will become a fixture of life. That is the nature of emergencies. They fast-forward historical processes.

Decisions that in normal times could take years of deliberation are passed in a matter of hours. Immature and even dangerous technologies are pressed into service, because the risks of doing nothing are bigger. Entire countries serve as guinea-pigs in large-scale social experiments. What happens when everybody works from home and communicates only at a distance? What happens when entire schools and universities go online? In normal times, governments, businesses and educational boards would never agree to conduct such experiments. But these aren’t normal times. 

In this time of crisis, we face two particularly important choices. The first is between totalitarian surveillance and citizen empowerment. The second is between nationalist isolation and global solidarity. 

Under-the-skin surveillance

In order to stop the epidemic, entire populations need to comply with certain guidelines. There are two main ways of achieving this. One method is for the government to monitor people, and punish those who break the rules. Today, for the first time in human history, technology makes it possible to monitor everyone all the time. Fifty years ago, the KGB couldn’t follow 240m Soviet citizens 24 hours a day, nor could the KGB hope to effectively process all the information gathered. The KGB relied on human agents and analysts, and it just couldn’t place a human agent to follow every citizen. But now governments can rely on ubiquitous sensors and powerful algorithms instead of flesh-and-blood spooks. 

In their battle against the coronavirus epidemic several governments have already deployed the new surveillance tools. The most notable case is China. By closely monitoring people’s smartphones, making use of hundreds of millions of face-recognising cameras, and obliging people to check and report their body temperature and medical condition, the Chinese authorities can not only quickly identify suspected coronavirus carriers, but also track their movements and identify anyone they came into contact with. A range of mobile apps warn citizens about their proximity to infected patients…
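
Proximity warnings do not have to rest on centralized tracking of this kind. Decentralized protocols such as DP-3T and the Google/Apple Exposure Notification framework have phones broadcast short-lived random tokens over Bluetooth and perform all matching on the device. The following is a highly simplified sketch of that idea, not any deployed implementation; real protocols derive tokens from rotating daily keys and add many further protections:

```python
import secrets

def new_daily_tokens(n=96):
    """A day's worth of random, unlinkable ephemeral tokens (one per ~15 min)."""
    return [secrets.token_hex(16) for _ in range(n)]

# Each phone broadcasts its current token over Bluetooth and records the
# tokens it hears nearby; no names, numbers, or locations are exchanged.
alice_tokens = new_daily_tokens()
bob_heard = set(alice_tokens[30:34])  # Bob's phone sat near Alice's for ~an hour

# If Alice tests positive, only her tokens are published to a shared list.
published_positive_tokens = set(alice_tokens)

# Matching happens on Bob's phone, never on a central server.
if bob_heard & published_positive_tokens:
    print("Exposure detected locally; the case's identity is never revealed.")
```

This is one concrete version of the choice Harari frames between totalitarian surveillance and citizen empowerment: the same sensors that enable centralized monitoring can run a protocol in which no authority learns who met whom.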

If I could track my own medical condition 24 hours a day, I would learn not only whether I have become a health hazard to other people, but also which habits contribute to my health. And if I could access and analyse reliable statistics on the spread of coronavirus, I would be able to judge whether the government is telling me the truth and whether it is adopting the right policies to combat the epidemic. Whenever people talk about surveillance, remember that the same surveillance technology can usually be used not only by governments to monitor individuals — but also by individuals to monitor governments. 

The coronavirus epidemic is thus a major test of citizenship….(More)”.

Beyond a Human Rights-based approach to AI Governance: Promise, Pitfalls, Plea


Paper by Nathalie A. Smuha: “This paper discusses the establishment of a governance framework to secure the development and deployment of “good AI”, and describes the quest for a morally objective compass to steer it. Asserting that human rights can provide such compass, this paper first examines what a human rights-based approach to AI governance entails, and sets out the promise it propagates. Subsequently, it examines the pitfalls associated with human rights, particularly focusing on the criticism that these rights may be too Western, too individualistic, too narrow in scope and too abstract to form the basis of sound AI governance. After rebutting these reproaches, a plea is made to move beyond the calls for a human rights-based approach, and start taking the necessary steps to attain its realisation. It is argued that, without elucidating the applicability and enforceability of human rights in the context of AI; adopting legal rules that concretise those rights where appropriate; enhancing existing enforcement mechanisms; and securing an underlying societal infrastructure that enables human rights in the first place, any human rights-based governance framework for AI risks falling short of its purpose….(More)”.

World Justice Project (WJP) Rule of Law Index®


Interactive Overview: “The World Justice Project (WJP) Rule of Law Index® is the world’s leading source for original, independent data on the rule of law. Now covering 128 countries and jurisdictions, the Index relies on national surveys of more than 130,000 households and 4,000 legal practitioners and experts to measure how the rule of law is experienced and perceived around the world.

Effective rule of law reduces corruption, combats poverty and disease, and protects people from injustices large and small. It is the foundation for communities of justice, opportunity, and peace—underpinning development, accountable government, and respect for fundamental rights.

Learn more about the rule of law and explore the full WJP Rule of Law Index 2020 report, including PDF report download, data insights, methodology, and more at the Index report resources page….(More)”

CARE Principles for Indigenous Data Governance


The Global Indigenous Data Alliance: “The current movement toward open data and open science does not fully engage with Indigenous Peoples’ rights and interests. Existing principles within the open data movement (e.g. FAIR: findable, accessible, interoperable, reusable) primarily focus on characteristics of data that will facilitate increased data sharing among entities while ignoring power differentials and historical contexts. The emphasis on greater data sharing alone creates a tension for Indigenous Peoples who are also asserting greater control over the application and use of Indigenous data and Indigenous Knowledge for collective benefit.

This includes the right to create value from Indigenous data in ways that are grounded in Indigenous worldviews and realise opportunities within the knowledge economy. The CARE Principles for Indigenous Data Governance are people- and purpose-oriented, reflecting the crucial role of data in advancing Indigenous innovation and self-determination. These principles complement the existing FAIR principles, encouraging the open data and other data movements to consider both people and purpose in their advocacy and pursuits….(More)”.