National Academies, National Science Foundation Create Network to Connect Decision-Makers with Social Scientists on Pressing COVID-19 Questions


Press Release: “The National Academies of Sciences, Engineering, and Medicine and the National Science Foundation announced today the formation of a Societal Experts Action Network (SEAN) to connect social and behavioral science researchers with decision-makers who are leading the response to COVID-19. SEAN will respond to the most pressing social, behavioral, and economic questions that are being asked by federal, state, and local officials by working with appropriate experts to quickly provide actionable answers.

The new network’s activities will be overseen by an executive committee in coordination with the National Academies’ Standing Committee on Emerging Infectious Diseases and 21st Century Health Threats, established earlier this year to provide rapid expert input on urgent questions facing the federal government on the COVID-19 pandemic. Standing committee members Robert Groves, executive vice president and provost at Georgetown University, and Mary T. Bassett, director of the François-Xavier Bagnoud Center for Health and Human Rights at Harvard University, will co-chair the executive committee to manage SEAN’s solicitation of questions and expert responses, anticipate leaders’ research needs, and guide the dissemination of network findings.

SEAN will include individual researchers from a broad range of disciplines as well as leading national social and behavioral science institutions. Responses to decision-maker requests may range from individual phone calls and presentations to written committee documents such as Rapid Expert Consultations.

“This pandemic has broadly impacted all aspects of life — not just our health, but our work, families, education, supply chains, and even the global environment,” said Marcia McNutt, president of the National Academy of Sciences. “Therefore, to address the myriad questions that are being raised by mayors, governors, local representatives, and other leaders, we must recruit the full range of scientific expertise from across the social, natural, and biomedical sciences.”   

“Our communities and our society at large are facing a range of complex issues on multiple fronts due to COVID-19,” said Arthur Lupia, head of the Directorate for Social, Behavioral, and Economic Sciences at the National Science Foundation. “These are human-centered issues affecting our daily lives — the education and well-being of our children, the strength of our economy, the health of our loved ones, neighbors, and so many more. Through SEAN, social and behavioral scientists will provide actionable, evidence-driven guidance to our leaders across the U.S. who are working to support our communities and speed their recovery.”…(More)”.

Examining the Black Box: Tools for Assessing Algorithmic Systems


Report by the Ada Lovelace Institute and DataKind UK: “As algorithmic systems become more critical to decision making across many parts of society, there is increasing interest in how they can be scrutinised and assessed for societal impact, and regulatory and normative compliance.

This report is primarily aimed at policymakers, to inform more accurate and focused policy conversations. It may also be helpful to anyone who creates, commissions or interacts with an algorithmic system and wants to know what methods or approaches exist to assess and evaluate that system…

Clarifying terms and approaches

Through literature review and conversations with experts from a range of disciplines, we’ve identified four prominent approaches to assessing algorithms that are often referred to by just two terms: algorithm audit and algorithmic impact assessment. But there is not always agreement on what these terms mean among different communities: social scientists, computer scientists, policymakers and the general public have different interpretations and frames of reference.

While there is broad enthusiasm among policymakers for algorithm audits and impact assessments, there is often a lack of detail about the approaches being discussed. This stems both from the confusion of terms and from the differing maturity of the approaches those terms describe.

Clarifying which approach we’re referring to, as well as where further research is needed, will help policymakers and practitioners to do the more vital work of building evidence and methodology to take these approaches forward.

We focus on algorithm audit and algorithmic impact assessment. For each term, we identify two distinct approaches it can refer to:

  • Algorithm audit
    • Bias audit: a targeted, non-comprehensive approach focused on assessing algorithmic systems for bias
    • Regulatory inspection: a broad approach, focused on an algorithmic system’s compliance with regulation or norms, necessitating a number of different tools and methods; typically performed by regulators or auditing professionals
  • Algorithmic impact assessment
    • Algorithmic risk assessment: assessing possible societal impacts of an algorithmic system before the system is in use (with ongoing monitoring often advised)
    • Algorithmic impact evaluation: assessing possible societal impacts of an algorithmic system on the users or population it affects after it is in use…(More)”.
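To make the bias-audit category above concrete, here is a minimal, hypothetical sketch of the kind of check such an audit might run: comparing a system's positive-outcome rates across demographic groups. The function, data, and numbers are illustrative assumptions, not a method prescribed by the report.

```python
# Minimal illustration of one bias-audit check: demographic parity difference.
# All data here is hypothetical, for illustration only.

def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rate across groups."""
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical model decisions (1 = favourable) for applicants in groups A and B.
outcomes = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(f"Demographic parity gap: {gap:.2f}")  # group A 0.80 vs group B 0.20
```

A real bias audit would of course examine many more metrics (equalised odds, calibration, intersectional subgroups) and, crucially, the system's context of use; this sketch only shows why the approach is "targeted" rather than comprehensive.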


Responsible Data Toolkit


Andrew Young at The GovLab: “The GovLab and UNICEF, as part of the Responsible Data for Children initiative (RD4C), are pleased to share a set of user-friendly tools to support organizations and practitioners seeking to operationalize the RD4C Principles. These principles—Purpose-Driven, People-Centric, Participatory, Protective of Children’s Rights, Proportional, Professionally Accountable, and Prevention of Harms Across the Data Lifecycle—are especially important in the current moment, as actors around the world are taking a data-driven approach to the fight against COVID-19.

The initial components of the RD4C Toolkit are:

The RD4C Data Ecosystem Mapping Tool intends to help users to identify the systems generating data about children and the key components of those systems. After using this tool, users will be positioned to understand the breadth of data they generate and hold about children; assess data systems’ redundancies or gaps; identify opportunities for responsible data use; and achieve other insights.

The RD4C Decision Provenance Mapping methodology provides a way for actors designing or assessing data investments for children to identify key decision points and determine which internal and external parties influence those decision points. This distillation can help users to pinpoint any gaps and develop strategies for improving decision-making processes and advancing more professionally accountable data practices.

The RD4C Opportunity and Risk Diagnostic provides organizations with a way to take stock of the RD4C principles and how they might be realized as an organization reviews a data project or system. Its high-level questions and prompts are intended to help users identify areas in need of attention and to strategize next steps for ensuring more responsible handling of data for and about children across their organization.

Finally, the Data for Children Collaborative with UNICEF developed an Ethical Assessment that “forms part of [their] safe data ecosystem, alongside data management and data protection policies and practices.” The tool reflects the RD4C Principles and aims to “provide an opportunity for project teams to reflect on the material consequences of their actions, and how their work will have real impacts on children’s lives.

RD4C launched in October 2019 with the release of the RD4C Synthesis Report, Selected Readings, and the RD4C Principles. Last month we published the RD4C Case Studies, which analyze data systems deployed in diverse country environments, with a focus on their alignment with the RD4C Principles. The case studies are: Romania’s The Aurora Project, Childline Kenya, and Afghanistan’s Nutrition Online Database.

To learn more about Responsible Data for Children, visit rd4c.org or contact rd4c [at] thegovlab.org. To join the RD4C conversation and be alerted to future releases, subscribe at this link.”

From Idea to Reality: Why We Need an Open Data Policy Lab


Stefaan G. Verhulst at Open Data Policy Lab: “The belief that we are living in a data age — one characterized by unprecedented amounts of data, with unprecedented potential — has become mainstream. We regularly read phrases such as “data is the most valuable commodity in the global economy” or that data provides decision-makers with an “ever-swelling flood of information.”

Without a doubt, there is truth in such statements. But they also leave out a major shortcoming — the fact that much of the most useful data continue to remain inaccessible, hidden in silos, behind digital walls, and in untapped “treasuries.”

For close to a decade, the technology and public interest communities have pushed the idea of open data. At its core, open data represents a new paradigm of data availability and access. The movement borrows from the language of open source and is rooted in notions of a “knowledge commons”, a concept developed by, among others, Nobel Prize winner Elinor Ostrom.

Milestones and Limitations in Open Data

Significant milestones have been achieved in the short history of the open data movement. Around the world, an ever-increasing number of governments at the local, state and national levels now release large datasets for the public’s benefit. For example, New York City requires that all public data be published on a single web portal. The current portal site contains thousands of datasets that fuel projects on topics as diverse as school bullying, sanitation, and police conduct. In California, the Forest Practice Watershed Mapper allows users to track the impact of timber harvesting on aquatic life through the use of the state’s open data. Similarly, Denmark’s Building and Dwelling Register releases address data to the public free of charge, improving transparent property assessment for all interested parties.

A growing number of private companies have also initiated or engaged in “Data Collaborative” projects to leverage their private data toward the public interest. For example, Valassis, a direct-mail marketing company, shared its massive address database with community groups in New Orleans to visualize and track block-by-block repopulation rates after Hurricane Katrina. A number of data collaboratives are also currently being launched to respond to the COVID-19 pandemic. Through its COVID-19 Data Collaborative Program, the location-intelligence company Cuebiq is providing researchers access to the company’s data to study, for instance, the impacts of social distancing policies in Italy and New York City. The health technology company Kinsa Health’s US Health Weather initiative is likewise visualizing the rate of fever across the United States using data from its network of Smart Thermometers, thereby providing early indications regarding the location of likely COVID-19 outbreaks.

Yet despite such initiatives, many open data projects (and data collaboratives) remain fledgling — especially those at the state and local level.

Among other issues, the field has trouble scaling projects beyond initial pilots, and many potential stakeholders — private sector and government “owners” of data, as well as public beneficiaries — remain skeptical of open data’s value. In addition, terabytes of potentially transformative data remain inaccessible for re-use. It is absolutely imperative that we continue to make the case to all stakeholders regarding the importance of open data, and of moving it from an interesting idea to an impactful reality. In order to do this, we need a new resource — one that can inform the public and data owners, and that would guide decision-makers on how to achieve open data in a responsible manner, without undermining privacy and other rights.

Purpose of the Open Data Policy Lab

Today, with support from Microsoft and under the counsel of a global advisory board of open data leaders, The GovLab is launching an initiative designed precisely to build such a resource.

Our Open Data Policy Lab will draw on lessons and experiences from around the world to conduct analysis, provide guidance, build community, and take action to accelerate the responsible re-use and opening of data for the benefit of society and the equitable spread of economic opportunity…(More)”.

Congress in Crisis: How Legislatures are Continuing to Meet during the Pandemic


The GovLab: “In response to the COVID-19 pandemic, legislatures at the national, state and local level are adapting to keep the lawmaking process going while minimizing the need for face-to-face meetings. While some have simply lowered quorum thresholds or reduced the number of sessions while continuing to meet in person, others are trialing more ambitious remote participation systems in which lawmakers convene, deliberate, and vote virtually. Still others have used this shift as an opportunity to create mechanisms for greater civic engagement.

For a short overview of how legislatures in Brazil, Chile, France, and other countries are using technology to convene, deliberate and vote remotely, see the GovLab’s short video, Continuity of Congress.”

COVID-19 Rapid Evidence Review: Exit through the App Store?


“A rapid evidence review of the technical considerations and societal implications of using technology to transition from the COVID-19 crisis” by the Ada Lovelace Institute:  “The review focuses on three technologies in particular: digital contact tracing, symptom tracking apps and immunity certification. It makes pragmatic recommendations to support well-informed policymaking in response to the crisis. It is informed by the input of more than twenty experts drawn from across a wide range of domains, including technology, policy, human rights and data protection, public health and clinical medicine, behavioural science and information systems, philosophy, sociology and anthropology.

The purpose of this review is to open up, rather than close down, an informed and public dialogue on the technical considerations and societal implications of the use of technology to transition from the crisis.

Key findings

There is an absence of evidence to support the immediate national deployment of symptom tracking applications, digital contact tracing applications and digital immunity certificates. While the Government is right to explore non-clinical measures for transition, for national policy to rely on these apps, they would need to be able to:

  1. Represent accurate information about infection or immunity
  2. Demonstrate technical capabilities to support required functions
  3. Address various practical issues for use, including meeting legal tests
  4. Mitigate social risks and protect against exacerbating inequalities and vulnerabilities

At present the evidence does not demonstrate that these tools can address the four components adequately. We offer detailed evidence and recommendations for each application in the report summary.

In particular, we recommend that:

  • Effective deployment of technology to support the transition from the crisis will be contingent on public trust and confidence, which can be strengthened through the establishment of two accountability mechanisms:
    • the Group of Advisors on Technology in Emergencies (GATE) to review evidence, advise on design and oversee implementation, similar to the expert group recently established by Canada’s Chief Science Adviser; and
    • an independent oversight mechanism to conduct real-time scrutiny of policy formulation.
  • Clear and comprehensive primary legislation should be advanced to regulate data processing in symptom tracking and digital contact tracing applications. Legislation should impose strict purpose, access and time limitations…(More)”.

EDPB Adopts Guidelines on the Processing of Health Data During COVID-19


Hunton Privacy Blog: “On April 21, 2020, the European Data Protection Board (“EDPB”) adopted Guidelines on the processing of health data for scientific purposes in the context of the COVID-19 pandemic. The aim of the Guidelines is to provide clarity on the most urgent matters relating to health data, such as legal basis for processing, the implementation of adequate safeguards and the exercise of data subject rights.

The Guidelines note that the General Data Protection Regulation (“GDPR”) provides a specific derogation to the prohibition on processing of sensitive data under Article 9, for scientific purposes. With respect to the legal basis for processing, the Guidelines state that consent may be relied on under both Article 6 and the derogation to the prohibition on processing under Article 9 in the context of COVID-19, as long as the requirements for explicit consent are met, and as long as there is no power imbalance that could pressure or disadvantage a reluctant data subject. Researchers should keep in mind that study participants must be able to withdraw their consent at any time. National legislation may also provide an appropriate legal basis for the processing of health data and a derogation to the Article 9 prohibition. Furthermore, national laws may restrict data subject rights, though these restrictions should apply only as is strictly necessary.

In the context of transfers to countries outside the European Economic Area that have not been deemed adequate by the European Commission, the Guidelines note that the “public interest” derogation to the general prohibition on such transfers may be relied on, as well as explicit consent. The Guidelines add, however, that these derogations should only be relied on as a temporary measure and not for repetitive transfers.

The Guidelines highlight the importance of complying with the GDPR’s data protection principles, particularly with respect to transparency. Ideally, where data has not been collected directly from the individual, notice of processing as part of a research project should be provided to the relevant data subject before the project commences, in order to allow the individual to exercise their rights under the GDPR. There may be instances where, considering the number of data subjects, the age of the data and the safeguards in place, it would be impossible or require disproportionate effort to provide notice, in which case researchers may be able to rely on the exemptions set out under Article 14 of the GDPR.

The Guidelines also highlight that processing for scientific purposes is generally not considered incompatible with the purposes for which data is originally collected, assuming that the principles of data minimization, integrity, confidentiality and data protection by design and by default are complied with (See Guidelines)”.

National AI Strategies from a human rights perspective


Report by Global Partners Digital: “…looks at existing strategies adopted by governments and regional organisations since 2017. It assesses the extent to which human rights considerations have been incorporated and makes a series of recommendations to policymakers looking to develop or revise AI strategies in the future….

Our report found that while the majority of National AI Strategies mention human rights, very few contain a deep human rights-based analysis or concrete assessment of how various AI applications impact human rights. In all but a few cases, they also lacked depth or specificity on how human rights should be protected in the context of AI, which was in contrast to the level of specificity on other issues such as economic competitiveness or innovation advantage. 

The report provides recommendations to help governments develop human rights-based national AI strategies. These recommendations fall under six broad themes:

  • Include human rights explicitly and throughout the strategy: Thinking about the impact of AI on human rights, and how to mitigate the risks associated with those impacts, should be core to a national strategy. Each section should consider the risks and opportunities AI provides as related to human rights, with a specific focus on at-risk, vulnerable and marginalized communities.
  • Outline specific steps to be taken to ensure human rights are protected: As strategies engage with human rights, they should include specific goals, commitments or actions to ensure that human rights are protected.
  • Build in incentives or specific requirements to ensure rights-respecting practice: Governments should take steps within their strategies to incentivize human rights-respecting practices and actions across all sectors, as well as to ensure that their goals with regard to the protection of human rights are fulfilled.
  • Set out grievance and remediation processes for human rights violations: A National AI Strategy should look at the existing grievance and remedial processes available for victims of human rights violations relating to AI. The strategy should assess whether the process needs revision in light of the particular nature of AI as a technology or in the capacity-building of those involved so that they are able to receive complaints concerning AI.
  • Recognize the regional and international dimensions to AI policy: National strategies should clearly identify relevant regional and global fora and processes relating to AI, and the means by which the government will promote human rights-respecting approaches and outcomes at them through proactive engagement.
  • Include human rights experts and other stakeholders in the drafting of National AI Strategies: When drafting a national strategy, the government should ensure that experts on human rights and the impact of AI on human rights are a core part of the drafting process….(More)”.

Mobile applications to support contact tracing in the EU’s fight against COVID-19


Common EU Toolbox for Member States by the eHealth Network: “Mobile apps have the potential to bolster contact tracing strategies to contain and reverse the spread of COVID-19. EU Member States are converging towards effective app solutions that minimise the processing of personal data, and recognise that interoperability between these apps can support public health authorities and support the reopening of the EU’s internal borders.

This first iteration of a common EU toolbox, developed urgently and collaboratively by the eHealth Network with the support of the European Commission, provides a practical guide for Member States. The common approach aims to exploit the latest privacy-enhancing technological solutions that enable at-risk individuals to be contacted and, if necessary, to be tested as quickly as possible, regardless of where they are or which app they are using. It explains the essential requirements for national apps, namely that they be:

  • voluntary;
  • approved by the national health authority;
  • privacy-preserving – personal data is securely encrypted; and
  • dismantled as soon as no longer needed.

The added value of these apps is that they can record contacts that a person may not notice or remember. The requirements on how to record contacts and notify individuals are anchored in accepted epidemiological guidance and reflect best practice on cybersecurity and accessibility. The toolbox also covers how to prevent the spread of potentially harmful unapproved apps, sets out success criteria for collectively monitoring the apps’ effectiveness, and outlines a communications strategy to engage with stakeholders and the people affected by these initiatives.
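The "privacy-preserving" requirement above is worth unpacking. Decentralised contact-tracing designs (such as DP-3T, on which several Member State apps drew) broadcast short-lived pseudonymous identifiers rather than personal data. The following is a simplified, illustrative sketch of that pattern, not the specification of any actual national app; the key size, rotation count, and function names are assumptions.

```python
# Simplified sketch of decentralised, privacy-preserving contact recording.
# Parameters and names are illustrative; real protocols (e.g. DP-3T) differ in detail.
import hashlib
import hmac
import os

def derive_ephemeral_ids(daily_key, count):
    """Derive short-lived broadcast identifiers from an on-device daily secret.
    An observer who records one identifier cannot link it to the device
    or to identifiers broadcast in other intervals."""
    return [
        hmac.new(daily_key, f"interval-{i}".encode(), hashlib.sha256).digest()[:16]
        for i in range(count)
    ]

daily_key = os.urandom(32)                                 # never transmitted
ephemeral_ids = derive_ephemeral_ids(daily_key, count=96)  # e.g. one per 15 min

# Each phone broadcasts its current ID over Bluetooth and stores the IDs it
# hears nearby. If a user tests positive, only their daily keys are uploaded;
# other phones re-derive the IDs locally and check for matches on-device.
heard = set(ephemeral_ids[:3])      # simulate having been near this device
rederived = derive_ephemeral_ids(daily_key, count=96)
exposed = any(eid in heard for eid in rederived)
print("exposure match:", exposed)   # prints: exposure match: True
```

Because identifiers are derived locally from a secret daily key, exposure can be checked on-device after an infected user uploads their keys, with no central registry of who met whom, which is what allows the toolbox's "personal data is securely encrypted" requirement to coexist with effective notification.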

Work will continue urgently to develop further and implement the toolbox, as set out in the Commission Recommendation of 8 April, including addressing other types of apps and the use of mobility data for modelling to understand the spread of the disease and exit from the crisis….(More)”.

Embracing digital government during the pandemic and beyond


UN DESA Policy Brief: “…Involving civil society organizations, businesses, social entrepreneurs and the general public in managing the COVID-19 pandemic and its aftermath can prove to be highly effective for policy- and decision-makers. Online engagement initiatives led by governments can help people cope with the crisis as well as improve government operations. In a crisis situation, it becomes more important than ever to reach out to vulnerable groups in society, respond to their needs and ensure social stability. Engaging with civil society allows governments to tackle socio-economic challenges in a more productive way that leaves no one behind….

Since the crisis has put public services under stress, governments are urged to deploy effective digital technologies to contain the outbreak. Most innovative quick-to-market solutions have stemmed from the private sector. However, the crisis has exposed the need for government leadership in the development and adoption of new technologies such as artificial intelligence (AI) and robotics to ensure an effective provision of public services…

The efforts in developing digital government strategies after the COVID-19 crisis should focus on improving data protection and digital inclusion policies as well as on strengthening the policy and technical capabilities of public institutions. Even though public-private partnerships are essential for implementing innovative technologies, government leadership, strong institutions and effective public policies are crucial to tailor digital solutions to countries’ needs as well as prioritize security, equity and the protection of people’s rights. The COVID-19 pandemic has emphasized the importance of technology, but also the pivotal role of an effective, inclusive and accountable government….(More)”.