Congress in Crisis: How Legislatures are Continuing to Meet during the Pandemic


The GovLab: “In response to the COVID-19 pandemic, legislatures at the national, state and local level are adapting to keep the lawmaking process going while minimizing the need for face-to-face meetings. While some have simply lowered quorum thresholds or reduced the number of sessions while continuing to meet in person, others are trialing more ambitious remote participation systems where lawmakers convene, deliberate, and vote virtually. Still others have used the shift as an opportunity to create mechanisms for greater civic engagement.

For a short overview of how legislatures in Brazil, Chile, France, and other countries are using technology to convene, deliberate and vote remotely, see the GovLab’s short video, Continuity of Congress.”

COVID-19 Rapid Evidence Review: Exit through the App Store?


“A rapid evidence review of the technical considerations and societal implications of using technology to transition from the COVID-19 crisis” by the Ada Lovelace Institute: “The review focuses on three technologies in particular: digital contact tracing, symptom tracking apps and immunity certification. It makes pragmatic recommendations to support well-informed policymaking in response to the crisis. It is informed by the input of more than twenty experts drawn from across a wide range of domains, including technology, policy, human rights and data protection, public health and clinical medicine, behavioural science and information systems, philosophy, sociology and anthropology.

The purpose of this review is to open up, rather than close down, an informed and public dialogue on the technical considerations and societal implications of the use of technology to transition from the crisis.

Key findings

There is an absence of evidence to support the immediate national deployment of symptom tracking applications, digital contact tracing applications and digital immunity certificates. While the Government is right to explore non-clinical measures for transition, for national policy to rely on these apps, they would need to be able to:

  1. Represent accurate information about infection or immunity
  2. Demonstrate technical capabilities to support required functions
  3. Address various practical issues for use, including meeting legal tests
  4. Mitigate social risks and protect against exacerbating inequalities and vulnerabilities

At present, the evidence does not demonstrate that these tools are able to address the four components adequately. We offer detailed evidence and recommendations for each application in the report summary.

In particular, we recommend that:

  • Effective deployment of technology to support the transition from the crisis will be contingent on public trust and confidence, which can be strengthened through the establishment of two accountability mechanisms:
    • the Group of Advisors on Technology in Emergencies (GATE) to review evidence, advise on design and oversee implementation, similar to the expert group recently established by Canada’s Chief Science Adviser; and
    • an independent oversight mechanism to conduct real-time scrutiny of policy formulation.
  • Clear and comprehensive primary legislation should be advanced to regulate data processing in symptom tracking and digital contact tracing applications. Legislation should impose strict purpose, access and time limitations…(More)”.

EDPB Adopts Guidelines on the Processing of Health Data During COVID-19


Hunton Privacy Blog: “On April 21, 2020, the European Data Protection Board (“EDPB”) adopted Guidelines on the processing of health data for scientific purposes in the context of the COVID-19 pandemic. The aim of the Guidelines is to provide clarity on the most urgent matters relating to health data, such as the legal basis for processing, the implementation of adequate safeguards and the exercise of data subject rights.

The Guidelines note that the General Data Protection Regulation (“GDPR”) provides a specific derogation to the prohibition on processing of sensitive data under Article 9, for scientific purposes. With respect to the legal basis for processing, the Guidelines state that consent may be relied on under both Article 6 and the derogation to the prohibition on processing under Article 9 in the context of COVID-19, as long as the requirements for explicit consent are met, and as long as there is no power imbalance that could pressure or disadvantage a reluctant data subject. Researchers should keep in mind that study participants must be able to withdraw their consent at any time. National legislation may also provide an appropriate legal basis for the processing of health data and a derogation to the Article 9 prohibition. Furthermore, national laws may restrict data subject rights, though these restrictions should apply only to the extent strictly necessary.

In the context of transfers to countries outside the European Economic Area that have not been deemed adequate by the European Commission, the Guidelines note that the “public interest” derogation to the general prohibition on such transfers may be relied on, as well as explicit consent. The Guidelines add, however, that these derogations should only be relied on as a temporary measure and not for repetitive transfers.

The Guidelines highlight the importance of complying with the GDPR’s data protection principles, particularly with respect to transparency. Where data has not been collected directly from the individual, notice of processing as part of a research project should ideally be provided to the relevant data subject before the project commences, in order to allow the individual to exercise their rights under the GDPR. There may be instances where, considering the number of data subjects, the age of the data and the safeguards in place, providing notice would be impossible or require disproportionate effort; in such cases researchers may be able to rely on the exemptions set out under Article 14 of the GDPR.

The Guidelines also highlight that processing for scientific purposes is generally not considered incompatible with the purposes for which data is originally collected, assuming that the principles of data minimization, integrity, confidentiality and data protection by design and by default are complied with (See Guidelines)”.

National AI Strategies from a human rights perspective


Report by Global Partners Digital: “…looks at existing strategies adopted by governments and regional organisations since 2017. It assesses the extent to which human rights considerations have been incorporated and makes a series of recommendations to policymakers looking to develop or revise AI strategies in the future….

Our report found that while the majority of National AI Strategies mention human rights, very few contain a deep human rights-based analysis or a concrete assessment of how various AI applications affect human rights. In all but a few cases, they also lacked depth or specificity on how human rights should be protected in the context of AI, in contrast to the level of specificity given to other issues such as economic competitiveness or innovation advantage.

The report provides recommendations to help governments develop human rights-based national AI strategies. These recommendations fall under six broad themes:

  • Include human rights explicitly and throughout the strategy: Thinking about the impact of AI on human rights, and how to mitigate the risks associated with those impacts, should be core to a national strategy. Each section should consider the risks and opportunities AI presents in relation to human rights, with a specific focus on at-risk, vulnerable and marginalized communities.
  • Outline specific steps to be taken to ensure human rights are protected: As strategies engage with human rights, they should include specific goals, commitments or actions to ensure that human rights are protected.
  • Build in incentives or specific requirements to ensure rights-respecting practice: Governments should take steps within their strategies to incentivize human rights-respecting practices and actions across all sectors, as well as to ensure that their goals with regards to the protection of human rights are fulfilled.
  • Set out grievance and remediation processes for human rights violations: A National AI Strategy should look at the existing grievance and remedial processes available to victims of human rights violations relating to AI. The strategy should assess whether these processes need revision in light of the particular nature of AI as a technology, or whether capacity-building is needed so that those involved are able to receive complaints concerning AI.
  • Recognize the regional and international dimensions to AI policy: National strategies should clearly identify relevant regional and global fora and processes relating to AI, and the means by which the government will promote human rights-respecting approaches and outcomes in them through proactive engagement.
  • Include human rights experts and other stakeholders in the drafting of National AI Strategies: When drafting a national strategy, the government should ensure that experts on human rights and the impact of AI on human rights are a core part of the drafting process….(More)”.

Mobile applications to support contact tracing in the EU’s fight against COVID-19


Common EU Toolbox for Member States by eHealth Network: “Mobile apps have the potential to bolster contact tracing strategies to contain and reverse the spread of COVID-19. EU Member States are converging towards effective app solutions that minimise the processing of personal data, and recognise that interoperability between these apps can support public health authorities and support the reopening of the EU’s internal borders.

This first iteration of a common EU toolbox, developed urgently and collaboratively by the e-Health Network with the support of the European Commission, provides a practical guide for Member States. The common approach aims to exploit the latest privacy-enhancing technological solutions that enable at-risk individuals to be contacted and, if necessary, to be tested as quickly as possible, regardless of where they are or which app they are using. It explains the essential requirements for national apps, namely that they be:

  • voluntary;
  • approved by the national health authority;
  • privacy-preserving – personal data is securely encrypted; and
  • dismantled as soon as no longer needed.

The added value of these apps is that they can record contacts that a person may not notice or remember. These requirements on how to record contacts and notify individuals are anchored in accepted epidemiological guidance, and reflect best practice on cybersecurity and accessibility. They cover how to prevent the appearance of potentially harmful unapproved apps, set out success criteria for collectively monitoring the effectiveness of the apps, and outline a communications strategy to engage with stakeholders and the people affected by these initiatives.
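
The toolbox excerpt above does not prescribe a particular implementation, but the kind of privacy-preserving contact recording it describes is often illustrated with decentralised, rotating-identifier designs (in the spirit of proposals such as DP-3T). The sketch below is a minimal illustration under that assumption only; the key size, rotation interval and function names are invented for the example and are not drawn from the toolbox.

```python
import hashlib
import hmac
import os
import time

ROTATION_SECONDS = 15 * 60  # rotate pseudonymous IDs every 15 minutes (illustrative)


def new_daily_key():
    """A fresh random secret for the day; it never leaves the user's phone."""
    return os.urandom(32)


def ephemeral_id(daily_key, epoch):
    """Derive a short-lived pseudonymous identifier for one time slot."""
    return hmac.new(daily_key, str(epoch).encode(), hashlib.sha256).digest()[:16]


def current_epoch(now=None):
    return int((now if now is not None else time.time()) // ROTATION_SECONDS)


# Each phone keeps only a local log of the ephemeral IDs it has heard nearby.
observed_ids = set()


def record_observation(heard_id):
    observed_ids.add(heard_id)


def exposure_check(published_daily_keys, epochs):
    """If a user tests positive, their daily keys (not their identity) are published;
    other phones re-derive the matching ephemeral IDs and check them locally."""
    return any(
        ephemeral_id(key, epoch) in observed_ids
        for key in published_daily_keys
        for epoch in epochs
    )
```

In a design of this kind, raw personal data never needs to leave the device, which is consistent with the toolbox requirements that apps be privacy-preserving and easy to dismantle once no longer needed.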

Work will continue urgently to develop further and implement the toolbox, as set out in the Commission Recommendation of 8 April, including addressing other types of apps and the use of mobility data for modelling to understand the spread of the disease and exit from the crisis….(More)”.

Embracing digital government during the pandemic and beyond


UN DESA Policy Brief: “…Involving civil society organizations, businesses, social entrepreneurs and the general public in managing the COVID-19 pandemic and its aftermath can prove to be highly effective for policy- and decision-makers. Online engagement initiatives led by governments can help people cope with the crisis as well as improve government operations. In a crisis situation, it becomes more important than ever to reach out to vulnerable groups in society, respond to their needs and ensure social stability. Engaging with civil society allows governments to tackle socio-economic challenges in a more productive way that leaves no one behind….

Since the crisis has put public services under stress, governments are urged to deploy effective digital technologies to contain the outbreak. Most innovative quick-to-market solutions have stemmed from the private sector. However, the crisis has exposed the need for government leadership in the development and adoption of new technologies such as artificial intelligence (AI) and robotics to ensure an effective provision of public services…

The efforts in developing digital government strategies after the COVID-19 crisis should focus on improving data protection and digital inclusion policies as well as on strengthening the policy and technical capabilities of public institutions. Even though public-private partnerships are essential for implementing innovative technologies, government leadership, strong institutions and effective public policies are crucial to tailor digital solutions to countries’ needs as well as prioritize security, equity and the protection of people’s rights. The COVID-19 pandemic has emphasized the importance of technology, but also the pivotal role of an effective, inclusive and accountable government….(More)”.

How can digital tools support deliberation?


Claudia Chwalisz at the OECD: “As part of our work on Innovative Citizen Participation, we’ve launched a series of articles to open a discussion and gather evidence on the use of digital tools and practices in representative deliberative processes. …The current context is obliging policy makers and practitioners to think outside the box and adapt to the inability to hold physical deliberations. How can digital tools allow planned or ongoing processes like Citizens’ Assemblies to continue, ensuring that policy makers can still garner informed citizen recommendations to inform their decision making? New experiments are getting underway, and the evidence gathered could also be applied to other situations where face-to-face deliberation is not possible or is more difficult, such as international processes or any other circumstance that prevents physical gathering.

This series will cover the core phases that a representative deliberative process should follow, as established in the forthcoming OECD report: learning, deliberation, decision making, and collective recommendations. Due to the different nature of conducting a process online, we will additionally consider a phase required before learning: skills training. The articles will explore the use of digital tools at each phase, covering questions about the appropriate tools, methods, evidence, and limitations.

They will also consider how the use of certain digital tools could enhance good practice principles such as impact, transparency, and evaluation:

  • Impact: Digital tools can help participants and the public to better monitor the status of the proposed recommendations and the impact they had on final decision making. A parallel can be drawn with the extensive use of this methodology by the United Nations for the monitoring and evaluation of the impact of the Sustainable Development Goals (SDGs).
  • Transparency: Digital tools can facilitate transparency across the process. The use of collaborative tools allows for transparency regarding who wrote the final outcome of the process (the ability to trace the contributors to the document and its different versions). Publishing the code and algorithms applied for the random selection (sortition) process, along with the data or statistics used for stratification, could give full transparency on how participants are selected (a minimal sketch of such a stratified draw follows this list).
  • Evaluation: Data collection and analysis can help researchers and policy makers assess the process (e.g. deliberation quality, participant surveys, opinion evolution). Publishing this data in a structured and open format can allow for a broader evaluation and contribute to research. Over the course of the next year, the OECD will be preparing evaluation guidelines in accordance with the good practice principles to enable comparative data analysis.
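
Purely as an illustration of the transparency point above, and not a tool referenced by the OECD, the sketch below shows a stratified random selection (sortition) routine whose code, quota table and random seed could all be published. The field names, strata and quotas are assumptions made for the example.

```python
import random
from collections import defaultdict


def stratified_sortition(candidates, quotas, seed):
    """Draw a panel that matches published quotas for each stratum.

    candidates: list of dicts, e.g. {"id": 17, "gender": "F", "age_band": "30-44"}
    quotas: dict mapping a stratum, e.g. ("F", "30-44"), to its number of seats
    seed: published random seed so the draw can be independently re-run
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for person in candidates:
        by_stratum[(person["gender"], person["age_band"])].append(person)

    panel = []
    for stratum, seats in quotas.items():
        pool = by_stratum.get(stratum, [])
        if len(pool) < seats:
            raise ValueError("Not enough candidates in stratum %s" % (stratum,))
        panel.extend(rng.sample(pool, seats))
    return panel
```

Because the seed, the quota table and the code are all published, any observer can re-run the draw and confirm that the announced panel is the one the procedure produces.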

The series will also consider how the use of emerging technologies and digital tools could complement face-to-face processes, for instance:

  • Artificial intelligence (AI) and text-based technologies (i.e. natural language processing, NLP): Could the use of AI-based tools enrich deliberative processes? For example: mapping opinion clusters, consensus building, and analysis of massive inputs from external participants in the early stage of stakeholder input (see the clustering sketch after this list). Could NLP allow for simultaneous translation into other languages, sentiment analysis, and automated transcription? These possibilities already exist, but raise pertinent questions around reliability and user experience. How could they be connected to human analysis, discussion, and decision making?
  • Virtual/Augmented reality: Could the development of these emerging technologies allow participants to be immersed in virtual environments and thereby simulate face-to-face deliberation or experiences that enable and build empathy with possible futures or others’ lived experiences?…(More)”.
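
As a hedged illustration of the “mapping opinion clusters” idea above, and not a method used or endorsed by the OECD, the sketch below groups free-text contributions by lexical similarity using TF-IDF features and k-means clustering. It assumes scikit-learn is available; the sample comments and the cluster count are invented for the example.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented sample contributions from participants.
comments = [
    "Invest in cycling infrastructure and safer bike lanes",
    "More frequent buses would reduce car dependence",
    "Protect green spaces from new development",
    "Expand the tram network to the suburbs",
    "Plant more trees along main streets",
    "Lower public transport fares for young people",
]

# Turn each comment into a TF-IDF vector, then group similar comments.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, comment in sorted(zip(labels, comments)):
    print(label, comment)
```

In practice such clusters would only be a starting point for human facilitators, who would still need to interpret, label and challenge them.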

Global AI Ethics Consortium


About: “…The newly founded Global AI Ethics Consortium (GAIEC) on Ethics and the Use of Data and Artificial Intelligence in the Fight Against COVID-19 and other Pandemics aims to:

  1. Support immediate needs for expertise related to the COVID-19 crisis and the emerging ethical questions related to the use of AI in managing the pandemic.
  2. Create a repository that includes avenues of communication for sharing and disseminating current research, new research opportunities, and past research findings.
  3. Coordinate internal funding and research initiatives to allow for maximum opportunities to pursue vital research related to health crises and the ethical use of AI.
  4. Discuss research findings and opportunities for new areas of collaboration.

Read the Statement of Purpose and find out more about the Global AI Ethics Consortium and its founding members: Christoph Lütge (TUM Institute for Ethics in Artificial Intelligence, Technical University of Munich), Jean-Gabriel Ganascia (LIP6-CNRS, Sorbonne Université), Mark Findlay (Centre for AI and Data Governance, Law School, Singapore Management University), Ken Ito and Kan Hiroshi Suzuki (The University of Tokyo), Jeannie Marie Paterson (Centre for AI and Digital Ethics, University of Melbourne), Huw Price (Leverhulme Centre for the Future of Intelligence, University of Cambridge), Stefaan G. Verhulst (The GovLab, New York University), Yi Zeng (Research Center for AI Ethics and Safety, Beijing Academy of Artificial Intelligence), and Adrian Weller (The Alan Turing Institute).

If you or your organization is interested in the GAIEC — Global AI Ethics Consortium please contact us at ieai@mcts.tum.de…(More)”.

The Atlas of Inequality and Cuebiq’s Data for Good Initiative


Data Collaborative Case Study by Michelle Winowatan, Andrew Young, and Stefaan Verhulst: “The Atlas of Inequality is a research initiative led by scientists at the MIT Media Lab and Universidad Carlos III de Madrid. It is a project within the larger Human Dynamics research initiative at the MIT Media Lab, which investigates how computational social science can improve society, government, and companies. Using multiple big data sources, MIT Media Lab researchers seek to understand how people move in urban spaces and how that movement influences or is influenced by income. Among the datasets used in this initiative was location data provided by Cuebiq, through its Data for Good initiative. Cuebiq offers location-intelligence services to approved research and nonprofit organizations seeking to address public problems. To date, the Atlas has published maps of inequality in eleven cities in the United States. Through the Atlas, the researchers hope to raise public awareness about segregation of social mobility in United States cities resulting from economic inequality and support evidence-based policymaking to address the issue.

Data Collaborative Model: Based on the typology of data collaborative practice areas developed by The GovLab, the use of Cuebiq’s location data by MIT Media Lab researchers for the Atlas of Inequality initiative is an example of the research and analysis partnership model of data collaboration, specifically a data transfer approach. In this approach, companies provide data to partners for analysis, sometimes under the banner of “data philanthropy.” Access to data remains highly restrictive, with only specific partners able to analyze the assets provided. Approved uses are also determined in a somewhat cooperative manner, often with some agreement outlining how and why parties requesting access to data will put it to use….(More)”.
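
The case study does not reproduce the Atlas’s exact methodology, but the underlying idea, measuring how unevenly a place’s visitors are drawn from different income groups, can be illustrated with a simplified, assumption-laden metric. In the toy sketch below, each visit is labelled with the visitor’s income quartile, and a place scores 0 when visits are spread evenly across the four quartiles and 1 when they all come from a single quartile.

```python
from collections import Counter


def place_inequality(visit_quartiles):
    """Toy income-segregation score for one place.

    visit_quartiles: list of income-quartile labels (1-4), one per visit.
    Returns 0.0 for a perfectly even mix and 1.0 when every visit comes
    from a single quartile. Illustrative only, not the Atlas's metric.
    """
    counts = Counter(visit_quartiles)
    total = sum(counts.values())
    shares = [counts.get(q, 0) / total for q in (1, 2, 3, 4)]
    # Total absolute deviation from an even 25% split, rescaled to [0, 1].
    return sum(abs(share - 0.25) for share in shares) / 1.5


print(place_inequality([1, 2, 3, 4]))  # 0.0 – evenly mixed visitors
print(place_inequality([4, 4, 4, 4]))  # 1.0 – visited by one quartile only
```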

A Data Ecosystem to Defeat COVID-19


Paper by Bapon Fakhruddin: “…A wide range of approaches could be applied to understand transmission, assess outbreaks, communicate risk and assess cascading impacts on essential and other services. Network-based modelling of systems of systems (SoS), mobile technology, frequentist statistics and maximum-likelihood estimation, interactive data visualization, geostatistics, graph theory, Bayesian statistics, mathematical modelling, evidence synthesis approaches and complexity frameworks for analysing systems interactions could all be brought to bear on COVID-19 impacts. Figure 2 shows examples of tools and technologies that could be used to act decisively and early to prevent further spread or quickly suppress transmission of COVID-19, strengthen the resilience of health systems, save lives and provide urgent support to developing countries, businesses and corporations. There is also WHO guidance on ‘Health Emergency and Disaster Risk Management[8]’, the UNDRR-supported ‘Public Health Scorecard Addendum[9]’, and other guidelines (e.g. WHO practical considerations and recommendations for religious leaders and faith-based communities in the context of COVID-19[10]) that could strengthen pandemic response plans. It needs to be ensured that any such use is proportionate, specific and protected, and does not increase risks to civil liberties. It is therefore essential to examine in detail the challenge of maximising data use in emergency situations while ensuring it is task-limited, proportionate and respectful of necessary protections and limitations. This is a complex task, and COVID-19 will provide important test cases. It is also important that data is interpreted accurately; otherwise, misinterpretations could lead each sector down incorrect paths.

Figure 2: Tools to strengthen resilience for COVID-19
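
Among the approaches the paper lists is mathematical modelling of transmission. As a hedged illustration of that category only, and not of anything specified in the paper, here is a minimal sketch of the classic SIR compartmental model; the parameter values are purely illustrative.

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One Euler step of the SIR model.

    s, i, r are population fractions (susceptible, infected, recovered);
    beta is the transmission rate and gamma the recovery rate per day.
    """
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries


# Illustrative run: 1% initially infected, beta = 0.3/day, gamma = 0.1/day (R0 = 3).
s, i, r = 0.99, 0.01, 0.0
for day in range(120):
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1)
print("after 120 days: susceptible=%.2f infected=%.2f recovered=%.2f" % (s, i, r))
```

Even a model this simple underlines the paper’s point about interpretation: small changes in beta or gamma produce very different trajectories, so misreading the data behind those parameters can send decision makers down the wrong path.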

Many countries are still learning how to make use of data for their decision making in this critical time. The COVID-19 pandemic will provide important lessons on the need for cross-domain research and on how, in such emergencies, to balance the use of technological opportunities and data to counter pandemics against fundamental protections….(More)”.