Stefaan Verhulst

Blog by Luciano Floridi: “There is a lot of talk about apps to deal with the pandemic. Some of the best solutions use the Bluetooth connection of mobile phones to detect contact between people and therefore the probability of contagion.

In theory, it’s simple. In practice, it is a minefield of ethical problems, not only technical ones. To understand them, it is useful to distinguish between the validation and the verification of a system. 
The validation of a system answers the question: “are we building the right system?”. The answer is no if the app

  • is illegal;
  • is unnecessary, for example, there are better solutions; 
  • is a disproportionate solution to the problem, for example, there are only a few cases in the country; 
  • goes beyond the purpose for which it was designed, for example, it is used to discriminate against people; 
  • continues to be used even after the end of the emergency.

Assuming the app passes the validation stage, then it needs to be verified.
The verification of a system answers the question: “are we building the system in the right way?”. Here too the difficulties are considerable. I have become increasingly aware of them as I collaborate with two national projects about a coronavirus app, as an advisor on their ethical implications. 
For once, the difficult problem is not privacy. Of course, it is trivially true that there are and there might always be privacy issues. The point is that, in this case, they can be made much less pressing than other issues. However, once (or, if you prefer, even if) privacy is taken care of, other difficulties appear to remain intractable. A Bluetooth-based app can use anonymous data, recorded only on the mobile phone and used exclusively to send alerts in case of contact with infected people. It is not easy, but it is feasible, as demonstrated by the approach adopted by the Pan-European Privacy-Preserving Proximity Tracing initiative (PEPP-PT). The apparently intractable problems are the effectiveness and fairness of the app.
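
As a rough illustration of why the privacy side is tractable, here is a minimal, hypothetical sketch (an editor's illustration in Python; the function names and key schedule are assumptions, not PEPP-PT's actual specification) of a decentralized Bluetooth matching flow in which raw contact data never leaves the phone:

```python
import os
import hashlib

def daily_key() -> bytes:
    """Each phone generates a fresh random key per day; it never leaves the
    device unless the user tests positive and consents to upload it."""
    return os.urandom(32)

def ephemeral_ids(key: bytes, slots: int = 96) -> list:
    """Derive short-lived broadcast identifiers from the daily key (e.g., one
    per 15-minute slot), so that observers cannot link them to a person."""
    return [hashlib.sha256(key + slot.to_bytes(2, "big")).digest()[:16]
            for slot in range(slots)]

# Phones broadcast their current ephemeral ID over Bluetooth and locally
# store the IDs they hear from nearby phones.
heard_ids = set()

def check_exposure(published_keys) -> bool:
    """Only the daily keys of consenting infected users are published. Every
    phone re-derives the corresponding ephemeral IDs and matches them against
    its local log -- the contact graph itself never leaves the device."""
    return any(heard_ids.intersection(ephemeral_ids(k)) for k in published_keys)
```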

To be effective, an app must be adopted by many people. In Britain, I was told that it would be useless if used by fewer than 20% of the population. According to the PEPP-PT, real effectiveness seems to be reached around the threshold of 60% of the whole population. This means that in Italy, for example, the app would have to be consistently and correctly used by between 11m and 33m people, out of a population of 55m. Consider that in 2019 Facebook Messenger was used by 23m Italians. Even the often-mentioned app TraceTogether has been downloaded by an insufficient number of people in Singapore.
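
The quoted range follows directly from those two adoption thresholds applied to the cited population figure, as this quick check shows:

```python
# Back-of-the-envelope check of the figures quoted above.
population = 55_000_000          # Italy, as cited in the excerpt
lower = 0.20 * population        # ~20% floor mentioned for Britain
upper = 0.60 * population        # ~60% threshold suggested by PEPP-PT
print(f"{lower:,.0f} to {upper:,.0f} users")  # 11,000,000 to 33,000,000 users
```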


Given that it is unlikely that the app will be adopted so extensively just voluntarily, out of social responsibility, and that governments are reluctant to make it mandatory (and rightly so, for it would be unfair, see below), it is clear that it will be necessary to encourage its use; but this only shifts the problem…

Therefore, one should avoid the risk of transforming the production of the app into a signalling process. To do so, the verification should not be severed from, but must feed back on, the validation. This means that if the verification fails, so should the validation, and the whole project ought to be reconsidered. It follows that a clear deadline by when (and by whom) the whole project may be assessed (validation + verification) and, if need be, terminated, improved, or simply renewed as it is, is essential. At least this level of transparency and accountability should be in place.

An app will not save us. And the wrong app will be worse than useless, as it will cause ethical problems and potentially exacerbate health-related risks, e.g. by generating a false sense of security, or deepening the digital divide. A good app must be part of a wider strategy, and it needs to be designed to support a fair future. If this is not possible, better do something else, avoid its positive, negative and opportunity costs, and not play the political game of merely signalling that something (indeed anything) has been tried…(More)”.

Mind the app – considerations on the ethical risks of COVID-19 apps

Blog by Andrew J. Zahuranec and Stefaan G. Verhulst: “The novel coronavirus disease (COVID-19) is a global health crisis the likes of which the modern world has never seen. Amid calls to action from the United Nations Secretary-General, the World Health Organization, and many national governments, there has been a proliferation of initiatives using data to address some facet of the pandemic. In March, The GovLab at NYU put out its own call to action, which identifies key steps organizations and decision-makers can take to build the data infrastructure needed to tackle pandemics. This call has been signed by over 400 data leaders from around the world in the public and private sectors and in civil society.

But questions remain as to how many of these initiatives are useful for decision-makers. While The GovLab’s living repository contains over 160 data collaboratives, data competitions, and other innovative work, many of these examples take a data supply-side approach to the COVID-19 response. Given the urgency of the situation, some organizations have created projects that align with the available data instead of trying to understand what insights those responding to the crisis actually want, including on issues that may not be directly related to public health.

We need to identify and ask better questions to use data effectively in the current crisis. Part of that work means understanding what topics can be addressed through enhanced data access and analysis.

Using The GovLab’s rapid-research methodology, we’ve compiled a list of 12 topic areas related to COVID-19 where data and analysis are needed. …(More)”.

Mapping how data can help address COVID-19

Julia Stoyanovich, Jay J. Van Bavel & Tessa V. West at Nature: “As artificial intelligence becomes prevalent in society, a framework is needed to connect interpretability and trust in algorithm-assisted decisions, for a range of stakeholders.

We are in the midst of a global trend to regulate the use of algorithms, artificial intelligence (AI) and automated decision systems (ADS). As reported by the One Hundred Year Study on Artificial Intelligence: “AI technologies already pervade our lives. As they become a central force in society, the field is shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.” Major cities, states and national governments are establishing task forces, passing laws and issuing guidelines about responsible development and use of technology, often starting with its use in government itself, where there is, at least in theory, less friction between organizational goals and societal values.

In the United States, New York City has made a public commitment to opening the black box of the government’s use of technology: in 2018, an ADS task force was convened, the first of its kind in the nation, and charged with providing recommendations to New York City’s government agencies for how to become transparent and accountable in their use of ADS. In a 2019 report, the task force recommended using ADS where they are beneficial, reduce potential harm and promote fairness, equity, accountability and transparency [2]. Can these principles become policy in the face of the apparent lack of trust in the government’s ability to manage AI in the interest of the public? We argue that overcoming this mistrust hinges on our ability to engage in substantive multi-stakeholder conversations around ADS, bringing with it the imperative of interpretability — allowing humans to understand and, if necessary, contest the computational process and its outcomes.

Remarkably little is known about how humans perceive and evaluate algorithms and their outputs, what makes a human trust or mistrust an algorithm [3], and how we can empower humans to exercise agency — to adopt or challenge an algorithmic decision. Consider, for example, scoring and ranking — data-driven algorithms that prioritize entities such as individuals, schools, or products and services. These algorithms may be used to determine creditworthiness and desirability for college admissions or employment. Scoring and ranking are as ubiquitous and powerful as they are opaque. Despite their importance, members of the public often know little about why one person is ranked higher than another by a résumé-screening or credit-scoring tool, how the ranking process is designed and whether its results can be trusted.
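
To make the opacity concern concrete, consider a toy scoring model (an editor's sketch in Python, not the authors' framework; weights and features are invented): even a simple weighted score stays opaque to the people it ranks unless its per-feature contributions are surfaced as explanations.

```python
# Toy résumé-screening score with per-feature explanations (illustrative only).
WEIGHTS = {"years_experience": 0.5, "degree_level": 0.3, "skills_match": 0.2}

def score(candidate: dict) -> float:
    """Total score used to rank candidates."""
    return sum(WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS)

def explain(candidate: dict) -> dict:
    """Per-feature contribution to the total -- one simple form of an
    actionable explanation for a ranked individual."""
    return {f: round(WEIGHTS[f] * candidate.get(f, 0.0), 2) for f in WEIGHTS}

candidates = {
    "alice": {"years_experience": 4, "degree_level": 2, "skills_match": 0.9},
    "bob": {"years_experience": 6, "degree_level": 1, "skills_match": 0.4},
}
for name in sorted(candidates, key=lambda n: score(candidates[n]), reverse=True):
    print(name, round(score(candidates[name]), 2), explain(candidates[name]))
```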

As an interdisciplinary team of scientists in computer science and social psychology, we propose a framework that forms connections between interpretability and trust, and develops actionable explanations for a diversity of stakeholders, recognizing their unique perspectives and needs. We focus on three questions (Box 1) about making machines interpretable: (1) what are we explaining, (2) to whom are we explaining and for what purpose, and (3) how do we know that an explanation is effective? By asking — and charting the path towards answering — these questions, we can promote greater trust in algorithms, and improve fairness and efficiency of algorithm-assisted decision making…(More)”.

The imperative of interpretable machines

Common EU Toolbox for Member States by eHealth Network: “Mobile apps have the potential to bolster contact-tracing strategies to contain and reverse the spread of COVID-19. EU Member States are converging towards effective app solutions that minimise the processing of personal data, and recognise that interoperability between these apps can support public health authorities and support the reopening of the EU’s internal borders.

This first iteration of a common EU toolbox, developed urgently and collaboratively by the eHealth Network with the support of the European Commission, provides a practical guide for Member States. The common approach aims to exploit the latest privacy-enhancing technological solutions that enable at-risk individuals to be contacted and, if necessary, to be tested as quickly as possible, regardless of where they are and which app they are using. It explains the essential requirements for national apps, namely that they be:

  • voluntary;
  • approved by the national health authority;
  • privacy-preserving – personal data is securely encrypted; and
  • dismantled as soon as no longer needed.

The added value of these apps is that they can record contacts that a person may not notice or remember. The requirements on how to record contacts and notify individuals are anchored in accepted epidemiological guidance and reflect best practice on cybersecurity and accessibility. They also cover how to prevent the appearance of potentially harmful unapproved apps, the success criteria for collectively monitoring the effectiveness of the apps, and the outline of a communications strategy to engage with stakeholders and the people affected by these initiatives.

Work will continue urgently to develop further and implement the toolbox, as set out in the Commission Recommendation of 8 April, including addressing other types of apps and the use of mobility data for modelling to understand the spread of the disease and exit from the crisis….(More)”.

Mobile applications to support contact tracing in the EU’s fight against COVID-19

UN DESA Policy Brief: “…Involving civil society organizations, businesses, social entrepreneurs and the general public in managing the COVID-19 pandemic and its aftermath can prove to be highly effective for policy- and decision-makers. Online engagement initiatives led by governments can help people cope with the crisis as well as improve government operations. In a crisis situation, it becomes more important than ever to reach out to vulnerable groups in society, respond to their needs and ensure social stability. Engaging with civil society allows governments to tackle socio-economic challenges in a more productive way that leaves no one behind….

Since the crisis has put public services under stress, governments are urged to deploy effective digital technologies to contain the outbreak. Most innovative quick-to-market solutions have stemmed from the private sector. However, the crisis has exposed the need for government leadership in the development and adoption of new technologies such as artificial intelligence (AI) and robotics to ensure an effective provision of public services…

The efforts in developing digital government strategies after the COVID-19 crisis should focus on improving data protection and digital inclusion policies as well as on strengthening the policy and technical capabilities of public institutions. Even though public-private partnerships are essential for implementing innovative technologies, government leadership, strong institutions and effective public policies are crucial to tailor digital solutions to countries’ needs as well as prioritize security, equity and the protection of people’s rights. The COVID-19 pandemic has emphasized the importance of technology, but also the pivotal role of an effective, inclusive and accountable government….(More)”.

Embracing digital government during the pandemic and beyond

Claudia Chwalisz at the OECD: “As part of our work on Innovative Citizen Participation, we’ve launched a series of articles to open a discussion and gather evidence on the use of digital tools and practices in representative deliberative processes…. The current context is obliging policy makers and practitioners to think outside the box and adapt to the impossibility of physical deliberation. How can digital tools allow planned or ongoing processes like Citizens’ Assemblies to continue, ensuring that policy makers can still garner informed citizen recommendations to inform their decision making? New experiments are getting underway, and the evidence gathered could also be applied to other situations when face-to-face is not possible or more difficult, such as international processes or any situation that prevents physical gathering.

This series will cover the core phases that a representative deliberative process should follow, as established in the forthcoming OECD report: learning, deliberation, decision making, and collective recommendations. Due to the different nature of conducting a process online, we will additionally consider a phase required before learning: skills training. The articles will explore the use of digital tools at each phase, covering questions about the appropriate tools, methods, evidence, and limitations.

They will also consider how the use of certain digital tools could enhance good practice principles such as impact, transparency, and evaluation:

  • Impact: Digital tools can help participants and the public to better monitor the status of the proposed recommendations and the impact they had on final decision-making. A parallel can be drawn with the extensive use of this methodology by the United Nations for the monitoring and evaluation of the impact of the Sustainable Development Goals (SDGs).
  • Transparency: Digital tools can facilitate transparency across the process. The use of collaborative tools allows for transparency regarding who wrote the final outcome of the process (the ability to trace the contributors to the document and its different versions). Publishing the code and algorithms applied for the random selection (sortition) process, together with the data or statistics used for stratification, could give full transparency on how participants are selected (a minimal illustration of such publishable sortition code follows this list).
  • Evaluation: Data collection and analysis can help researchers and policy makers assess the process (e.g., deliberation quality, participant surveys, opinion evolution). Publishing this data in a structured and open format can allow for a broader evaluation and contribute to research. Over the course of the next year, the OECD will be preparing evaluation guidelines in accordance with the good practice principles to enable comparative data analysis.
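
As a hypothetical illustration of the transparency point above, this is the kind of sortition code a process could publish alongside its stratification data (an editor's sketch; the field names, quota structure and seed are assumptions):

```python
import random
from collections import defaultdict

def stratified_sortition(pool, quotas, seed=2020):
    """Stratified random selection from a pool of volunteers.
    pool: list of dicts, each with a 'stratum' key (e.g., age band x region).
    quotas: desired number of participants per stratum.
    A fixed, published seed makes the draw reproducible and auditable."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for person in pool:
        by_stratum[person["stratum"]].append(person)
    selected = []
    for stratum, n in quotas.items():
        candidates = by_stratum.get(stratum, [])
        selected.extend(rng.sample(candidates, min(n, len(candidates))))
    return selected

# Example: draw two people from each of two published strata.
pool = [{"name": f"p{i}", "stratum": "18-29/north" if i % 2 else "60+/south"}
        for i in range(10)]
print(stratified_sortition(pool, {"18-29/north": 2, "60+/south": 2}))
```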

The series will also consider how the use of emerging technologies and digital tools could complement face-to-face processes, for instance:

  • Artificial intelligence (AI) and text-based technologies (i.e. natural language processing, NLP): Could the use of AI-based tools enrich deliberative processes? For example: mapping opinion clusters, consensus building, and analysis of massive inputs from external participants in the early stages of stakeholder input (a toy clustering sketch follows this list). Could NLP allow for simultaneous translation into other languages, sentiment analysis, and automated transcription? These possibilities already exist, but raise pertinent questions around reliability and user experience. How could they be connected to human analysis, discussion, and decision making?
  • Virtual/Augmented reality: Could the development of these emerging technologies allow participants to be immersed in virtual environments and thereby simulate face-to-face deliberation or experiences that enable and build empathy with possible futures or others’ lived experiences?…(More)”.
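
As a toy illustration of the opinion-cluster mapping mentioned above (an editor's sketch using scikit-learn; the comments and cluster count are invented, and real deliberative inputs would need far more careful modelling and human interpretation):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "Invest in cycling infrastructure and safe bike lanes",
    "More frequent buses would reduce car dependency",
    "Protect green spaces from new development",
    "Bike lanes should connect the suburbs to the centre",
    "City parks need better maintenance and funding",
]

# Represent each comment as a TF-IDF vector, then group similar comments.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, comment in sorted(zip(labels, comments)):
    print(label, comment)  # human facilitators still interpret each cluster
```
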
How can digital tools support deliberation?

About: “…The newly founded Global AI Ethics Consortium (GAIEC) on Ethics and the Use of Data and Artificial Intelligence in the Fight Against COVID-19 and other Pandemics aims to:

  1. Support immediate needs for expertise related to the COVID-19 crisis and the emerging ethical questions related to the use of AI in managing the pandemic.
  2. Create a repository that includes avenues of communication for sharing and disseminating current research, new research opportunities, and past research findings.
  3. Coordinate internal funding and research initiatives to allow for maximum opportunities to pursue vital research related to health crises and the ethical use of AI.
  4. Discuss research findings and opportunities for new areas of collaboration.

Read the Statement of Purpose and find out more about the Global AI Ethics Consortium and its founding members: Christoph Lütge (TUM Institute for Ethics in Artificial Intelligence, Technical University of Munich), Jean-Gabriel Ganascia (LIP6-CNRS, Sorbonne Université), Mark Findlay (Centre for AI and Data Governance, Law School, Singapore Management University), Ken Ito and Kan Hiroshi Suzuki (The University of Tokyo), Jeannie Marie Paterson (Centre for AI and Digital Ethics, University of Melbourne), Huw Price (Leverhulme Centre for the Future of Intelligence, University of Cambridge), Stefaan G. Verhulst (The GovLab, New York University), Yi Zeng (Research Center for AI Ethics and Safety, Beijing Academy of Artificial Intelligence), and Adrian Weller (The Alan Turing Institute).

If you or your organization is interested in the GAIEC — Global AI Ethics Consortium please contact us at ieai@mcts.tum.de…(More)”.

Global AI Ethics Consortium

Data Collaborative Case Study by Michelle Winowatan, Andrew Young, and Stefaan Verhulst: “The Atlas of Inequality is a research initiative led by scientists at the MIT Media Lab and Universidad Carlos III de Madrid. It is a project within the larger Human Dynamics research initiative at the MIT Media Lab, which investigates how computational social science can improve society, government, and companies. Using multiple big data sources, MIT Media Lab researchers seek to understand how people move in urban spaces and how that movement influences or is influenced by income. Among the datasets used in this initiative was location data provided by Cuebiq, through its Data for Good initiative. Cuebiq offers location-intelligence services to approved research and nonprofit organizations seeking to address public problems. To date, the Atlas has published maps of inequality in eleven cities in the United States. Through the Atlas, the researchers hope to raise public awareness about segregation of social mobility in United States cities resulting from economic inequality and support evidence-based policymaking to address the issue.

Data Collaborative Model: Based on the typology of data collaborative practice areas developed by The GovLab, the use of Cuebiq’s location data by MIT Media Lab researchers for the Atlas of Inequality initiative is an example of the research and analysis partnership model of data collaboration, specifically a data transfer approach. In this approach, companies provide data to partners for analysis, sometimes under the banner of “data philanthropy.” Access to data remains highly restrictive, with only specific partners able to analyze the assets provided. Approved uses are also determined in a somewhat cooperative manner, often with some agreement outlining how and why parties requesting access to data will put it to use….(More)”.

The Atlas of Inequality and Cuebiq’s Data for Good Initiative

MIT Sloan: “On February 19 in the Ukrainian town of Novi Sanzhary, alarm went up regarding the new coronavirus and COVID-19, the disease it causes. “50 infected people from China are being brought to our sanitarium,” began a widely read post on the messaging app Viber. “We can’t afford to let them destroy our population, we must prevent countless deaths. People, rise up. We all have children!!!”

Soon after came another message: “if we sleep this night, then we will wake up dead.”

Citizens mobilized. Roads were barricaded. Tensions escalated. Riots broke out, ultimately injuring nine police officers and leading to the arrests of 24 people. Later, word emerged that the news was false.

As the director-general of the World Health Organization recently put it, “we’re not just fighting an epidemic; we’re fighting an infodemic.”

Now a new study suggests that an “accuracy nudge” from social media networks could curtail the spread of misinformation about COVID-19. The working paper, from researchers at MIT Sloan and the University of Regina, examines how and why misinformation about COVID-19 spreads on social media. The researchers also examine a simple intervention that could slow this spread. (The paper builds on prior work about how misinformation diffuses online.)…(More)”.

‘Accuracy nudge’ could curtail COVID-19 misinformation online

Paper by Bapon Fakhruddin: “…A wide range of approaches could be applied to understand transmission, outbreak assessment, risk communication, and the cascading impacts on essential and other services. Network-based modelling of a System of Systems (SOS), mobile technology, frequentist statistics and maximum-likelihood estimation, interactive data visualization, geostatistics, graph theory, Bayesian statistics, mathematical modelling, evidence-synthesis approaches and complex-thinking frameworks for systems interactions could all be utilized to assess COVID-19 impacts. Examples of tools and technologies that could be utilized to act decisively and early to prevent further spread or quickly suppress transmission of COVID-19, strengthen the resilience of health systems, save lives, and provide urgent support to developing countries together with businesses and corporations are shown in Figure 2. There is also WHO guidance on ‘Health Emergency and Disaster Risk Management[8]’, the UNDRR-supported ‘Public Health Scorecard Addendum[9]’, and other guidelines (e.g. WHO practical considerations and recommendations for religious leaders and faith-based communities in the context of COVID-19[10]) that could enhance pandemic response plans. It needs to be ensured that any such use is proportionate, specific and protected, and does not increase risks to civil liberties. It is essential, therefore, to examine in detail the challenge of maximising data use in emergency situations while ensuring it is task-limited, proportionate and respectful of necessary protections and limitations. This is a complex task, and COVID-19 will provide us with important test cases. It is also important that data is interpreted accurately; otherwise, misinterpretations could lead each sector down incorrect paths.
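
As a minimal illustration of the simplest of the modelling approaches listed above, here is a discrete-time SIR compartmental model (an editor's sketch; the parameter values are illustrative assumptions, not calibrated COVID-19 estimates):

```python
def sir(population, infected0, beta, gamma, days):
    """beta: transmission rate per day; gamma: recovery rate per day.
    Returns daily (susceptible, infected, recovered) counts."""
    s, i, r = population - infected0, infected0, 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Example: R0 = beta/gamma = 2.5 with a 10-day infectious period.
peak = max(sir(1_000_000, 10, beta=0.25, gamma=0.1, days=365),
           key=lambda day: day[1])
print(f"Peak simultaneously infected: {peak[1]:,.0f}")
```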

Figure 2: Tools to strengthen resilience for COVID-19

Many countries are still learning how to make use of data for their decision making in this critical time. The COVID-19 pandemic will provide important lessons on the need for cross-domain research and on how, in such emergencies, to balance the use of technological opportunities and data to counter pandemics against fundamental protections….(More)”.

A Data Ecosystem to Defeat COVID-19
