Blog by Luciano Floridi: “There is a lot of talk about apps to deal with the pandemic. Some of the best solutions use the Bluetooth connection of mobile phones to detect contacts between people and hence estimate the probability of contagion.
In theory, it’s simple. In practice, it is a minefield of ethical problems, not only technical ones. To understand them, it is useful to distinguish between the validation and the verification of a system.
The validation of a system answers the question: “are we building the right system?”. The answer is no if the app
- is illegal;
- is unnecessary, for example, there are better solutions;
- is a disproportionate solution to the problem, for example, there are only a few cases in the country;
- goes beyond the purpose for which it was designed, for example, it is used to discriminate against people;
- continues to be used even after the end of the emergency.
Assuming the app passes the validation stage, then it needs to be verified.
The verification of a system answers the question: “are we building the system in the right way?”. Here too the difficulties are considerable. I have become increasingly aware of them as I collaborate with two national projects about a coronavirus app, as an advisor on their ethical implications.
For once, the difficult problem is not privacy. Of course, it is trivially true that there are, and there may always be, privacy issues. The point is that, in this case, they can be made much less pressing than other issues. However, once (or if you prefer, even if) privacy is taken care of, other difficulties appear to remain intractable. A Bluetooth-based app can use anonymous data, recorded only on the mobile phone itself and used exclusively to send alerts in case of contact with infected people. It is not easy, but it is feasible, as demonstrated by the approach adopted by the Pan-European Privacy-Preserving Proximity Tracing initiative (PEPP-PT). The apparently intractable problems are the effectiveness and fairness of the app.
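The privacy-preserving design described here can be made concrete with a minimal sketch. This is a hypothetical illustration of a decentralized, PEPP-PT-style flow, not the actual protocol: all function names and parameters (key sizes, interval encoding) are assumptions for exposition. The essential properties are that phones broadcast only short-lived, unlinkable identifiers, store what they hear locally, and perform exposure matching on the device rather than on a server.

```python
import hashlib
import os

# Hypothetical sketch of a decentralized proximity-tracing flow in the
# spirit of PEPP-PT. Names, key sizes, and derivation details are
# illustrative assumptions, not the real specification.

def daily_key() -> bytes:
    """A fresh random key generated on the phone each day; it leaves the
    device only if the user tests positive and consents to publication."""
    return os.urandom(32)

def ephemeral_id(key: bytes, interval: int) -> bytes:
    """Short-lived identifier broadcast over Bluetooth, derived from the
    daily key so that observers cannot link broadcasts to a person."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

# Each phone keeps only the ephemeral IDs it has overheard, locally.
heard: set[bytes] = set()

def record_contact(eid: bytes) -> None:
    """Store an overheard ephemeral ID on the device; nothing is uploaded."""
    heard.add(eid)

def check_exposure(published_keys, intervals) -> bool:
    """When an infected user consents, their daily keys are published;
    every phone re-derives the corresponding ephemeral IDs and checks
    for matches against its own local contact log."""
    return any(
        ephemeral_id(key, i) in heard
        for key in published_keys
        for i in intervals
    )
```

Because matching happens on the phone against anonymous identifiers, the server never learns who met whom, which is what allows the privacy issue to be "made much less pressing" than the effectiveness and fairness problems discussed next.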
To be effective, an app must be adopted by many people. In Britain, I was told that it would be useless if used by less than 20% of the population. According to the PEPP-PT, real effectiveness seems to be reached around the threshold of 60% of the whole population. This means that in Italy, for example, the app should be consistently and correctly used by between 11m and 33m people, out of a population of 55m. Consider that in 2019 Facebook Messenger was used by 23m Italians. Even the often-mentioned app TraceTogether has been downloaded by an insufficient number of people in Singapore.
Given that it is unlikely that the app will be adopted so extensively on a purely voluntary basis, out of social responsibility, and that governments are reluctant to make it mandatory (and rightly so, for it would be unfair, see below), it is clear that it will be necessary to encourage its use, but this only shifts the problem…
Therefore, one should avoid the risk of transforming the production of the app into a signalling process. To do so, the verification should not be severed from, but must feed back on, the validation. This means that if the verification fails, so should the validation, and the whole project ought to be reconsidered. It follows that a clear deadline by when (and by whom) the whole project may be assessed (validation + verification) and, if need be, terminated, improved, or even simply renewed as it is, is essential. At least this level of transparency and accountability should be in place.
An app will not save us. And the wrong app will be worse than useless, as it will cause ethical problems and potentially exacerbate health-related risks, e.g. by generating a false sense of security, or deepening the digital divide. A good app must be part of a wider strategy, and it needs to be designed to support a fair future. If this is not possible, better do something else, avoid its positive, negative and opportunity costs, and not play the political game of merely signalling that something (indeed anything) has been tried…”.