It’s complicated: what the public thinks about COVID-19 technologies


Imogen Parker at Ada Lovelace Institute: “…Tools of this societal importance need to be shaped by the public. Given the technicality and complexity, that means going beyond surface-level opinions captured through polling and focus groups and creating structures to deliberate with groups of informed citizens. That’s hard to do well, and at the pace needed to keep up with policy and technology, but difficult problems are the ones that most need to be solved.

To help bring much-needed public voices into this debate at pace, we have drawn out emergent themes from three recent in-depth public deliberation projects that can bring insight to bear on the questions of health apps and public health identity systems.

While there are no green lights, red lines – or indeed silver bullets – there are important nuances and strongly held views about the conditions that COVID-19 technologies would need to meet. The report goes into detailed lessons from the public, and I would like to add to those by drawing out here the aspects that are consistently under-addressed in the discussions I’ve heard about these tools in technology and policy circles.

  1. Trust isn’t just about data or privacy. The technology must be effective – and be seen to be effective. Too often, debates about public acceptability lapse into flawed and tired arguments pitting privacy against public health, or confuse citizens’ trust in a technology with reassurances about data protection or security frameworks against malicious actors. First and foremost, people need to trust that the technology works – that it can solve a problem, that it won’t fail, and that it can be relied on. The public discussion must be about the outcome of the technology – not just its function. This is particularly vital in the context of public health, which affects everyone in society.
  2. Any application linked to identity is seen as high-stakes. Identity matters and is complex – and there is anxiety about the creation of technological systems that put people in pre-defined boxes or establish static categories as the primary mechanisms by which they are known, recognised and seen. Proportionality (while not expressed as such) runs deep in public consciousness, and any intrusion will require justification, not simply a rallying call for people to do their duty.
  3. Tools must proactively protect against harm. Mechanisms for challenge or redress need to be built around the app – and indeed be seen as part of the technology. This means that legitimate fears that discrimination or prejudice will arise must be addressed head on, and lower uptake from potentially disadvantaged groups that may legitimately mistrust surveillance systems must be acknowledged and mitigated.
  4. Apps will be judged as part of the system they are embedded into. The whole system must be trustworthy, not just the app or technology – and that encompasses those who develop and deploy it and those who will use it out in the world. An app – however technically perfect – can still be misused by rogue employers, or mistrusted through fear of government overreach or scope creep.
  5. Tools are seen by the public as political and social. Technology developers need to understand that they are shifting the socio-political fabric of society during a crisis, and potentially beyond. Tech cannot be decoupled or isolated from questions about the nature of the society it will shape – solidaristic or individualistic; divisive or inclusive….(More)”.