
Stefaan Verhulst

Paper by Tamar Sharon: “Since the outbreak of COVID-19, governments have turned their attention to digital contact tracing. In many countries, public debate has focused on the risks this technology poses to privacy, with advocates and experts sounding alarm bells about surveillance and mission creep reminiscent of the post 9/11 era. Yet, when Apple and Google launched their contact tracing API in April 2020, some of the world’s leading privacy experts applauded this initiative for its privacy-preserving technical specifications. In an interesting twist, the tech giants came to be portrayed as greater champions of privacy than some democratic governments.

This article proposes to view the Apple/Google API in terms of a broader phenomenon whereby tech corporations are encroaching into ever new spheres of social life. From this perspective, the (legitimate) advantage these actors have accrued in the sphere of the production of digital goods provides them with (illegitimate) access to the spheres of health and medicine, and more worrisome, to the sphere of politics. These sphere transgressions raise numerous risks that are not captured by the focus on privacy harms. Namely, a crowding out of essential spherical expertise, new dependencies on corporate actors for the delivery of essential public goods, the shaping of (global) public policy by non-representative, private actors and ultimately, the accumulation of decision-making power across multiple spheres. While privacy is certainly an important value, its centrality in the debate on digital contact tracing may blind us to these broader societal harms and unwittingly pave the way for ever more sphere transgressions….(More)”.

Blind-sided by privacy? Digital contact tracing, the Apple/Google API and big tech’s newfound role as global health policy makers

Essay by Martin Tisné: “On March 17, 2018, questions about data privacy exploded with the scandal of the previously unknown consulting company Cambridge Analytica. Lawmakers are still grappling with updating laws to counter the harms of big data and AI. In the spring of 2020, the Covid-19 pandemic brought questions about sufficient legal protections back to the public debate, with urgent warnings about the privacy implications of contact tracing apps. But the surveillance consequences of the pandemic’s aftermath are much bigger than any app: transport, education, health systems and offices are being turned into vast surveillance networks. If we only consider individual trade-offs between privacy sacrifices and alleged health benefits, we will miss the point. The collective nature of big data means people are more impacted by other people’s data than by data about them. Like climate change, the threat is societal and personal.

In the era of big data and AI, people can suffer because of how the sum of individual data is analysed and sorted into groups by algorithms. Novel forms of collective data-driven harms are appearing as a result: online housing, job and credit ads discriminating on the basis of race and gender, women disqualified from jobs on the basis of gender, and foreign actors targeting right-leaning groups and pulling them to the far-right. Our public debate, governments, and laws are ill-equipped to deal with these collective, as opposed to individual, harms….(More)”.

The Data Delusion: Protecting Individual Data is Not Enough When the Harm is Collective

Paper by Luc Soete: “…But over time, as the scientific commentary on TV and radio in my two home countries, the Netherlands and Belgium, as well as in neighbouring Germany and France, became dominated by each country’s own national virology and epidemiology experts explaining why their country’s approach to ‘flattening the curve’ and bringing down the reproduction rate was best, it became clear, even to a non-expert in the field like myself, that many of the science-based policies used to contain COVID-19 were first and foremost based on ‘hypotheses’ and, with the exception of Germany, not really on facts. As Anthony Fauci, Director of the US National Institute of Allergy and Infectious Diseases and probably the world’s most respected virologist, once put it: “Data is real. The model is hypothesis.”

So at the risk of being an ultracrepidarian – an old word which has suddenly risen in popularity – it seemed appropriate to take a closer, more critical look at the science-based policy advice given during this COVID-19 pandemic. For virologists and epidemiologists, the logical approach to a new, unknown but highly infectious virus such as SARS-CoV-2, spreading globally at pandemic speed, is ‘the hammer’: quickly and radically crushing the spread of the virus through extreme measures (social distancing, confinement, lockdown, travel restrictions) and pushing the transmission rate as far below one as possible. The stricter the confinement measures, the better.

For a social scientist or social science-based policy adviser, a hammer is anything but a useful tool with which to approach society or the economy. Her or his preference will rather go to measures such as ‘nudges’, which alter people’s behaviour in a predictable way without coercion. The first COVID-19 measure was in fact a typical ‘nudge’: improving hand hygiene among healthcare workers, now extended to the whole population. ‘Nudging’ in the face of a new virus such as SARS-CoV-2 consists of making sure that incremental policy measures build up to a societal behavioural change, starting from hand hygiene and social distancing through to confinement and various forms of lockdown.

It is crucial to measure the additional, marginal impact of each measure – its contribution to the overall reduction in the transmission of the virus. Introducing all measures at once, as in the ‘hammer’ strategy, provides little useful information on the effectiveness of each individual measure (on the contrary, in fact). In a period of deconfinement, one then has little information on which measures are likely to be the most effective. From a nudge perspective, achieving a change in social behaviour with respect to physical distancing (the so-called one-and-a-half-metre society) will be an essential variable, and measuring its impact on the spread of the virus crucial. One reason is that full, voluntary adoption of such physical distancing will prevent larger and smaller social gatherings without authorities having to specify the rules. This is implicit in the principle of nudging: it will be the providers, the entrepreneurs of personal service sectors, who will have to come up with organisational innovations enabling physical distancing in the safe delivery of such services.
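Soete’s identification argument can be sketched with a toy calculation (all numbers invented for illustration): if each measure multiplies the reproduction number R by some unknown factor, a staggered rollout lets each factor be recovered from successive observations of R, whereas the simultaneous ‘hammer’ reveals only their combined product.

```python
# Illustrative sketch with invented reduction factors: why staggered
# measures are more informative than a simultaneous "hammer".

R0 = 3.0
true_factors = {"hand hygiene": 0.9, "distancing": 0.6, "lockdown": 0.5}

# Staggered rollout: observe R after each new measure is introduced.
observed = [R0]
for f in true_factors.values():
    observed.append(observed[-1] * f)

# Each measure's factor is identified as the ratio of successive observations.
recovered = {m: observed[i + 1] / observed[i]
             for i, m in enumerate(true_factors)}
assert all(abs(recovered[m] - true_factors[m]) < 1e-9 for m in true_factors)

# Simultaneous rollout: only the combined effect on R is observable,
# so the individual factors cannot be disentangled afterwards.
combined = R0
for f in true_factors.values():
    combined *= f
print(round(combined, 3))  # 0.81: the product alone
```

The same logic explains why deconfinement is hard after a hammer strategy: lifting one measure means guessing at a factor that was never separately observed.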

Most noteworthy, however, is the purely national setting within which most virology and epidemiological science-based policy advice is currently framed. This contrasts sharply with actual scientific research in the field, which is today thoroughly global, based on shared data and open access. For years now, epidemiological studies have taken individual countries as ‘containers’ for data collection and analysis. It is also the national setting that provides the framework for estimating the capacity of medical facilities, especially the total number of intensive care units available to handle COVID-19 patients in each country. In Europe, as a result, this has led to the reintroduction of internal borders that had ‘disappeared’ 25 years ago, out of fear of cross-border contamination. In doing so, COVID-19 has undermined the notion of European values. This policy brief is my attempt to clarify the situation….(More)”.

Hammer or nudge? New brief on international policy options for COVID-19

EU Science Hub: “This work introduces the concept of data-driven Mobility Functional Areas (MFAs) as geographic zones with a high degree of intra-mobility exchanges. Such information, calculated at European regional scale thanks to mobile data, can be useful to inform targeted re-escalation policy responses in cases of future COVID-19 outbreaks (avoiding large-area or even national lockdowns). In such events, the geographic distribution of MFAs would define territorial areas to which lockdown interventions could be limited, with the result of minimizing socio-economic consequences of such policies. The analysis of the time evolution of MFAs can also be thought of as a measure of how human mobility changes not only in intensity but also in patterns, providing innovative insights into the impact of mobility containment measures. This work presents a first analysis for 15 European countries (14 EU Member States and Norway)….(More)”.
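As a rough illustration of the underlying idea (not the JRC’s actual method, and with invented region names and flows), MFAs can be approximated by clustering an origin–destination trip matrix so that regions exchanging a large share of their trips end up in the same area:

```python
# Hypothetical sketch: grouping regions into Mobility Functional Areas
# by merging regions whose mutual trip share exceeds a threshold.

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def mobility_functional_areas(flows, threshold=0.3):
    """Cluster regions so intra-area exchanges dominate.

    flows: dict mapping (origin, destination) -> trip count.
    Returns a list of frozensets, one per MFA.
    """
    regions = sorted({r for pair in flows for r in pair})
    total_out = {r: sum(v for (o, d), v in flows.items() if o == r) or 1
                 for r in regions}
    parent = {r: r for r in regions}
    for (o, d), v in flows.items():
        if o == d:
            continue
        # Share of each region's outgoing trips absorbed by the other.
        share = max(v / total_out[o], flows.get((d, o), 0) / total_out[d])
        if share >= threshold:
            parent[find(parent, o)] = find(parent, d)
    groups = {}
    for r in regions:
        groups.setdefault(find(parent, r), set()).add(r)
    return [frozenset(g) for g in groups.values()]

# Toy example: two commuting basins joined by a weak inter-basin flow.
flows = {
    ("A", "B"): 900, ("B", "A"): 800,   # strong A<->B exchange
    ("C", "D"): 700, ("D", "C"): 750,   # strong C<->D exchange
    ("B", "C"): 50,                      # weak link, kept separate
}
mfas = mobility_functional_areas(flows)
print(sorted(sorted(g) for g in mfas))  # [['A', 'B'], ['C', 'D']]
```

Under such a scheme, a localised outbreak in the A–B area would motivate restrictions there alone; the threshold governs how coarse or fine the resulting areas are.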

Mapping Mobility Functional Areas (MFA) using Mobile Positioning Data to Inform COVID-19 Policies

Book by Darrell M. West and John R. Allen: “Until recently, “artificial intelligence” sounded like something out of science fiction. But the technology of artificial intelligence, AI, is becoming increasingly common, from self-driving cars to e-commerce algorithms that seem to know what you want to buy before you do. Throughout the economy and many aspects of daily life, artificial intelligence has become the transformative technology of our time.

Despite its current and potential benefits, AI is little understood by the larger public and widely feared. The rapid growth of artificial intelligence has given rise to concerns that hidden technology will create a dystopian world of increased income inequality, a total lack of privacy, and perhaps a broad threat to humanity itself.

In their compelling and readable book, two experts at Brookings discuss both the opportunities and risks posed by artificial intelligence—and how near-term policy decisions could determine whether the technology leads to utopia or dystopia.

Drawing on in-depth studies of major uses of AI, the authors detail how the technology actually works. They outline a policy and governance blueprint for gaining the benefits of artificial intelligence while minimizing its potential downsides.

The book offers major recommendations for actions that governments, businesses, and individuals can take to promote trustworthy and responsible artificial intelligence. Their recommendations include: creation of ethical principles, strengthening government oversight, defining corporate culpability, establishment of advisory boards at federal agencies, using third-party audits to reduce biases inherent in algorithms, tightening personal privacy requirements, using insurance to mitigate exposure to AI risks, broadening decision-making about AI uses and procedures, penalizing malicious uses of new technologies, and taking pro-active steps to address how artificial intelligence affects the workforce….(More)”.

Turning Point: Policymaking in the Era of Artificial Intelligence

Essay by Massimo Russo and Tian Feng: “With COVID-19, the press has been leaning on IoT data as leading indicators in a time of rapid change. The Wall Street Journal and New York Times have leveraged location data from companies like TomTom, INRIX, and Cuebiq to predict economic slowdown and lockdown effectiveness.¹ Increasingly we’re seeing use cases like these, of existing data being used for new purposes and to drive new insights.² Even before the crisis, IoT data was revealing surprising insights when used in novel ways. In 2018, fitness app Strava’s exercise “heatmap” shockingly revealed locations, internal maps, and patrol routes of US military bases abroad.³

The idea of alternative data is also trending in the financial sector. Defined in finance as data from non-traditional sources such as satellites and sensors, alternative data has grown from a niche tool used by select hedge funds to an investment input for large institutional investors.⁴ The sector is forecast to grow seven-fold from 2016 to 2020, with spending nearing $2 billion.⁵ And it’s easy to see why: alternative data linked to IoT sources gives investors a real-time, scalable view into how businesses and markets are performing.

This phenomenon of repurposing IoT data collected for one purpose toward another will extend beyond crisis or financial applications, and it is the focus of this article. For the purpose of our discussion, we’ll define intended data uses as those that deliver the value directly associated with the IoT application. Alternative data uses, on the other hand, are those linked to insights and applications outside the intent of the initial IoT application.⁶ Alternative data use matters because it represents incremental value beyond the original application.

Why should we think about this today? Increasingly CTOs are pursuing IoT projects with a fixed application in mind. Whereas early in IoT maturity, companies were eager to pilot the technology, now the focus has rightly shifted to IoT use cases with tangible ROI. In this environment, how should companies think about external data sharing when potential use cases are distant, unknown, or not yet existent? How can companies balance the abstract value of future use cases with the tangible risk of data misuse?…(More)”.

Surprising Alternative Uses of IoT Data

Report by the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence: “The aim of this report is to offer a broad roadmap for work on the ethical and societal implications of algorithms, data, and AI (ADA) in the coming years. It is aimed at those involved in planning, funding, and pursuing research and policy work related to these technologies. We use the term ‘ADA-based technologies’ to capture a broad range of ethically and societally relevant technologies based on algorithms, data, and AI, recognising that these three concepts are not totally separable from one another and will often overlap.

A shared set of key concepts and concerns is emerging, with widespread agreement on some of the core issues (such as bias) and values (such as fairness) that an ethics of algorithms, data, and AI should focus on. Over the last two years, these have begun to be codified in various codes and sets of ‘principles’. Agreeing on these issues, values and high-level principles is an important step for ensuring that ADA-based technologies are developed and used for the benefit of society.

However, we see three main gaps in this existing work: (i) a lack of clarity or consensus around the meaning of central ethical concepts and how they apply in specific situations; (ii) insufficient attention given to tensions between ideals and values; (iii) insufficient evidence on both (a) key technological capabilities and impacts, and (b) the perspectives of different publics….(More)”.

Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research

Report by Oliver Lough and Kerrie Holloway: “Effective communication and community engagement (CCE) is a critical component of the response to Covid-19 in humanitarian settings. CCE has a vital role to play in supporting affected people to make informed decisions, manage risk, and highlight their evolving needs and priorities.

Awareness of CCE’s centrality to the Covid-19 pandemic is already leading to a surge in funding and interest in humanitarian settings. However, careful thought is required on how to address the new challenges it poses, including reduced access to affected populations (particularly marginalised groups) and more complex coordination environments.

Collective approaches to CCE can add value in the Covid-19 response by ensuring the right actors are working in the right configuration to deliver the best results, reducing duplication while increasing effectiveness. But, to date, attempts at collective CCE have experienced a number of challenges: CCE is yet to be well-integrated into both humanitarian responses and emergency preparedness, and it is not always easy to determine what configuration of approach is the right ‘fit’ for a given crisis.

To strengthen collective approaches to CCE, this briefing note recommends that they must:

  • have well-defined objectives, a clear relationship to the rest of the response and strong links to key decision-making processes;
  • be well-resourced, supported by dedicated staff and funded in ways that support collective action;
  • be inclusive of a wide range of actors, make space for locally-driven, bottom-up approaches and foster a sense of common ownership to ensure buy-in;
  • ensure that affected populations have multiple channels for two-way dialogue that include the most marginalised….(More)”.

Covid-19: a watershed moment for collective approaches to community engagement?

Book by Stuart Ritchie: “So much relies on science. But what if science itself can’t be relied on?

Medicine, education, psychology, health, parenting – wherever it really matters, we look to science for guidance. Science Fictions reveals the disturbing flaws that undermine our understanding of all of these fields and more.

While the scientific method will always be our best and only way of knowing about the world, in reality the current system of funding and publishing science not only fails to safeguard against scientists’ inescapable biases and foibles, it actively encourages them. Many widely accepted and highly influential theories and claims – about ‘priming’ and ‘growth mindset’, sleep and nutrition, genes and the microbiome, as well as a host of drugs, allergies and therapies – turn out to be based on unreliable, exaggerated and even fraudulent papers. We can trace their influence in everything from austerity economics to the anti-vaccination movement, and occasionally count the cost of them in human lives….(More)”.

Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science

Paper by UNDP: “…This paper seeks to go beyond mere analysis of the spectrum of problems and risks we face, identifying a portfolio of possibilities (POPs) and articulating a new framework for governance and government. The purpose of these POPs is not to define the future but to challenge, to innovate, to expand the range of politically acceptable policies, and to establish a foundation for statecraft in an age of risk and uncertainty.

As its name suggests, we recognise that A Way Forward is and must be one of many pathways to explore the future of governance. It is the beginning of a journey, one on which you are invited to join us to help evolve the provocations into new paradigms and policy options that chart an alternative pathway to governance and statecraft.

A Way Forward is a petition for seeding new transnational alliances based on shared interests and vulnerability. We believe the future will be built across a new constellation of governmental alliances, where innovation in statecraft and governance is achieved collaboratively. Our key objective is to establish a platform to host these transnational discussions, and move us towards the new capabilities that are necessary for statecraft in the age of risk and uncertainty….(More)”.

A Way Forward: Governing in an Age of Emergence
