Here Be Dragons – Maintaining Trust in the Technologized Public Sector


Paper by Balázs Bodó and Heleen Janssen: “Emerging technologies, such as AI systems and distributed ledgers, but also private e-commerce and telecommunication platforms, have permeated every aspect of our social, economic, and political relations. Various bodies of the state, from education via law enforcement to healthcare, also increasingly rely on technical components to provide cheap, efficient public services and supposedly fair, transparent, disinterested, accountable public administration. Most of these technical components are provided by private parties who design, develop, train, and maintain these components of public infrastructures.
The rapid, often unplanned, and uncontrolled technologization of public services (as happened, for example, in the rapid adoption of distance learning and teleconferencing systems during the COVID lockdowns) inseparably links the perceived quality, trustworthiness, and effectiveness of public services and of the public bodies which provision them to the successes and failures of their private, technological components: if the government’s welfare fraud AI system fails, it is confidence in the government that is ultimately hit.


In this contribution we explore how the use of potentially untrustworthy private technological systems in the public sector may affect trust in government. We argue that citizens’ and businesses’ trust in government is a valuable asset which has come under assault from many directions. The increasing reliance on private technical components in government is in part a response to protect this trust, but in many cases it opens up new threats and vulnerabilities, because the trustworthiness of many of these private technical systems is, at best, questionable, particularly where they are deployed in public-sector trust contexts. We consider a number of policy options to protect trust in government even if some of its technological components are fundamentally untrustworthy….(More)”.

Ethics and governance of artificial intelligence for health


The WHO guidance on Ethics & Governance of Artificial Intelligence for Health “…is the product of eighteen months of deliberation amongst leading experts in ethics, digital technology, law, and human rights, as well as experts from Ministries of Health. While new technologies that use artificial intelligence hold great promise to improve diagnosis, treatment, health research and drug development, and to support governments in carrying out public health functions, including surveillance and outbreak response, such technologies, according to the report, must put ethics and human rights at the heart of their design, deployment, and use.

The report identifies the ethical challenges and risks of the use of artificial intelligence in health, and sets out six consensus principles to ensure AI works to the public benefit of all countries. It also contains a set of recommendations that can ensure the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders – in the public and private sector – accountable and responsive to the healthcare workers who will rely on these technologies and to the communities and individuals whose health will be affected by their use…(More)”

Pooling society’s collective intelligence helped fight COVID – it must help fight future crises too


Aleks Berditchevskaia and Kathy Peach at The Conversation: “A Global Pandemic Radar is to be created to detect new COVID variants and other emerging diseases. Led by the WHO, the project aims to build an international network of surveillance hubs, set up to share data that’ll help us monitor vaccine resistance, track diseases and identify new ones as they emerge.

This is undeniably a good thing. Perhaps more than any event in recent memory, the COVID pandemic has brought home the importance of pooling society’s collective intelligence and finding new ways to share that combined knowledge as quickly as possible.

At its simplest, collective intelligence is the enhanced capacity that’s created when diverse groups of people work together, often with the help of technology, to mobilise more information, ideas and knowledge to solve a problem. Digital technologies have transformed what can be achieved through collective intelligence in recent years – connecting more of us, augmenting human intelligence with machine intelligence, and helping us to generate new insights from novel sources of data.

So what have we learned over the last 18 months of collective intelligence pooling that can inform the Global Pandemic Radar? Building from the COVID crisis, what lessons will help us perfect disease surveillance and respond better to future crises?…(More)”

National strategies on Artificial Intelligence: A European perspective


Report by the European Commission’s Joint Research Centre (JRC) and the OECD’s Science, Technology and Innovation Directorate: “Artificial intelligence (AI) is transforming the world in many aspects. It is essential for Europe to consider how to make the most of the opportunities from this transformation and to address its challenges. In 2018 the European Commission adopted the Coordinated Plan on Artificial Intelligence, which was developed together with the Member States to maximise the impact of investments at European Union (EU) and national levels, and to encourage synergies and cooperation across the EU.

One of the key actions towards these aims was an encouragement for the Member States to develop their national AI strategies. The review of national strategies is one of the tasks of AI Watch, launched by the European Commission to support the implementation of the Coordinated Plan on Artificial Intelligence.

Building on the 2020 AI Watch review of national strategies, this report presents an updated review of national AI strategies from the EU Member States, Norway and Switzerland. By June 2021, 20 Member States and Norway had published national AI strategies, while 7 Member States were in the final drafting phase. Since the 2020 release of the AI Watch report, additional Member States – Bulgaria, Hungary, Poland, Slovenia, and Spain – have published strategies, while Cyprus, Finland and Germany have revised their initial strategies.

This report provides an overview of national AI policies according to the following policy areas: Human capital, From the lab to the market, Networking, Regulation, and Infrastructure. These policy areas are consistent with the actions proposed in the Coordinated Plan on Artificial Intelligence and with the policy recommendations to governments contained in the OECD Recommendation on AI. The report also includes a section on AI policies to address societal challenges of the COVID-19 pandemic and climate change….(More)”.

Governance mechanisms for sharing of health data: An approach towards selecting attributes for complex discrete choice experiment studies


Paper by Jennifer Viberg Johansson: “Discrete Choice Experiment (DCE) is a well-established technique to elicit individual preferences, but it has rarely been used to elicit governance preferences for health data sharing.

The aim of this article was to describe the process of identifying attributes for a DCE study aiming to elicit preferences of citizens in Sweden, Iceland and the UK for governance mechanisms for digitally sharing different kinds of health data in different contexts.

A three-step approach was utilised to inform the attribute and level selection: 1) Attribute identification, 2) Attribute development and 3) Attribute refinement. First, we developed an initial set of potential attributes from a literature review and a workshop with experts. To further develop attributes, focus group discussions with citizens (n = 13), ranking exercises among focus group participants (n = 48) and expert interviews (n = 18) were performed. Thereafter, attributes were refined using group discussion (n = 3) with experts as well as cognitive interviews with citizens (n = 11).

The results led to the selection of seven attributes for further development: 1) level of identification, 2) the purpose of data use, 3) type of information, 4) consent, 5) new data user, 6) collector, and 7) the oversight of data sharing. Differences were found between countries regarding the ranking of the top three attributes. The process also outlined participants’ conceptualisation of the chosen attributes and what we learned for our attribute development phase.

This study demonstrates a process for the selection of attributes for a (multi-country) DCE involving three stages: Attribute identification, Attribute development and Attribute refinement. This study can contribute to improving the ethical aspects and good practice of this phase in DCE studies. Specifically, it can contribute to the development of governance mechanisms in the digital world, where people’s health data are shared for multiple purposes….(More)”.
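To make concrete what such attributes feed into, the sketch below shows, in Python, how selected attributes could be combined into simple two-alternative choice tasks. It is a hypothetical illustration only: the attribute names come from the excerpt above, but the levels, the random pairing, and the number of tasks are invented assumptions; an actual study would use a statistically efficient experimental design and model the responses with, for example, a conditional logit.

```python
# Hypothetical illustration only: attribute names are taken from the excerpt above,
# but the levels, the random pairing of profiles, and the number of tasks are
# invented assumptions, not the study's actual experimental design.
import itertools
import random

attributes = {
    "level of identification": ["anonymised", "pseudonymised", "identifiable"],
    "purpose of data use": ["public research", "commercial research"],
    "type of information": ["lifestyle data", "genetic data"],
    "consent": ["broad consent", "study-specific consent"],
    "new data user": ["university", "private company"],
    "collector": ["hospital", "national registry"],
    "oversight of data sharing": ["independent review board", "no dedicated oversight"],
}

# Full factorial of hypothetical profiles; a real DCE would use a fractional or
# statistically efficient design to keep the number of choice tasks manageable.
profiles = [dict(zip(attributes, combo)) for combo in itertools.product(*attributes.values())]

def make_choice_tasks(profiles, n_tasks=8, seed=1):
    """Randomly pair two distinct profiles into each two-alternative choice task."""
    rng = random.Random(seed)
    return [tuple(rng.sample(profiles, 2)) for _ in range(n_tasks)]

for i, (alt_a, alt_b) in enumerate(make_choice_tasks(profiles), start=1):
    print(f"Choice task {i}:\n  Option A: {alt_a}\n  Option B: {alt_b}")
```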

Scientific publishing’s new weapon for the next crisis: the rapid correction


Gideon Meyerowitz-Katz and James Heathers at STAT News: “If evidence of errors does emerge, the process for correcting or withdrawing a paper tends to be alarmingly long. Late last year, for example, David Cox, the IBM director of the MIT-IBM Watson AI Lab, discovered that his name was included as an author on two papers he had never written. After he wrote to the journals involved, it took almost three months for them to remove his name and the papers themselves. In cases of large-scale research fraud, correction times can be measured in years.

Imagine now that the issue with a manuscript is not a simple matter of retracting a fraudulent paper, but a more complex methodological or statistical problem that undercuts the study’s conclusions. In this context, requests for clarification — or retraction — can languish for years. The process can outlast the tenure of the responsible editor, resetting the clock on the entire ordeal, or the journal itself can cease publication, leaving an erroneous article in the public domain without oversight, forever….

This situation must change, and change quickly. Any crisis that requires scientific information in a hurry will produce hurried science, and hurried science often includes miscalculated analyses, poor experimental design, inappropriate statistical models, impossible numbers, or even fraud. Having the agility to produce and publicize work like this without having the ability to correct it just as quickly is a curiously persistent oversight in the global scientific enterprise. If corrections occur only long after the research has already been used to treat people across the world, what use are they at all?

There are some small steps in the right direction. The open-source website PubPeer aggregates formal scientific criticism, and when shoddy research makes it into the literature, hordes of critics may leave comments and questions on the site within hours. Twitter, likewise, is often abuzz with spectacular scientific critiques almost as soon as studies go up online.

But these volunteer efforts are not enough. Even when errors are glaring and obvious, the median response from academic journals is to deal with them grudgingly or not at all. Academia in general takes a faintly disapproving tone toward crowd-sourced error correction, ignoring the fact that it is often the only mechanism that exists to do this vital work.

Scientific publishing needs to stop treating error-checking as a slightly inconvenient side note and make it a core part of academic research. In a perfect world, entire departmental sections would be dedicated to making sure that published research is correct and reliable. But even a few positions would be a fine start. Young researchers could be given kudos not just for every citation in their Google Scholar profile but also for every post-publication review they undertake….(More)”

Politics, Public Goods, and Corporate Nudging in the HTTP/2 Standardization Process


Paper by Sylvia E. Peacock: “The goal is to map out some policy problems attached to using a club good approach instead of a public good approach to manage our internet protocols, specifically the HTTP (Hypertext Transfer Protocol). Behavioral and information economics theories are used to evaluate the standardization process of our current-generation HTTP/2 (2.0). The HTTP update under scrutiny is a recently released HTTP/2 version based on Google’s SPDY, which introduces several company-specific and best-practice applications, side by side. A content analysis of email discussions extracted from a publicly accessible IETF (Internet Engineering Task Force) email server shows how the club good approach of the working group leads to an underperformance in the outcomes of the standardization process. An important conclusion is that in some areas of the IETF, standardization activities may need to include public consultations, crowdsourced volunteers, or an official call for public participation to increase public oversight and more democratically manage our intangible public goods….(More)”.
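As a purely illustrative aside (not the paper's own pipeline, whose coding scheme and data extraction are not described in this excerpt), the sketch below shows the simplest quantitative flavour of content analysis over locally saved mailing-list messages: counting how often a few hypothetical governance-related terms appear. The folder name and term list are invented assumptions.

```python
# Purely illustrative sketch, not the paper's own pipeline: the coding scheme and
# data extraction used in the study are not described in this excerpt. This simply
# counts how often a few hypothetical governance-related terms appear in mailing-list
# messages saved locally as plain-text files in an ./emails/ folder.
import pathlib
import re
from collections import Counter

TERMS = ["consensus", "interoperab", "prioriti", "deployment", "security"]  # illustrative stems

def term_counts(folder: str = "emails") -> Counter:
    """Count keyword-stem occurrences across all .txt files in the given folder."""
    counts = Counter()
    for path in pathlib.Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        for stem in TERMS:
            counts[stem] += len(re.findall(re.escape(stem), text))
    return counts

if __name__ == "__main__":
    for stem, n in term_counts().most_common():
        print(f"{stem}: {n}")
```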

Examining the Intersection of Behavioral Science and Advocacy


Introduction to Special Collection of the Behavioral Scientist by Cintia Hinojosa and Evan Nesterak: “Over the past year, everyone’s lives have been touched by issues that intersect science and advocacy—the pandemic, climate change, police violence, voting, protests, the list goes on. 

These issues compel us, as a society and individuals, toward understanding. We collect new data, design experiments, test our theories. They also inspire us to examine our personal beliefs and values, our roles and responsibilities as individuals within society. 

Perhaps no one feels these forces more than social and behavioral scientists. As members of fields dedicated to the study of social and behavioral phenomena, they are in the unique position of understanding these issues from a scientific perspective, while also navigating their inevitable personal impact. This dynamic brings up questions about the role of scientists in a changing world. To what extent should they engage in advocacy or activism on social and political issues? Should they be impartial investigators, active advocates, something in between? 

It also raises other questions, like does taking a public stance on an issue affect scientific integrity? How should scientists interact with those setting policies? What happens when the lines between an evidence-based stance and a political position become blurred? What should scientists do when science itself becomes a partisan issue? 

To learn more about how social and behavioral scientists are navigating this terrain, we put out a call inviting them to share their ideas, observations, personal reflections, and the questions they’re grappling with. We gave them 100-250 words to share what was on their mind. Not easy for such a complex and consequential topic.

The responses, collected and curated below, revealed a number of themes, which we’ve organized into two parts….(More)”.

Sandwich Strategy


Article by the Accountability Research Center: “The “sandwich strategy” describes an interactive process in which reformers in government encourage citizen action from below, driving virtuous circles of mutual empowerment between pro-accountability actors in both state and society.

The sandwich strategy relies on mutually-reinforcing interaction between pro-reform actors in both state and society, not just initiatives from one or the other arena. The hypothesis is that when reformers in government tangibly reduce the risks/costs of collective action, that process can bolster state-society pro-reform coalitions that collaborate for change. While this process makes intuitive sense, it can follow diverse pathways and encounter many roadblocks. The dynamics, strengths and limitations of sandwich strategies have not been documented and analyzed systematically. The figure below shows a possible pathway of convergence and conflict between actors for and against change in both state and society….(More)”.

[Figure: the sandwich strategy, showing a possible pathway of convergence and conflict between actors for and against change in both state and society.]

Citizens ‘on mute’ in digital public service delivery


Blog by Sarah Giest at Data and Policy: “Various countries are digitalizing their welfare systems in the larger context of austerity considerations and fraud detection goals, but these changes are increasingly under scrutiny. In short, digitalization of the welfare system means that, with the help of mathematical models, data, and/or the combination of different administrative datasets, algorithms issue a decision on, for example, an application for social benefits (Dencik and Kaun 2020).

Several examples exist where such systems have led to unfair treatment of welfare recipients. In Europe, the Dutch SyRI system has been banned by a court due to human rights violations in the profiling of welfare recipients, and the UK has found errors in automated processes that led to financial hardship among citizens. In the United States and Canada, automated systems led to false underpayment or denial of benefits. A recent UN report (2019) even warns that countries are ‘stumbling zombie-like into a digital welfare dystopia’. Further, studies raise the alarm that this process of digitalization is carried out in a way that not only creates excessive information asymmetry between government and citizens, but also disadvantages certain groups more than others.

A closer look at the Dutch Childcare Allowance case highlights this. In this example, low-income parents were regarded as fraudsters by the Tax Authorities if they had incorrectly filled out any documents. An automated and algorithm-based procedure then also singled out dual-nationality families. The victims lost their allowance without having been given any reasons. Even worse, benefits already received were reclaimed. This led to individual hardship, where financial troubles and being categorized as a fraudster by government set off for citizens a chain of events ranging from unpaid healthcare insurance and the inability to visit a doctor to job loss, potential home loss, and mental health concerns (Volkskrant 2020)….(More)”.
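To illustrate the mechanism the excerpt describes (an automated rule turning linked administrative data into a benefits decision, with no reasons given to the citizen), here is a deliberately simplified toy sketch in Python. It is not the Dutch SyRI or Childcare Allowance system, whose internals are not described here; the features, weights, and threshold are invented assumptions, meant only to show how a scoring rule can convert small paperwork errors into a hard "fraud" label.

```python
# Purely illustrative toy example: NOT the Dutch SyRI or Childcare Allowance system,
# whose internals are not described in this excerpt. The features, weights, and
# threshold below are invented. The point is only to show how an automated scoring
# rule over linked administrative data can turn small paperwork errors into a hard
# "fraud" label, with no reasons given back to the citizen.
from dataclasses import dataclass

@dataclass
class BenefitApplication:
    income: float               # annual income taken from a tax register
    missing_documents: int      # forms flagged as incomplete or incorrectly filled out
    datasets_disagree: bool     # linked administrative datasets report conflicting values

def risk_score(app: BenefitApplication) -> float:
    """Hypothetical additive risk score over administrative signals."""
    score = 2.0 * app.missing_documents            # paperwork errors weigh heavily
    score += 1.5 if app.datasets_disagree else 0.0
    score += 1.0 if app.income < 20_000 else 0.0   # low income raises the score further
    return score

def decide(app: BenefitApplication, threshold: float = 3.0) -> str:
    """Binary outcome with no explanation attached: the asymmetry the blog describes."""
    return "flag as fraud" if risk_score(app) >= threshold else "approve"

# One incorrectly filled-out form plus a mismatch between datasets is already enough:
print(decide(BenefitApplication(income=18_000, missing_documents=1, datasets_disagree=True)))
```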