Beyond pilots: sustainable implementation of AI in public services


Report by AI Watch: “Artificial Intelligence (AI) is a peculiar case of General Purpose Technology that differs from other examples in history because it embeds specific uncertainties or ambiguous character that may lead to a number of risks when used to support transformative solutions in the public sector. AI has extremely powerful and, in many cases, disruptive effects on the internal management, decision-making and service provision processes of public administration….

This document first introduces the concept of AI appropriation in government, seen as a sequence of two logically distinct phases, respectively named adoption and implementation of related technologies in public services and processes. Then, it analyses the situation of AI governance in the US and China and contrasts it with an emerging, truly European model, rooted in a systemic vision and with an emphasis on the revitalised role of the member states in the EU integration process. Next, it points out some critical challenges to AI implementation in the EU public sector, including: the generation of a critical mass of public investments, the availability of widely shared and suitable datasets, the improvement of AI literacy and skills among the involved staff, and the threats associated with the legitimacy of decisions taken by AI algorithms alone. Finally, it draws a set of common actions for EU decision-makers willing to undertake the systemic approach to AI governance through a more advanced equilibrium between AI promotion and regulation.

The three main recommendations of this work include a more robust integration of AI with data policies, facing the issue of so-called “explainability of AI” (XAI), and broadening the current perspectives of both Pre-Commercial Procurement (PCP) and Public Procurement of Innovation (PPI) at the service of smart AI purchasing by the EU public administration. These recommendations will represent the baseline for a generic implementation roadmap for enhancing the use and impact of AI in the European public sector….(More)”.

End State: 9 Ways Society is Broken – and how we can fix it


Book by James Plunkett: “As the shockwaves of Covid-19 continue to spread, and as the smoke clears from a year of anger and unrest, many people feel forlorn about the future.

In End State, James Plunkett argues that this can be a moment not of despair, but of historic opportunity – a chance to rethink, renew, and reform some of the most fundamental ways we organise society. In much the same way as societies emerged stronger from crises in the past – building the state as we know it today – we too can build a happier future.

James Plunkett has spent his career thinking laterally about the complicated relationships between individuals and the state. First as an advisor to Gordon Brown, then a leading economic researcher and writer, and then in the charity sector, helping people struggling at the front-line of economic change. James combines a deep understanding of social issues with an appreciation of how change is playing out not in the ivory tower, but in the reality of people’s lives.

Now, in his first book, he sets out an optimistic vision, exploring nine ways in which our social settlement can be upgraded to harness the power of the digital age. Covering a dizzying sweep of geography and history, from London’s 18th Century sewage systems to the uneasy inequality of Silicon Valley, it’s a thrilling and iconoclastic account of how society can not only survive, but thrive, in the digital age.

End State provides a much-needed map to help us navigate our way over the curious terrain of the twenty-first century…(More)”.

Open data in digital strategies against COVID-19: the case of Belgium


Paper by Robert Viseur: “COVID-19 has highlighted the importance of digital technology in the fight against the pandemic (border controls, automated tracing, creation of databases…). In this research, we analyze the Belgian response in terms of open data. First, we examine the open data publication strategy in Belgium (a federal state with a sometimes complex functioning, especially in health). Second, we conduct a case study (anatomy of the pandemic in Belgium) to better understand the strengths and weaknesses of the main COVID-19 open data repository. Third, we analyze the obstacles to open data publication. Finally, we discuss the Belgian COVID-19 open data strategy in terms of data availability, data relevance and knowledge management. In particular, we show how difficult it is to optimize the latter in order to make the best use of governmental, private and academic open data in a way that has a positive impact on public health policy….(More)”.

The fight against disinformation and the right to freedom of expression


Report of the European Union: This study, commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the LIBE Committee, aims at finding the balance between regulatory measures to tackle disinformation and the protection of freedom of expression. It explores the European legal framework and analyses the roles of all stakeholders in the information landscape. The study offers recommendations to reform the attention-based, data-driven information landscape and regulate platforms’ rights and duties relating to content moderation…(More)”.

EU-US Trade and Technology Council: Commission launches consultation platform for stakeholders' involvement to shape transatlantic cooperation


Press Release: “Today, the Commission launched an online consultation platform on the EU-US Trade and Technology Council (TTC), allowing stakeholders to share their views and provide common proposals on the work ahead.

Following their first meeting in Pittsburgh last month, representatives of the European Union and the United States agreed on the importance of and commitment to consulting closely with diverse stakeholders on both sides of the Atlantic on their coordinated approaches to key global technology, economic, and trade issues. It is in this context that the Commission has set up a one-stop-shop on its online “Futurium” platform, to collect input from all interested parties relating to the TTC.

Businesses, think tanks, labour, non-profit and environmental organisations, academics, and other parties that form civil society at large are invited to contribute, as essential actors in successful EU-US cooperation. The platform is open to everyone after a simple registration. It allows interested parties to have their voice heard in the work of the ten specific TTC Working Groups. Via this website, they can not only feed in their views, but also receive important information and updates on the progress of the different working groups…(More)”.

Building the Behavior Change Toolkit: Designing and Testing a Nudge and a Boost


Blog by Henrico van Roekel, Joanne Reinhard, and Stephan Grimmelikhuijsen: “Changing behavior is challenging, so behavioral scientists and designers better have a large toolkit. Nudges—subtle changes to the choice environment that don’t remove options or offer a financial incentive—are perhaps the most widely used tool. But they’re not the only tool.

More recently, researchers have advocated a different type of behavioral intervention: boosting. In contrast to nudges, which aim to change behavior through changing the environment, boosts aim to empower individuals to better exert their own agency.

Underpinning each approach are different perspectives on how humans deal with bounded rationality—the idea that we don’t always behave in a way that aligns with our intentions because our decision-making is subject to biases and flaws.

A nudge approach generally assumes that bounded rationality is a constant, a fact of life. Therefore, to change behavior we do best to change the decision environment (the so-called choice architecture) to gently guide people in the desired direction. Boosting holds that bounded rationality is malleable and that people can learn to overcome their cognitive pitfalls. Therefore, to change behavior we must focus on the decision maker and increase their agency.

In practice, a nudge and a boost can look quite similar, as we describe below. But their theoretical distinctions are important and useful for behavioral scientists and designers working on behavior change interventions, as each approach has pros and cons. For instance, one criticism of nudging is the paternalism part of Thaler and Sunstein’s “libertarian paternalism,” as some worry nudges remove autonomy of decision makers (though the extent to which nudges are paternalistic, and the extent to which this is solvable, are debated). Additionally, if the goal of an intervention isn’t just to change behavior but to change the cognitive process of the individual, then nudges aren’t likely to be the best tool. Boosts, in contrast, require some motivation and teachability on the part of the boostee, so there may well be contexts unfit for boosting interventions where nudges come in handy….(More)”.

Beyond the individual: governing AI’s societal harm


Paper by Nathalie A. Smuha: “In this paper, I distinguish three types of harm that can arise in the context of artificial intelligence (AI): individual harm, collective harm and societal harm. Societal harm is often overlooked, yet not reducible to the two former types of harm. Moreover, mechanisms to tackle individual and collective harm raised by AI are not always suitable to counter societal harm. As a result, policymakers’ gap analysis of the current legal framework for AI not only risks being incomplete, but proposals for new legislation to bridge these gaps may also inadequately protect societal interests that are adversely impacted by AI. By conceptualising AI’s societal harm, I argue that a shift in perspective is needed beyond the individual, towards a regulatory approach of AI that addresses its effects on society at large. Drawing on a legal domain specifically aimed at protecting a societal interest—environmental law—I identify three ‘societal’ mechanisms that EU policymakers should consider in the context of AI. These concern (1) public oversight mechanisms to increase accountability, including mandatory impact assessments with the opportunity to provide societal feedback; (2) public monitoring mechanisms to ensure independent information gathering and dissemination about AI’s societal impact; and (3) the introduction of procedural rights with a societal dimension, including a right to access to information, access to justice, and participation in public decision-making on AI, regardless of the demonstration of individual harm. Finally, I consider to what extent the European Commission’s new proposal for an AI regulation takes these mechanisms into consideration, before offering concluding remarks….(More)”.

Citizen Science Project Builder 2.0


About by Citizen Science Center Zurich: “The Citizen Science Project Builder allows the implementation of Citizen Science projects, specifically in the area of data analysis. In such projects volunteers (“citizens”) collaborate with researchers in different kinds of scientific endeavors, from labeling images of snakes to transcribing handwritten Swiss German dialect, or classifying insects and plants. The Project Builder facilitates the implementation of such projects, supporting the collaborative analysis of large sets of digital data, including images and texts (e.g. satellite pictures, social media posts), as well as videos, audio recordings, and scanned documents.

What makes the tool so special?

The Citizen Science Project Builder features a web interface that requires limited technical knowledge, and ideally little or no coding skills. It is a simple modular “step-by-step” system where a project can be created in just a few clicks. Once the project is set up, many people can easily be involved and start contributing to the analysis of data as well as providing feedback that will help you to improve your project!

What is new?

The new release of the Citizen Science Project Builder allows the building of full-fledged questionnaires for media analysis (including conditions and multiple formats for questions). A brand new functionality allows the geolocation of content on OpenStreetMap (e.g. marking the location of the content of an image) and also the delimitation of an area of interest (e.g. delimiting green areas). The interface still includes an “expert path” for developers, so if you can code (Vue.js), the sky is the limit!…(More)”
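
To make these features a little more concrete, here is a purely illustrative Python sketch of how a media-analysis task with conditional questions, an OpenStreetMap geolocation step, and an area-of-interest step might be described. The field names and structure are assumptions for illustration only and do not reflect the Project Builder's actual configuration format or API.

```python
# Hypothetical sketch only: this dictionary layout is an assumption for
# illustration and is NOT the Citizen Science Project Builder's real schema.

project = {
    "title": "Urban green spaces",
    "media": {"type": "image", "source": "satellite_tiles/"},
    "questions": [
        {
            "id": "q1",
            "type": "single_choice",
            "text": "Does the image contain a green area?",
            "options": ["yes", "no"],
        },
        {
            "id": "q2",
            "type": "geolocation",   # mark a point on an OpenStreetMap base layer
            "text": "Mark the centre of the green area.",
            "condition": {"question": "q1", "equals": "yes"},
        },
        {
            "id": "q3",
            "type": "polygon",       # delimit an area of interest
            "text": "Draw the boundary of the green area.",
            "condition": {"question": "q1", "equals": "yes"},
        },
    ],
}


def next_question(answers):
    """Return the first unanswered question whose condition is satisfied."""
    for q in project["questions"]:
        cond = q.get("condition")
        if cond and answers.get(cond["question"]) != cond["equals"]:
            continue  # condition not met, so this question is skipped
        if q["id"] not in answers:
            return q
    return None


if __name__ == "__main__":
    # A volunteer who answered "yes" to q1 would next be asked q2.
    print(next_question({"q1": "yes"})["id"])  # -> "q2"
```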

Consumers Are Becoming Wise to Your Nudge


Article by Simon Shaw: “The broader question, one essential to both academics and practitioners, is how a world saturated with behavioral interventions might no longer resemble the one in which those interventions were first studied. Are we aiming at a moving target?

This was the basis for a research project we completed in February 2019 examining reactions of the British public to a range of behavioral interventions. We took a nationally representative sample of 2,102 British adults, and undertook an experimental evaluation of some of marketers’ most commonly used tactics.

We started by asking participants to consider a hypothetical scenario: using a hotel booking website to find a room to stay in the following week. We then showed a series of nine real-world scarcity and social proof claims made by an unnamed hotel booking website.

Two thirds of the British public (65 percent) interpreted examples of scarcity and social proof claims used by hotel booking websites as sales pressure. Half said they were likely to distrust the company as a result of seeing them (49 percent). Just one in six (16 percent) said they believed the claims. 

The results surprised us. We had expected there to be cynicism among a subgroup—perhaps people who booked hotels regularly, for example. The verbatim commentary from participants showed that people see scarcity and social proof claims frequently online, most commonly in the travel, retail, and fashion sectors. They questioned the truth of these ads, but were resigned to their use:

“It’s what I’ve seen often on hotel websites—it’s what they do to tempt you.”

“Have seen many websites do this kind of thing so don’t really feel differently when I do see it.”

In a follow up question, a third (34 percent) expressed a negative emotional reaction to these messages, choosing words like contempt and disgust from a precoded list. Crucially, this was because they ascribed bad intentions to the website. The messages were, in their view, designed to induce anxiety:

 “… almost certainly fake to try and panic you into buying without thinking.”

“I think this type of thing is to pressure you into booking for fear of losing out and not necessarily true.”

For these people, not only are these behavioral interventions not working but they’re having the reverse effect. We hypothesize psychological reactance is at play: people kick back when they feel they are being coerced….(More)”.

How do we ensure anonymisation is effective?


Chapter by the Information Commissioner’s Office (UK): “Effective anonymisation reduces identifiability risk to a sufficiently remote level.
• Identifiability is about whether someone is “identified or identifiable”. This doesn’t just concern someone’s name, but other information and factors that can distinguish them from someone else.
• Identifiability exists on a spectrum, where the status of information can change depending on the circumstances of its processing.
• When assessing whether someone is identifiable, you need to take account of the “means reasonably likely to be used”. You should base this on objective factors such as the costs and time required to identify, the available technologies, and the state of technological development over time.
• However, you do not need to take into account any purely hypothetical or theoretical chance of identifiability. The key is what is reasonably likely relative to the circumstances, not what is conceivably likely in absolute terms.
• You also need to consider both the information itself as well as the environment in which it is processed. This will be impacted by the type of data release (to the public, to a defined group, etc) and the status of the information in the other party’s hands.
• When considering releasing anonymous information to the world at large, you may have to implement more robust techniques to achieve effective anonymisation than when releasing to particular groups or individual organisations.
• There are likely to be many borderline cases where you need to use careful judgement based on the specific circumstances of the case.
• Applying a “motivated intruder” test is a good starting point to consider identifiability risk.
• You should review your risk assessments and decision-making processes at appropriate intervals. The appropriate time for, and frequency of, any reviews depends on the circumstances…(More)”.
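
As a rough, non-authoritative illustration of the kind of check a “motivated intruder” assessment might start from, the sketch below computes k, the size of the smallest group of records sharing the same quasi-identifier values in a tabular release (k = 1 means at least one record can be singled out). This is only one possible proxy for identifiability risk, not the ICO's prescribed method, and the column names and threshold are assumptions.

```python
# Illustrative sketch only: k-anonymity is one rough proxy for identifiability
# risk, not the ICO's prescribed test. Column names and threshold are hypothetical.
import pandas as pd


def smallest_group_size(df, quasi_identifiers):
    """Return k, the size of the smallest group of records that share the same
    combination of quasi-identifier values (k = 1 means at least one record is
    unique on those attributes and therefore easier to single out)."""
    return int(df.groupby(quasi_identifiers).size().min())


if __name__ == "__main__":
    release = pd.DataFrame({
        "age_band": ["30-39", "30-39", "40-49", "40-49", "40-49"],
        "postcode_district": ["SW1", "SW1", "SW1", "EC1", "EC1"],
        "diagnosis": ["A", "B", "C", "A", "B"],
    })
    k = smallest_group_size(release, ["age_band", "postcode_district"])
    print(f"smallest equivalence class: k={k}")
    if k < 5:  # the acceptable threshold is context-dependent, not a fixed rule
        print("some records may be easy to single out; consider further generalisation")
```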