Suspicion Machines


Lighthouse Reports: “Governments all over the world are experimenting with predictive algorithms in ways that are largely invisible to the public. What limited reporting there has been on this topic has largely focused on predictive policing and risk assessments in criminal justice systems. But there is an area where even more far-reaching experiments are underway on vulnerable populations with almost no scrutiny.

Fraud detection systems, ranging from complex machine learning models to crude spreadsheets, are widely deployed across welfare states. The scores they generate have potentially life-changing consequences for millions of people. Until now, public authorities have typically resisted calls for transparency, either by claiming that disclosure would increase the risk of fraud or by citing the need to protect proprietary technology.

The sales pitch for these systems promises that they will recover millions of euros defrauded from the public purse. The caricature of the benefit cheat is a modern take on the classic trope of the undeserving poor, and much of the public debate in Europe — which has the most generous welfare states — is intensely politically charged.

The true extent of welfare fraud is routinely exaggerated by consulting firms, who are often also the algorithm vendors, talking it up to nearly 5 percent of benefits spending, while some national auditors’ offices estimate it at between 0.2 and 0.4 percent. Distinguishing between honest mistakes and deliberate fraud in complex public systems is messy and hard.

When opaque technologies are deployed in search of political scapegoats, the potential for harm to some of the poorest and most marginalised communities is significant.

Hundreds of thousands of people are being scored by these systems based on data mining operations where there has been scant public consultation. The consequences of being flagged by the “suspicion machine” can be drastic, with fraud controllers empowered to turn the lives of suspects inside out…(More)”.

The Expanding Use of Technology to Manage Migration


Report by Marti Flacks, Erol Yayboke, Lauren Burke, and Anastasia Strouboulis: “Seeking to manage growing flows of migrants, the United States and European Union have dramatically expanded their engagement with migration origin and transit countries. This increasingly includes supporting the deployment of sophisticated technology to understand, monitor, and influence the movement of people across borders, expanding the spheres of interest to include the movement of people long before they reach U.S. and European borders.

This report from the CSIS Human Rights Initiative and CSIS Project on Fragility and Mobility examines two case studies of migration—one from Central America toward the United States and one from West and North Africa toward Europe—to map the use and export of migration management technologies and the associated human rights risks. Authors Marti Flacks, Erol Yayboke, Lauren Burke, and Anastasia Strouboulis provide recommendations for origin, transit, and destination governments on how to incorporate human rights considerations into their decisionmaking on the use of technology to manage migration…(More)”.

Examining public views on decentralised health data sharing


Paper by Victoria Neumann et al: “In recent years, researchers have begun to explore the use of Distributed Ledger Technologies (DLT), also known as blockchain, in health data sharing contexts. However, there is a significant lack of research that examines public attitudes towards the use of this technology. In this paper, we begin to address this issue and present results from a series of focus groups which explored public views and concerns about engaging with new models of personal health data sharing in the UK. We found that participants were broadly in favour of a shift towards new decentralised models of data sharing. Retaining ‘proof’ of health information stored about patients and the capacity to provide permanent audit trails, enabled by immutable and transparent properties of DLT, were regarded as particularly valuable for our participants and prospective data custodians. Participants also identified other potential benefits such as supporting people to become more health data literate and enabling patients to make informed decisions about how their data was shared and with whom. However, participants also voiced concerns about the potential to further exacerbate existing health and digital inequalities. Participants were also apprehensive about the removal of intermediaries in the design of personal health informatics systems…(More)”.
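For readers unfamiliar with how DLT can underpin a “permanent audit trail”, the minimal sketch below shows the core idea of a hash-chained, append-only log: each entry commits to its predecessor, so any retroactive edit breaks the chain and is detectable. This is an illustrative toy under assumed names and fields, not the system discussed in the paper.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Toy append-only log: each entry includes the hash of the previous entry,
    so tampering with any earlier entry invalidates everything after it."""

    def __init__(self):
        self.entries = []

    def append(self, actor: str, action: str, record_id: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,          # e.g. a GP practice or research team (hypothetical)
            "action": action,        # e.g. "read" or "share"
            "record_id": record_id,  # a pointer to the health record, not the data itself
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash and link; returns False if any entry was altered."""
        for i, entry in enumerate(self.entries):
            expected_prev = self.entries[i - 1]["hash"] if i else "0" * 64
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != expected_prev or entry["hash"] != recomputed:
                return False
        return True

trail = AuditTrail()
trail.append("gp_practice_42", "read", "patient_record_17")
trail.append("research_team_a", "share", "patient_record_17")
print(trail.verify())  # True; altering any field above makes this False
```

In an actual DLT deployment the chain would be replicated and validated across multiple custodians rather than held by a single party, which is what gives the audit trail the ‘immutable’ character participants valued.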

Toward a 21st Century National Data Infrastructure: Mobilizing Information for the Common Good


Report by National Academies of Sciences, Engineering, and Medicine: “Historically, the U.S. national data infrastructure has relied on the operations of the federal statistical system and the data assets that it holds. Throughout the 20th century, federal statistical agencies aggregated survey responses of households and businesses to produce information about the nation and diverse subpopulations. The statistics created from such surveys provide most of what people know about the well-being of society, including health, education, employment, safety, housing, and food security. The surveys also contribute to an infrastructure for empirical social- and economic-sciences research. Research using survey-response data, with strict privacy protections, led to important discoveries about the causes and consequences of major societal challenges and also informed policymakers. Like other infrastructure, these essential statistics are easy to take for granted. Only when they are threatened do people recognize the need to protect them…(More)”.

The Keys to Democracy: Sortition as a New Model for Citizen Power


Book by Maurice Pope: “Sortition — also known as random selection — puts ordinary people in control of decision-making in government. This may seem novel, but it is how the original Athenian democracy worked. In fact, what is new is our belief that electoral systems are democratic. It was self-evident to thinkers from Aristotle to the Renaissance that elections always resulted in oligarchies, or rule by elites.

In this distillation of a lifetime’s thinking about the history and principles of democracy, Maurice Pope presents a new model of governance that replaces elected politicians with assemblies selected by lot. The re-introduction of sortition, he believes, offers a way out of gridlock, apathy, alienation and polarisation by giving citizens back their voice.

Pope’s work — published posthumously — grew from his unique perspective as a widely travelled English classicist who also experienced the injustice of apartheid rule in South Africa. His great mind was as much at home with the history of philosophy as the mathematics of probability.

Governments and even the EU have tried out sortition in recent years; the UK, France and several other countries have attempted to tackle climate change through randomly selected citizens’ assemblies. The city of Paris and the German-speaking community of Belgium have set up permanent upper houses chosen by lot. Several hundred such experiments around the world are challenging the assumption that elections are the only or ideal route to credible, effective government.

Writing before these mostly advisory bodies took shape, Pope lays out a vision for a government entirely based on random selection and citizen deliberation. In arguing for this more radical goal, he draws on the glories of ancient Athens, centuries of use in Venice, the success of randomly selected juries and the philosophical advantages of randomness. Sortition-based democracy, he believed, is the only plausible way to achieve each element of Abraham Lincoln’s call for a democratic government “of the people, by the people, for the people”…(More)”.

Foresight is a messy methodology but a marvellous mindset


Blog by Berta Mizsei: “…From my first few forays into foresight, it seemed that it employed desk research and expert workshops, but refrained from the use of data and from testing the solidity of assumptions. This can make scenarios weak and anecdotal, something experts justify by stating that scenarios are meant to be a ‘first step to start a discussion’.

The deficiencies of foresight became more evident when I took part in the process – so much of what ends up in imagined narratives depends on whether an expert was chatty during a workshop, or on the background of the expert writing the scenario.

As a young researcher coming from a quantitative background, this felt alien and alarming.

However, as it turns out, my issue was not with foresight per se, but rather with a certain way of doing it, one that is insufficiently grounded in sound research methods. In short, I am disturbed by ‘bad’ foresight. Foresight’s newfound popularity means that demand for foresight experts outstrips supply, and questionable foresight methodology has become correspondingly more prevalent – something that was discussed during a dedicated session at this year’s Ideas Lab (CEPS’ flagship annual event).

One culprit is the Commission. Its foresight relies heavily on ‘backcasting’, a planning method that starts with a desirable future and works backwards to identify ways to achieve that outcome. One example is the 2022 Strategic Foresight Report ‘Twinning the green and digital transitions in the new geopolitical context’ that mapped out ways to get to the ideal future the Commission cabinet had imagined.

Is this useful? Undoubtedly.

However, it is also single-mindedly deterministic about the future of environmental policy, which is both notoriously complex and of critical importance to the current Commission. Similar hubris (or malpractice) is evident across various EU apparatuses – policymakers have a clear vision of what they want to happen and they invest in figuring out how to make that a reality without admitting how turbulent and unpredictable the future is. This is commendable and politically advantageous… but it is not foresight.

It misses one of foresight’s main virtues: forcing us to consider alternative futures…(More)”.

Why Does Open Data Get Underused? A Focus on the Role of (Open) Data Literacy


Paper by Gema Santos-Hermosa et al: “Open data has been conceptualised as a strategic form of public knowledge. The main claim, tightly connected with developments in open government and open science, is that access to open data (OD) might be a catalyser of social innovation and citizen empowerment. Nevertheless, the so-called (open) data divide, a problem of uneven OD usage and engagement, remains a concern.

In this chapter, we introduce the OD usage trends, focusing on the role played by (open) data literacy amongst both users and producers: citizens, professionals, and researchers. Indeed, we attempt to cover the problem of OD through a holistic approach including two areas of research and practice: open government data (OGD) and open research data (ORD). After uncovering several factors blocking OD consumption, we point out that more OD is being published (albeit with low usage), and we review the research on data literacy. While stakeholders’ intentions are driven by many motivations, the abilities that would put them in a position to make the most of OD might require further attention. In the end, we focus on several lifelong learning activities supporting open data literacy, uncovering the challenges ahead to unleash the power of OD in society…(More)”.

Americans Can’t Consent to Companies’ Use of Their Data


A Report from the Annenberg School for Communication: “Consent has always been a central part of Americans’ interactions with the commercial internet. Federal and state laws, as well as decisions from the Federal Trade Commission (FTC), require either implicit (“opt out”) or explicit (“opt in”) permission from individuals for companies to take and use data about them. Genuine opt out and opt in consent requires that people have knowledge about commercial data-extraction practices as well as a belief they can do something about them. As we approach the 30th anniversary of the commercial internet, the latest Annenberg national survey finds that Americans have neither. High percentages of Americans don’t know, admit they don’t know, and believe they can’t do anything about basic practices and policies around companies’ use of people’s data…
High levels of frustration, concern, and fear compound Americans’ confusion: 80% say they have little control over how marketers can learn about them online; 80% agree that what companies know about them from their online behaviors can harm them. These and related discoveries from our survey paint a picture of an unschooled and admittedly incapable society that rejects the internet industry’s insistence that people will accept tradeoffs for benefits and despairs of its inability to predictably control its digital life in the face of powerful corporate forces. At a time when individual consent lies at the core of key legal frameworks governing the collection and use of personal information, our findings describe an environment where genuine consent may not be possible…

The aim of this report is to chart the particulars of Americans’ lack of knowledge about the commercial use of their data and their “dark resignation” in connection to it. Our goal is also to raise questions and suggest solutions about public policies that allow companies to gather, analyze, trade, and otherwise benefit from information they extract from large populations of people who are uninformed about how that information will be used and deeply concerned about the consequences of its use. In short, we find that informed consent at scale is a myth, and we urge policymakers to act with that in mind…(More)”.

Innovation Power: Why Technology Will Define the Future of Geopolitics


Essay by Eric Schmidt: “When Russian forces marched on Kyiv in February 2022, few thought Ukraine could survive. Russia had more than twice as many soldiers as Ukraine. Its military budget was more than ten times as large. The U.S. intelligence community estimated that Kyiv would fall within one to two weeks at most.

Outgunned and outmanned, Ukraine turned to one area in which it held an advantage over the enemy: technology. Shortly after the invasion, the Ukrainian government uploaded all its critical data to the cloud, so that it could safeguard information and keep functioning even if Russian missiles turned its ministerial offices into rubble. The country’s Ministry of Digital Transformation, which Ukrainian President Volodymyr Zelensky had established just two years earlier, repurposed its e-government mobile app, Diia, for open-source intelligence collection, so that citizens could upload photos and videos of enemy military units. With their communications infrastructure in jeopardy, the Ukrainians turned to Starlink satellites and ground stations provided by SpaceX to stay connected. When Russia sent Iranian-made drones across the border, Ukraine acquired its own drones specially designed to intercept their attacks—while its military learned how to use unfamiliar weapons supplied by Western allies. In the cat-and-mouse game of innovation, Ukraine simply proved nimbler. And so what Russia had imagined would be a quick and easy invasion has turned out to be anything but.

Ukraine’s success can be credited in part to the resolve of the Ukrainian people, the weakness of the Russian military, and the strength of Western support. But it also owes to the defining new force of international politics: innovation power. Innovation power is the ability to invent, adopt, and adapt new technologies. It contributes to both hard and soft power. High-tech weapons systems increase military might, new platforms and the standards that govern them provide economic leverage, and cutting-edge research and technologies enhance global appeal. There is a long tradition of states harnessing innovation to project power abroad, but what has changed is the self-perpetuating nature of scientific advances. Developments in artificial intelligence in particular not only unlock new areas of scientific discovery; they also speed up that very process. Artificial intelligence supercharges the ability of scientists and engineers to discover ever more powerful technologies, fostering advances in artificial intelligence itself as well as in other fields—and reshaping the world in the process…(More)”.

Ten lessons for data sharing with a data commons


Article by Robert L. Grossman: “…Lesson 1. Build a commons for a specific community with a specific set of research challenges

Although a few data repositories that serve the general scientific community have proved successful, data commons that target a specific user community have generally been the most successful. The first lesson is to build a data commons for a specific research community that is struggling to answer specific research challenges with data. As a consequence, a data commons is a partnership between the data scientists developing and supporting the commons and the disciplinary scientists with the research challenges.

Lesson 2. Successful commons curate and harmonize the data

Successful commons curate and harmonize the data and produce data products of broad interest to the community. It’s time consuming, expensive, and labor intensive to curate and harmonize data, but much of the value of a data commons lies in centralizing this work so that it can be done once instead of many times by each group that needs the data. These days, it is very easy to think of a data commons as a platform containing data, not spend the time curating or harmonizing it, and then be surprised that the data in the commons is not more widely used and its impact is not as high as expected.
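As a concrete (and entirely hypothetical) illustration of what “done once, centrally” harmonization means in practice, the sketch below maps records from two imagined contributing sites, each with its own field names and coding schemes, onto one common schema maintained by the commons; every name and code here is invented for illustration.

```python
# Hypothetical sketch: the commons maintains one mapping per contributing site,
# so downstream researchers work against a single common schema instead of
# repeating this cleanup themselves. All field names and codes are invented.

COMMON_FIELDS = {"subject_id", "sex", "age_years", "diagnosis_code"}

def harmonize_site_a(record: dict) -> dict:
    """Site A reports age in years and sex as 'M'/'F'."""
    return {
        "subject_id": record["patient_id"],
        "sex": {"M": "male", "F": "female"}.get(record["sex"], "unknown"),
        "age_years": int(record["age"]),
        "diagnosis_code": record["icd10"],
    }

def harmonize_site_b(record: dict) -> dict:
    """Site B reports age in months and sex as numeric codes."""
    return {
        "subject_id": record["id"],
        "sex": {1: "male", 2: "female"}.get(record["sex_code"], "unknown"),
        "age_years": record["age_months"] // 12,
        "diagnosis_code": record["dx"],
    }

raw_records = [
    ({"patient_id": "A-001", "sex": "F", "age": 63, "icd10": "C50.9"}, harmonize_site_a),
    ({"id": "B-017", "sex_code": 1, "age_months": 540, "dx": "C61"}, harmonize_site_b),
]

harmonized = [mapper(record) for record, mapper in raw_records]
assert all(set(row) == COMMON_FIELDS for row in harmonized)
print(harmonized)
```

The mappings themselves are cheap to write but expensive to get right, validate, and maintain, which is why centralizing them in the commons pays off.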

Lesson 3. It’s ultimately about the data and its value to generate new research discoveries

However important a study may be, few scientists will try to replicate previously published work. Instead, data is usually accessed if it can lead to a new high-impact paper. For this reason, data commons play two different but related roles. First, they preserve data for reproducible science; this accounts for only a small fraction of data access, but it plays a critical role. Second, data commons make data available for new, high-value science.

Lesson 4. Reduce barriers to access to increase usage

A useful rule of thumb is that every barrier to data access cuts down access by a factor of 10. Common barriers that reduce use of a commons include: registration vs. no registration; open access vs. controlled access; click-through agreements vs. signing of data usage agreements and approval by data access committees; and license restrictions on the use of the data vs. no license restrictions…(More)”.
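To make the arithmetic of that rule of thumb explicit (the numbers below are purely illustrative, not from the article), stacked barriers compound multiplicatively, so three barriers would leave roughly one in a thousand potential users:

```python
# Illustrative only: if each access barrier cuts usage by roughly a factor of 10,
# barriers compound multiplicatively. The starting figure is hypothetical.
potential_users = 100_000

for barriers in range(4):
    expected_users = potential_users / (10 ** barriers)
    print(f"{barriers} barrier(s): ~{expected_users:,.0f} expected users")

# 0 barrier(s): ~100,000 expected users
# 1 barrier(s): ~10,000 expected users
# 2 barrier(s): ~1,000 expected users
# 3 barrier(s): ~100 expected users
```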