Technology and the Global Struggle for Democracy


Essay by Manuel Muñiz: “The commemoration of the first anniversary of the January 6, 2021, attack on the US Capitol by supporters of former President Donald Trump showed that the extreme political polarization that fueled the riot also frames Americans’ interpretations of it. It would, however, be gravely mistaken to view what happened as a uniquely American phenomenon with uniquely American causes. The disruption of the peaceful transfer of power that day was part of something much bigger.

As part of the commemoration, President Joe Biden said that a battle is being fought over “the soul of America.” What is becoming increasingly clear is that this is also true of the international order: its very soul is at stake. China is rising and asserting itself. Populism is widespread in the West and major emerging economies. And chauvinistic nationalism has re-emerged in parts of Europe. All signs point to increasing illiberalism and anti-democratic sentiment around the world.

Against this backdrop, the US hosted a (virtual) “Summit for Democracy” in December, attended by hundreds of national and civil-society leaders. The message of the gathering was clear: democracies must assert themselves firmly and proactively. To that end, the summit devoted numerous sessions to studying the digital revolution and its potentially harmful implications for our political systems.

Emerging technologies pose at least three major risks for democracies. The first concerns how they structure public debate. Social networks balkanize public discourse by segmenting users into ever smaller like-minded communities. Algorithmically driven information echo chambers make it difficult to build social consensus. Worse, social networks are not liable for the content they distribute, which means they can allow misinformation to spread on their platforms with impunity…(More)”.
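
The feedback loop the essay describes can be sketched in a few lines. The toy ranker below is an illustration, not any platform’s actual algorithm (the topic labels, weights and engagement model are all assumptions): it recommends whatever most resembles a user’s past clicks, then folds each new click back into the profile, so the feed collapses onto a single like-minded community within a few rounds.

```python
# Toy echo-chamber loop: rank posts by past engagement with their topic,
# then count every shown post as a new engagement. Illustrative only.
import random

TOPICS = ["politics-left", "politics-right", "sports", "science", "music"]

def recommend(profile, posts, k=3):
    """Return the k posts whose topics the user has engaged with most."""
    return sorted(posts, key=lambda p: -profile.get(p["topic"], 0))[:k]

profile = {"politics-left": 1}   # a single initial click seeds the profile
posts = [{"id": i, "topic": random.choice(TOPICS)} for i in range(200)]

for round_no in range(5):
    feed = recommend(profile, posts)
    for post in feed:            # the user engages with whatever is shown
        profile[post["topic"]] = profile.get(post["topic"], 0) + 1
    print(round_no, sorted(profile.items(), key=lambda kv: -kv[1]))
# The seed topic dominates after a few rounds: consensus-building content
# from other communities never reaches the user.
```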

A data ‘black hole’: Europol ordered to delete vast store of personal data


Article by Apostolis Fotiadis, Ludek Stavinoha, Giacomo Zandonini, Daniel Howden: “…The EU’s police agency, Europol, will be forced to delete much of a vast store of personal data after the bloc’s data protection watchdog found it had been amassed unlawfully. The unprecedented finding from the European Data Protection Supervisor (EDPS) targets what privacy experts are calling a “big data ark” containing billions of points of information. Sensitive data in the ark has been drawn from crime reports, hacked from encrypted phone services and sampled from asylum seekers never involved in any crime.

According to internal documents seen by the Guardian, Europol’s cache contains at least 4 petabytes – equivalent to 3m CD-ROMs or a fifth of the entire contents of the US Library of Congress. Data protection advocates say the volume of information held on Europol’s systems amounts to mass surveillance and is a step on its road to becoming a European counterpart to the US National Security Agency (NSA), the organisation whose clandestine online spying was revealed by whistleblower Edward Snowden….(More)”.

Citizen Power Europe. The Making of a European Citizens’ Assembly


Paper by Alberto Alemanno and Kalypso Nicolaïdis: “This article argues that if the EU is to recover its dented popularity among European publics, we need to build a European democratic ecosystem to nurture, scale and ultimately accommodate the daily competing claims of Europe’s citizens. To attain this objective, it presents and discusses three big ideas that are at the heart of the renewed EU ecosystem that we are calling for. These are: participation beyond voting; a transnational and inclusive public space; and a democratic panopticon for greater accountability. Promisingly enough, these ideas already find reflection in the first batch of the citizens’ recommendations emerging from the Conference on the Future of Europe (CoFoE). Even if these recommendations still need to be refined through deliberation by the plenary of the CoFoE, they add up to a clear and urgent message: let’s tap into our collective intelligence and democratic imagination to construct a pan-European public sphere by enhancing mutual connections, knowledge and empowerment between citizens across borders…(More)”.

Our Common AI Future – A Geopolitical Analysis and Road Map, for AI Driven Sustainable Development, Science and Data Diplomacy


(Open Access) Book by Francesco Lapenta: “The premise of this concise but thorough book is that the future, while uncertain and open, is not arbitrary, but the result of a complex series of competing decisions, actors, and events that began in the past, have reached a certain configuration in the present, and will continue to develop into the future. These past and present conditions constitute the basis and origin of future developments that have the potential to grow into a variety of possible, probable, desirable or undesirable future scenarios. The realisation that these future scenarios cannot be totally arbitrary gives scope to the study of the past, indispensable for fully understanding the facts, actors and forces that contributed to the formation of the present, and how certain systems, or dominant models, came to be established (I). The relative openness of future scenarios gives scope to the study of what competing forces and models might exist, their early formation, actors, and initiatives (II), and how they may act as catalysts for alternative theories, models (III and IV) and actions that can influence our future and change its path (V)…

The analyses in the book, which are loosely divided into three phases, move from the past to the present, and begin by identifying best practices and some of the key initiatives that have attempted to achieve these global collaborative goals over the last few decades. Then, moving forward, they describe a roadmap to a possible future based on already existing and developing theories, initiatives, and tools that could underpin these global collaborative efforts in the specific areas of AI and Sustainable Development. In the Road Map for AI Driven Sustainable Development, the analyses identify and stand on the shoulders of a number of past and current global initiatives that have worked for decades to lay the groundwork for this alternative evolutionary and collaborative model. The title of this book acknowledges one of these pivotal efforts and encourages readers to engage with it: the “Our Common Future” report, the Brundtland Commission report published in 1987 by the World Commission on Environment and Development (WCED). Building on the report’s humanistic and socioeconomic vision and ambitions, the analyses investigate a variety of existing and developing best practices that could lead to, or inspire, a shared scientific collaborative model for AI development. They rest on the understanding that, despite political rivalry and competition, governments should collaborate on at least two fundamental issues: one, to establish a set of global “Red Lines” to prohibit the development and use of AIs in specific applications that might pose an ethical or existential threat to humanity and the planet; and two, to create a set of “Green Zones” for scientific diplomacy and cooperation, in order to capitalize on the opportunities that the impending AI era may represent in confronting major collective challenges such as the health and climate crises, the energy crisis, and the sustainable development goals identified in the report and developed by other subsequent global initiatives…(More)”.

The Crowdsourced Panopticon


Book by Jeremy Weissman: “Behind the omnipresent screens of our laptops and smartphones, a digitally networked public has quickly grown larger than the population of any nation on Earth. On the flipside, in front of the ubiquitous recording devices that saturate our lives, individuals are hyper-exposed through a worldwide online broadcast that encourages the public to watch, judge, rate, and rank people’s lives. The interplay of these two forces – the invisibility of the anonymous crowd and the exposure of the individual before that crowd – is a central focus of this book. Informed by critiques of conformity and mass media by some of the greatest philosophers of the past two centuries, as well as by a wide range of historical and empirical studies, Weissman helps shed light on what may happen when our lives are increasingly broadcast online for everyone all the time, to be judged by the global community…(More)”.

‘In Situ’ Data Rights


Essay by Marshall W Van Alstyne, Georgios Petropoulos, Geoffrey Parker, and Bertin Martens: “…Data portability sounds good in theory—number portability improved telephony—but this theory has its flaws.

  • Context: The value of data depends on context. Removing data from that context removes value. A portability exercise by experts at ProgrammableWeb succeeded in downloading basic Facebook data but failed on a re-upload. Individual posts shed the prompts that preceded them and the replies that followed them. After all, that data concerns others.
  • Stagnation: Without a flow of updates, a captured stock depreciates. Data must be refreshed to stay current, and potential users must see those data updates to stay informed.
  • Impotence: Facts removed from their place of residence become less actionable. We cannot use them to make a purchase when removed from their markets or reach a friend when they are removed from their social networks. Data must be reconnected to be reanimated.
  • Market Failure: Innovation is slowed. Consider how markets for business analytics and B2B services develop. Lacking complete context, third parties can only offer incomplete benchmarking and analysis. Platforms that do offer market overview services can charge monopoly prices because they have context that partners and competitors do not.
  • Moral Hazard: Proposed laws seek to give merchants data portability rights, but these entail a problem that competition authorities have not anticipated. Regulators seek to help merchants “multihome,” to affiliate with more than one platform. Merchants can take their earned ratings from one platform to another and foster competition. But when a merchant gains control over its ratings data, magically, low reviews can disappear! Consumers fraudulently edited their personal records under early U.K. open banking rules. With data editing capability, either side can increase fraud, surely not the goal of data portability.

Evidence suggests that following GDPR, E.U. ad effectiveness fell, E.U. Web revenues fell, investment in E.U. startups fell, and the stock and flow of apps available in the E.U. fell, while Google and Facebook, which already had user data, gained rather than lost market share as small firms faced new hurdles the incumbents managed to avoid. To date, the results are far from regulators’ intentions.

We propose a new in situ data right for individuals and firms, and a new theory of benefits. Rather than take data from the platform (ex situ, as portability implies), let us grant users the right to use their data in the location where it resides. Bring the algorithms to the data instead of bringing the data to the algorithms. Users determine when and under what conditions third parties access their in situ data in exchange for new kinds of benefits. Users can revoke access at any time, and third parties must respect that. This patches and repairs the portability problems…(More)”.
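
The mechanics of that proposal are easy to see in miniature. The sketch below is an illustration of the in situ pattern rather than code from the essay (the class, method names and data are all assumed): records stay resident on the platform, a third party’s algorithm runs where the data lives, only derived results leave, and the user can revoke the grant at any time.

```python
# Illustrative in situ access: the algorithm travels to the data,
# the raw data never leaves the platform, and grants are revocable.
class InSituStore:
    def __init__(self, records):
        self._records = records              # data stays on the platform
        self._grants = set()                 # active (user, third party) pairs

    def grant(self, user_id, third_party):
        self._grants.add((user_id, third_party))

    def revoke(self, user_id, third_party):
        self._grants.discard((user_id, third_party))

    def run(self, user_id, third_party, algorithm):
        """Run a third-party algorithm in place; return only its result."""
        if (user_id, third_party) not in self._grants:
            raise PermissionError("no active in situ grant")
        return algorithm(self._records[user_id])

store = InSituStore({"alice": [{"rating": 5}, {"rating": 4}, {"rating": 5}]})
store.grant("alice", "benchmark-service")
avg = store.run("alice", "benchmark-service",
                lambda recs: sum(r["rating"] for r in recs) / len(recs))
print(avg)                                   # ~4.67: a derived result only
store.revoke("alice", "benchmark-service")   # access ends immediately
```

Note how this also shrinks the moral-hazard problem described above: because the ratings never leave the platform, a merchant with in situ access has no copy to quietly edit.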

The unmet potential of open data


Essay by Jane Bambauer: “Open Data holds great promise — and more than thought leaders appreciate. 

Open access to data can lead to a much richer and more diverse range of research and development, hastening innovation. That’s why scientific journals are asking authors to make their data available, why governments are making publicly held records open by default, and why even private companies provide subsets of their data for general research use. Facebook, for example, launched an effort to provide research data that could be used to study the impact of social networks on election outcomes. 

Yet none of these moves have significantly changed the landscape. Because of lingering skepticism and some legitimate anxieties, we have not yet democratized access to Big Data.

There are a few well-trodden explanations for this failure — or this tragedy of the anti-commons — but none should dissuade us from pushing forward….

Finally, creating the infrastructure required to clean data, link it to other data sources, and make it useful for the most valuable research questions will not happen without a significant investment from somebody, be it the government or a private foundation. As Stefaan Verhulst, Andrew Zahuranec, and Andrew Young have explained, creating a useful data commons requires much more infrastructure and cultural buy-in than one might think. 

From my perspective, however, the greatest impediment to the open data movement has been a lack of vision within the intelligentsia. Outside a few domains like public health, intellectuals continue to traffic in and thrive on anecdotes and narratives. They have not perceived or fully embraced how access to broad and highly diverse data could radically change newsgathering (we could observe purchasing or social media data in real time), market competition (imagine designing a new robot using data collected from Uber’s autonomous cars), and responsive government (we could directly test claims of cause and effect related to highly salient issues during election time). 

With a quiet accumulation of use cases and increasing competence in handling and digesting data, we will eventually reach a tipping point where the appetite for more useful research data will outweigh the concerns and inertia that have bogged down progress in the open data movement…(More)”.

Quarantined Data? The impact, scope & challenges of open data during COVID


Chapter by Álvaro V. Ramírez-Alujas: “How do rates of COVID-19 infection increase? How do populations respond to lockdown measures? How is the pandemic affecting the economic and social activity of communities beyond health? What can we do to mitigate risks and support families in this context? The answer to these and other key questions is part of the intense global public debate on the management of the health crisis and on whether appropriate public policy measures have been taken to combat the impact and effects of COVID-19 around the world. The common ground to all of them? The availability and use of public data and information. This chapter reflects on the relevance of public information and on the availability, processing and use of open data as the primary hub and key ingredient in the responsiveness of governments and public institutions to the COVID-19 pandemic and its multiple impacts on society. Discussions are underway concerning the scope, paradoxes, lessons learned, and visible challenges that emerge from the available evidence and from comparative analysis of government strategies in the region. These point to the urgent need to shift towards a more robust, sustainable data infrastructure, anchored in a logic of strengthening the ecosystem of actors (public and private sectors, civil society and the scientific community) so as to shape a framework of governance and a strong, emerging institutional architecture based on data management for sustainable development on a human scale…(More)”.

Incentivising research data sharing: a scoping review


Paper by Helen Buckley Woods and Stephen Pinfield: “Numerous mechanisms exist to incentivise researchers to share their data. This scoping review aims to identify and summarise evidence of the efficacy of different interventions to promote open data practices and provide an overview of current research…. Seven major themes in the literature were identified: publisher/journal data sharing policies, metrics, software solutions, research data sharing agreements in general, open science ‘badges’, funder mandates, and initiatives….

A number of key messages for data sharing include: the need to build on existing cultures and practices, meeting people where they are and tailoring interventions to support them; the importance of publicising and explaining the policy/service widely; the need to have disciplinary data champions to model good practice and drive cultural change; the requirement to resource interventions properly; and the imperative to provide robust technical infrastructure and protocols, such as labelling of data sets, use of DOIs, data standards and use of data repositories….(More)”.
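
To make the last of those messages concrete, a minimal sketch of what labelling a data set with a DOI can look like is shown below. It is loosely modelled on DataCite-style metadata; the DOI, names and field layout are illustrative placeholders, not the exact DataCite schema or a real record.

```python
# Hypothetical dataset label: a persistent identifier plus the minimum
# descriptive fields a repository would typically require.
dataset_record = {
    "identifier": {"doi": "10.1234/example.dataset.v1"},  # placeholder DOI
    "creators": ["Doe, J.", "Roe, R."],
    "title": "Survey responses on data sharing incentives",
    "publisher": "Example Data Repository",
    "publication_year": 2022,
    "resource_type": "Dataset",
    "version": "1.0",
    "license": "CC-BY-4.0",
}
```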

If AI Is Predicting Your Future, Are You Still Free?


Essay by Carissa Véliz: “…Today, prediction is mostly done through machine learning algorithms that use statistics to fill in the blanks of the unknown. Text algorithms use enormous language databases to predict the most plausible ending to a string of words. Game algorithms use data from past games to predict the best possible next move. And algorithms that are applied to human behavior use historical data to infer our future: what we are going to buy, whether we are planning to change jobs, whether we are going to get sick, whether we are going to commit a crime or crash our car. Under such a model, insurance is no longer about pooling risk from large sets of people. Rather, predictions have become individualized, and you are increasingly paying your own way, according to your personal risk scores—which raises a new set of ethical concerns.
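
Stripped to its essentials, that kind of text prediction is statistical counting. The toy model below is an illustration with a made-up corpus (real systems use neural networks trained on vastly more data, but the principle of guessing the future from past frequencies is the same):

```python
# Toy next-word predictor: count which word follows which, then return
# the most frequent continuation. Corpus and outputs are illustrative.
from collections import Counter, defaultdict

corpus = ("the future is uncertain . the future is open . "
          "the past is fixed .").split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the statistically most plausible next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("future"))   # 'is' (it followed 'future' both times)
print(predict_next("is"))       # three continuations tie; the guess is a guess
```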

An important characteristic of predictions is that they do not describe reality. Forecasting is about the future, not the present, and the future is something that has yet to become real. A prediction is a guess, and all sorts of subjective assessments and biases regarding risk and values are built into it. There can be forecasts that are more or less accurate, to be sure, but the relationship between probability and actuality is much more tenuous and ethically problematic than some assume.

Institutions today, however, often try to pass off predictions as if they were a model of objective reality. And even when AI’s forecasts are merely probabilistic, they are often interpreted as deterministic in practice—partly because human beings are bad at understanding probability and partly because the incentives around avoiding risk end up reinforcing the prediction. (For example, if someone is predicted to be 75 percent likely to be a bad employee, companies will not want to take the risk of hiring them when they have candidates with a lower risk score)…(More)”.
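
How a probabilistic score hardens into a deterministic decision can be shown in a few lines. Only the 75 percent figure comes from the essay; the threshold and candidate count are illustrative assumptions:

```python
# A 75% risk score never means "bad employee" with certainty, but a
# hiring rule that thresholds the score treats it that way.
predicted_risk = 0.75            # "75 percent likely to be a bad employee"
HIRING_THRESHOLD = 0.5           # hypothetical company policy

decision = "reject" if predicted_risk > HIRING_THRESHOLD else "hire"
print(decision)                  # 'reject', deterministically, every time

# Yet by the model's own arithmetic, out of 100 such candidates
# 25 would have been good hires; the threshold erases that nuance.
expected_good_hires = 100 * (1 - predicted_risk)
print(expected_good_hires)       # 25.0
```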