The Dead Internet To Come


Essay by Robert Mariani: “On an ordinary morning, you cradle a steaming cup of coffee while scrolling through your social media feeds. You’re in your happy place, engaging with the thoughts and creations of countless individuals at your leisure.

But something feels off. There’s no proof, but your instincts are sure of it. For a while now, the microcelebrities on Twitter have been engaging with you more than they should be, more than they were a few months ago. You’ve noticed patterns in conversations that are beyond your conscious mind’s power to decipher; there’s a rhythm to trends and replies that did not exist before.

A vague dread grips you. Why is everything a little bit different now? The smallest details are wrong. Your favorite posters have vanished from all platforms. There haven’t been any new memes for some time, only recycled iterations of old ones. Influencers are coordinated in their talking points like puppets being pulled by the same strings. Your favorite niche YouTuber has only recently been posting new content with any regularity. Is this a message? Is this what schizophrenia is like?

Dread gives way to the cold stab of terrible certainty as it hits you: they aren’t people. They’re bots. The Internet is all bots. Under your nose, the Internet of real people has gradually shifted into a digital world of shadow puppets. They look like people, they act like people, but there are no people left. Well, there’s you and maybe a few others, but you can’t tell the difference, because the bots wear a million masks. You might be alone, and have been for a while. It’s a horror worse than blindness: the certainty that your vision is clear but there is no genuine world to be seen.

This is the world of the Internet after about 2016 — at least according to the Dead Internet Theory, whose defining description appeared in an online forum in 2021. The theory suggests a conspiracy to gaslight the entire world by replacing the user-powered Internet with an empty, AI-powered one populated by bot impostors. It explains why all the cool people get banned, why Internet culture has become so stale, why the top influencers are the worst ones, and why discourse cycles seem so mechanically uniform. The perpetrators are the usual suspects: the U.S. government trying to control public opinion and corporations trying to get us to buy more stuff…(More)”.

The contested role of AI ethics boards in smart societies: a step towards improvement based on board composition by sortition


Paper by Ludovico Giacomo Conti & Peter Seele: “The recent proliferation of AI scandals led private and public organisations to implement new ethics guidelines, introduce AI ethics boards, and list ethical principles. Nevertheless, some of these efforts remained a façade not backed by any substantive action. Such behaviour made the public question the legitimacy of the AI industry and prompted scholars to accuse the sector of ethicswashing, machinewashing, and ethics trivialisation—criticisms that spilt over to institutional AI ethics boards. To counter this widespread issue, contributions in the literature have proposed fixes that do not consider its systemic character and are based on a top-down, expert-centric governance. To fill this gap, we propose to make use of qualified informed lotteries: a two-step model that transposes the documented benefits of the ancient practice of sortition into the selection of AI ethics boards’ members and combines them with the advantages of a stakeholder-driven, participative, and deliberative bottom-up process typical of Citizens’ Assemblies. The model permits increasing the public’s legitimacy and participation in the decision-making process and its deliverables, curbing the industry’s over-influence and lobbying, and diminishing the instrumentalisation of ethics boards. We suggest that this sortition-based approach may provide a sound base for both public and private organisations in smart societies for constructing a decentralised, bottom-up, participative digital democracy…(More)”.

Generative AI, Jobs, and Policy Response


Paper by the Global Partnership on AI: “Generative AI and the Future of Work remains notably absent from the global AI governance dialogue. Given the transformative potential of this technology in the workplace, this oversight suggests a significant gap, especially considering the substantial implications this technology has for workers, economies and society at large. As interest grows in the effects of Generative AI on occupations, debates centre around roles being replaced or enhanced by technology. Yet there is an incognita, the “Big Unknown”: an important number of workers whose future depends on decisions yet to be made.
In this brief, recent articles about the topic are surveyed with special attention to the “Big Unknown”. It is not a marginal number: nearly 9% of the workforce, or 281 million workers worldwide, are in this category. Unlike previous AI developments, which focused on automating narrow tasks, Generative AI models possess the scope, versatility, and economic viability to impact jobs across multiple industries and at varying skill levels. Their ability to produce human-like outputs in areas like language, content creation and customer interaction, combined with rapid advancement and low deployment costs, suggests potential near-term impacts that are much broader and more abrupt than prior waves of AI. Governments, companies, and social partners should aim to minimize any potential negative effects of Generative AI technology in the world of work, as well as harness potential opportunities to support productivity growth and decent work. This brief presents concrete policy recommendations at the global and local level. These insights are aimed at guiding the discourse towards a balanced and fair integration of Generative AI in our professional landscape. To navigate this uncertain landscape and ensure that the benefits of Generative AI are equitably distributed, we recommend 10 policy actions that could serve as a starting point for discussion and implementation…(More)”.

The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice


Paper by Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang: “Despite the growing consensus that stakeholders affected by AI systems should participate in their design, enormous variation and implicit disagreements exist among current approaches. For researchers and practitioners who are interested in taking a participatory approach to AI design and development, it remains challenging to assess the extent to which any participatory approach grants substantive agency to stakeholders. This article thus aims to ground what we dub the “participatory turn” in AI design by synthesizing existing theoretical literature on participation and through empirical investigation and critique of its current practices. Specifically, we derive a conceptual framework through synthesis of literature across technology design, political theory, and the social sciences that researchers and practitioners can leverage to evaluate approaches to participation in AI design. Additionally, we articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners. We use these empirical findings to understand the current state of participatory practice and subsequently provide guidance to better align participatory goals and methods in a way that accounts for practical constraints…(More)”.

Public Net Worth


Book by Jacob Soll, Willem Buiter, John Crompton, Ian Ball, and Dag Detter: “As individuals, we depend on the services that governments provide. Collectively, we look to them to tackle the big problems – from long-term climate and demographic change to short-term crises like pandemics or war.  This is very expensive, and is getting more so.

But governments don’t provide – or use – basic financial information that every business is required to maintain. They ignore the value of public assets and most liabilities. This leads to inefficiency and bad decision-making, and piles up problems for the future.

Governments need to create balance sheets that properly reflect assets and liabilities, and to understand their future obligations and revenue prospects. Net Worth – both today and for the future – should be the measure of financial strength and success.

Only if this information is put at the centre of government financial decision-making can the present challenges to public finances around the world be addressed effectively, and in a way that is fair to future generations.

The good news is that there are ways to deal with these problems and make government finances more resilient and fairer to future generations.

The facts, and the solutions, are non-partisan, and so is this book. Responsible leaders of any political persuasion need to understand the issues and the tools that can enable them to deliver policy within these constraints…(More)”.

Open: A Pan-ideological Panacea, a Free Floating Signifier


Paper by Andrea Liu: “Open” is a word that originated from FOSS (the Free and Open Source Software movement) to mean a Commons-based, non-proprietary form of computer software development (Linux, Apache) based on a decentralized, poly-hierarchical, distributed labor model. But the word “open” has now acquired an unnerving over-elasticity, a word that means so many things that at times it appears meaningless. This essay is a rhetorical analysis (if not a deconstruction) of how the term “open” functions in digital culture, the promiscuity (if not gratuitousness) with which the term “open” is utilized in the wider society, and the sometimes blatantly contradictory ideologies indiscriminately lumped together under this word…(More)”

Data Sandboxes: Managing the Open Data Spectrum


Primer by Uma Kalkar, Sampriti Saxena, and Stefaan Verhulst: “Opening up data offers opportunities to enhance governance, elevate public and private services, empower individuals, and bolster public well-being. However, achieving the delicate balance between open data access and the responsible use of sensitive and valuable information presents complex challenges. Data sandboxes are an emerging approach to balancing these needs.

In this white paper, The GovLab seeks to answer the following questions surrounding data sandboxes: What are data sandboxes? How can data sandboxes empower decision-makers to unlock the potential of open data while maintaining the necessary safeguards for data privacy and security? Can data sandboxes help decision-makers overcome barriers to data access and promote purposeful, informed data (re-)use?

The six characteristics of a data sandbox. Image by The GovLab.

After evaluating a series of case studies, we identified the following key findings:

  • Data sandboxes present six unique characteristics that make them a strong tool for facilitating open data and data re-use. These six characteristics are: controlled; secure; multi-sectoral and collaborative; high-computing environments; temporal in nature; and adaptable and scalable.
  • Data sandboxes can be used for: pre-engagement assessment, data mesh enablement, rapid prototyping, familiarization, quality and privacy assurance, experimentation and ideation, white labeling and minimization, and maturing data insights.
  • There are many benefits to implementing data sandboxes. We found ten value propositions, such as: decreasing risk in accessing more sensitive data; enhancing data capacity; and fostering greater experimentation and innovation, to name a few.
  • When looking to implement a data sandbox, decision-makers should consider how they will attract and obtain high-quality, relevant data, keep the data fresh for accurate re-use, manage risks of data (re-)use, and translate and scale up sandbox solutions in real markets.
  • Advances in the use of the Internet of Things and Privacy Enhancing Technologies could help improve the creation, preparation, analysis, and security of data in a data sandbox. The development of these technologies, in parallel with European legislative measures such as the Digital Markets Act, the Data Act and the Data Governance Act, can improve the way data is unlocked in a data sandbox, improving trust and encouraging data (re-)use initiatives…(More)” (FULL PRIMER)

Seven routes to experimentation in policymaking: a guide to applied behavioural science methods


OECD Resource: “…offers guidelines and a visual roadmap to help policymakers choose the most fit-for-purpose evidence collection method for their specific policy challenge.

Source: Elaboration of the authors: Varazzani, C., Emmerling, T., Brusoni, S., Fontanesi, L., and Tuomaila, H., (2023), “Seven routes to experimentation: A guide to applied behavioural science methods,” OECD Working Papers on Public Governance, OECD Publishing, Paris. Note: The authors elaborated the map based on a previous map ideated, researched, and designed by Laura Castro Soto, Judith Wagner, and Torben Emmerling (sevenroutes.com).

The seven applied behavioural science methods:

  • Randomised Controlled Trials (RCTs) are experiments that can demonstrate a causal relationship between an intervention and an outcome, by randomly assigning individuals to an intervention group and a control group.
  • A/B testing tests two or more manipulations (such as variants of a webpage) to assess which performs better in terms of a specific goal or metric.
  • Difference-in-Difference is an experimental method that estimates the causal effect of an intervention by comparing changes in outcomes between an intervention group and a control group before and after the intervention.
  • Before-After studies assess the impact of an intervention or event by comparing outcomes or measurements before and after its occurrence, without a control group.
  • Longitudinal studies collect data from the same individuals or groups over an extended period to assess trends over time.
  • Correlational studies help to investigate the relationship between two or more variables to determine if they vary together (without implying causation).
  • Qualitative studies explore the underlying meanings and nuances of a phenomenon through interviews, focus group sessions, or other exploratory methods based on conversations and observations…(More)”.
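The comparison logic behind the difference-in-difference method above can be sketched in a few lines of Python. This is an illustrative sketch only, not taken from the OECD guide; the function name and the group means are hypothetical numbers chosen for the example:

```python
# Minimal two-group, two-period difference-in-differences sketch.
# The estimate is the change in the intervention group minus the
# change in the control group, which nets out shared time trends.

def diff_in_diff(treat_before, treat_after, control_before, control_after):
    """Return the estimated intervention effect from four group means."""
    return (treat_after - treat_before) - (control_after - control_before)

# Hypothetical mean outcomes for each group and period.
effect = diff_in_diff(treat_before=10.0, treat_after=15.0,
                      control_before=10.0, control_after=12.0)
print(effect)  # 3.0: the intervention group improved 3 units more than control
```

In practice, analysts estimate the same quantity with a regression that includes group, period, and interaction terms, which also yields standard errors; the arithmetic above captures only the core identification idea.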

When it comes to AI and democracy, we cannot be careful enough


Article by Marietje Schaake: “Next year is being labelled the “Year of Democracy”: a series of key elections are scheduled to take place, including in places with significant power and populations, such as the US, EU, India, Indonesia and Mexico. In many of these jurisdictions, democracy is under threat or in decline. It is certain that our volatile world will look different after 2024. The question is how — and why.

Artificial intelligence is one of the wild cards that may well play a decisive role in the upcoming elections. The technology already features in varied ways in the electoral process — yet many of these products have barely been tested before their release into society.

Generative AI, which makes synthetic texts, videos and voice messages easy to produce and difficult to distinguish from human-generated content, has been embraced by some political campaign teams. A controversial video showing a crumbling world should Joe Biden be re-elected was not created by a foreign intelligence service seeking to manipulate US elections, but by the Republican National Committee. 

Foreign intelligence services are also using generative AI to boost their influence operations. My colleague at Stanford, Alex Stamos, warns: “What once took a team of 20 to 40 people working out of [Russia or Iran] to produce 100,000 pieces can now be done by one person using open-source gen AI”.

AI also makes it easier to target messages so they reach specific audiences. This individualised experience will increase the complexity of investigating whether internet users and voters are being fed disinformation.

While much of generative AI’s impact on elections is still being studied, what is known does not reassure. We know people find it hard to distinguish between synthetic media and authentic voices, making it easy to deceive them. We also know that AI repeats and entrenches bias against minorities. Plus, we’re aware that AI companies seeking profits do not also seek to promote democratic values.  

Many members of the teams hired to deal with foreign manipulation and disinformation by social media companies, particularly since 2016, have been laid off. YouTube has explicitly said it will no longer remove “content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections”. It is, of course, highly likely that lies about past elections will play a role in 2024 campaigns.

Similarly, after Elon Musk took over X, formerly known as Twitter, he gutted trust and safety teams. Right when defence barriers are needed the most, they are being taken down…(More)”.

International Definitions of Artificial Intelligence


Report by IAPP: “Computer scientist John McCarthy coined the term artificial intelligence in 1955, defining it as “the science and engineering of making intelligent machines.” He organized the Dartmouth Summer Research Project on Artificial Intelligence a year later — an event that many consider the birthplace of the field.

In today’s world, the definition of AI has been in continuous evolution, its contours and constraints changing to align with current and perhaps future technological progress and cultural contexts. In fact, most papers and articles are quick to point out the lack of common consensus around the definition of AI. As a resource from British research organization the Ada Lovelace Institute states, “We recognise that the terminology in this area is contested. This is a fast-moving topic, and we expect that terminology will evolve quickly.” The difficulty in defining AI is illustrated by what AI historian Pamela McCorduck called the “odd paradox,” referring to the idea that, as computer scientists find new and innovative solutions, computational techniques once considered AI lose the title as they become common and repetitive.

The indeterminate nature of the term poses particular challenges in the regulatory space. Indeed, in 2017 a New York City Council task force downgraded its mission to regulate the city’s use of automated decision-making systems to just defining the types of systems subject to regulation because it could not agree on a workable, legal definition of AI.

With this understanding, the following chart provides a snapshot of some of the definitions of AI from various global and sectoral (government, civil society and industry) perspectives. The chart is not an exhaustive list. It allows for cross-contextual comparisons from key players in the AI ecosystem…(More)”