How ChatGPT and other AI tools could disrupt scientific publishing


Article by Gemma Conroy: “When radiologist Domenico Mastrodicasa finds himself stuck while writing a research paper, he turns to ChatGPT, the chatbot that produces fluent responses to almost any query in seconds. “I use it as a sounding board,” says Mastrodicasa, who is based at the University of Washington School of Medicine in Seattle. “I can produce a publication-ready manuscript much faster.”

Mastrodicasa is one of many researchers experimenting with generative artificial-intelligence (AI) tools to write text or code. He pays for ChatGPT Plus, the subscription version of the bot based on the large language model (LLM) GPT-4, and uses it a few times a week. He finds it particularly useful for suggesting clearer ways to convey his ideas. Although a Nature survey suggests that scientists who use LLMs regularly are still in the minority, many expect that generative AI tools will become regular assistants for writing manuscripts, peer-review reports and grant applications.

Those are just some of the ways in which AI could transform scientific communication and publishing. Science publishers are already experimenting with generative AI in scientific search tools and for editing and quickly summarizing papers. Many researchers think that non-native English speakers could benefit most from these tools. Some see generative AI as a way for scientists to rethink how they interrogate and summarize experimental results altogether — they could use LLMs to do much of this work, meaning less time writing papers and more time doing experiments…(More)”.

The growing energy footprint of artificial intelligence


Paper by Alex de Vries: “Throughout 2022 and 2023, artificial intelligence (AI) has witnessed a period of rapid expansion and extensive, large-scale application. Prominent tech companies such as Alphabet and Microsoft significantly increased their support for AI in 2023, influenced by the successful launch of OpenAI’s ChatGPT, a conversational generative AI chatbot that reached 100 million users in an unprecedented 2 months. In response, Microsoft and Alphabet introduced their own chatbots, Bing Chat and Bard, respectively.

This accelerated development raises concerns about the electricity consumption and potential environmental impact of AI and data centers. In recent years, data center electricity consumption has accounted for a relatively stable 1% of global electricity use, excluding cryptocurrency mining. Between 2010 and 2018, global data center electricity consumption may have increased by only 6%.

There is increasing apprehension that the computational resources necessary to develop and maintain AI models and applications could cause a surge in data centers’ contribution to global electricity consumption.

This commentary explores initial research on AI electricity consumption and assesses the potential implications of widespread AI technology adoption on global data center electricity use. The piece discusses both pessimistic and optimistic scenarios and concludes with a cautionary note against embracing either extreme…(More)”.

The Dead Internet To Come


Essay by Robert Mariani: “On an ordinary morning, you cradle a steaming cup of coffee while scrolling through your social media feeds. You’re in your happy place, engaging with the thoughts and creations of countless individuals at your leisure.

But something feels off. There’s no proof, but your instincts are sure of it. For a while now, the microcelebrities on Twitter have been engaging with you more than they should be, more than they were a few months ago. You’ve noticed patterns in conversations that are beyond your conscious mind’s power to decipher; there’s a rhythm to trends and replies that did not exist before.

A vague dread grips you. Why is everything a little bit different now? The smallest details are wrong. Your favorite posters have vanished from all platforms. There haven’t been any new memes for some time, only recycled iterations of old ones. Influencers are coordinated in their talking points like puppets being pulled by the same strings. Your favorite niche YouTuber has only recently been posting new content with any regularity. Is this a message? Is this what schizophrenia is like?

Dread gives way to the cold stab of terrible certainty as it hits you: they aren’t people. They’re bots. The Internet is all bots. Under your nose, the Internet of real people has gradually shifted into a digital world of shadow puppets. They look like people, they act like people, but there are no people left. Well, there’s you and maybe a few others, but you can’t tell the difference, because the bots wear a million masks. You might be alone, and have been for a while. It’s a horror worse than blindness: the certainty that your vision is clear but there is no genuine world to be seen.

This is the world of the Internet after about 2016 — at least according to the Dead Internet Theory, whose defining description appeared in an online forum in 2021. The theory suggests a conspiracy to gaslight the entire world by replacing the user-powered Internet with an empty, AI-powered one populated by bot impostors. It explains why all the cool people get banned, why Internet culture has become so stale, why the top influencers are the worst ones, and why discourse cycles seem so mechanically uniform. The perpetrators are the usual suspects: the U.S. government trying to control public opinion and corporations trying to get us to buy more stuff…(More)”.

The contested role of AI ethics boards in smart societies: a step towards improvement based on board composition by sortition


Paper by Ludovico Giacomo Conti & Peter Seele: “The recent proliferation of AI scandals led private and public organisations to implement new ethics guidelines, introduce AI ethics boards, and list ethical principles. Nevertheless, some of these efforts remained a façade not backed by any substantive action. Such behaviour made the public question the legitimacy of the AI industry and prompted scholars to accuse the sector of ethicswashing, machinewashing, and ethics trivialisation—criticisms that spilt over to institutional AI ethics boards. To counter this widespread issue, contributions in the literature have proposed fixes that do not consider its systemic character and are based on a top-down, expert-centric governance. To fill this gap, we propose to make use of qualified informed lotteries: a two-step model that transposes the documented benefits of the ancient practice of sortition into the selection of AI ethics boards’ members and combines them with the advantages of a stakeholder-driven, participative, and deliberative bottom-up process typical of Citizens’ Assemblies. The model increases the public legitimacy of, and public participation in, the decision-making process and its deliverables, curbs the industry’s over-influence and lobbying, and diminishes the instrumentalisation of ethics boards. We suggest that this sortition-based approach may provide a sound base for both public and private organisations in smart societies for constructing a decentralised, bottom-up, participative digital democracy…(More)”.
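The two-step "qualified informed lottery" the authors describe — first filter the stakeholder pool by a qualification criterion, then fill the board by random draw — can be sketched in a few lines. The qualification rule, pool structure, and names below are illustrative assumptions, not details taken from the paper:

```python
import random

def qualified_informed_lottery(candidates, is_qualified, board_size, seed=None):
    """Two-step sketch: (1) filter the stakeholder pool down to candidates
    meeting a minimal qualification bar (e.g. having completed an information
    session), (2) draw the board by lottery from that qualified pool, so no
    single interest group can pack the board. Illustrative only."""
    rng = random.Random(seed)  # seeded for a reproducible, auditable draw
    pool = [c for c in candidates if is_qualified(c)]
    if len(pool) < board_size:
        raise ValueError("qualified pool is smaller than the board")
    return rng.sample(pool, board_size)  # sampling without replacement

# Toy stakeholder pool: half have completed a hypothetical information session.
pool = [{"name": f"stakeholder-{i}", "informed": i % 2 == 0} for i in range(20)]
board = qualified_informed_lottery(pool, lambda c: c["informed"], board_size=5, seed=42)
print([member["name"] for member in board])
```

Publishing the seed (or deriving it from a public source of randomness) is one way such a lottery could be made verifiable by the stakeholders themselves.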

Generative AI, Jobs, and Policy Response


Paper by the Global Partnership on AI: “Generative AI and the Future of Work remains notably absent from the global AI governance dialogue. Given the transformative potential of this technology in the workplace, this oversight suggests a significant gap, especially considering the substantial implications this technology has for workers, economies and society at large. As interest grows in the effects of Generative AI on occupations, debates centre around roles being replaced or enhanced by technology. Yet there is an incognita, the “Big Unknown”: an important number of workers whose future depends on decisions yet to be made.

In this brief, recent articles about the topic are surveyed with special attention to the “Big Unknown”. It is not a marginal number: nearly 9% of the workforce, or 281 million workers worldwide, are in this category. Unlike previous AI developments, which focused on automating narrow tasks, Generative AI models possess the scope, versatility, and economic viability to impact jobs across multiple industries and at varying skill levels. Their ability to produce human-like outputs in areas like language, content creation and customer interaction, combined with rapid advancement and low deployment costs, suggests potential near-term impacts that are much broader and more abrupt than prior waves of AI. Governments, companies, and social partners should aim to minimize any potential negative effects from Generative AI technology in the world of work, as well as harness potential opportunities to support productivity growth and decent work. This brief presents concrete policy recommendations at the global and local level. These insights aim to guide the discourse towards a balanced and fair integration of Generative AI in our professional landscape. To navigate this uncertain landscape and ensure that the benefits of Generative AI are equitably distributed, we recommend 10 policy actions that could serve as a starting point for discussion and implementation…(More)”.

The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice


Paper by Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang: “Despite the growing consensus that stakeholders affected by AI systems should participate in their design, enormous variation and implicit disagreements exist among current approaches. For researchers and practitioners who are interested in taking a participatory approach to AI design and development, it remains challenging to assess the extent to which any participatory approach grants substantive agency to stakeholders. This article thus aims to ground what we dub the “participatory turn” in AI design by synthesizing existing theoretical literature on participation and through empirical investigation and critique of its current practices. Specifically, we derive a conceptual framework through synthesis of literature across technology design, political theory, and the social sciences that researchers and practitioners can leverage to evaluate approaches to participation in AI design. Additionally, we articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners. We use these empirical findings to understand the current state of participatory practice and subsequently provide guidance to better align participatory goals and methods in a way that accounts for practical constraints…(More)”.

Public Net Worth


Book by Jacob Soll, Willem Buiter, John Crompton, Ian Ball, and Dag Detter: “As individuals, we depend on the services that governments provide. Collectively, we look to them to tackle the big problems – from long-term climate and demographic change to short-term crises like pandemics or war. This is very expensive, and is getting more so.

But governments don’t provide – or use – basic financial information that every business is required to maintain. They ignore the value of public assets and most liabilities. This leads to inefficiency and bad decision-making, and piles up problems for the future.

Governments need to create balance sheets that properly reflect assets and liabilities, and to understand their future obligations and revenue prospects. Net Worth – both today and for the future – should be the measure of financial strength and success.

Only if this information is put at the centre of government financial decision-making can the present challenges to public finances around the world be addressed effectively, and in a way that is fair to future generations.

The good news is that there are ways to deal with these problems and make government finances more resilient and fairer to future generations.

The facts, and the solutions, are non-partisan, and so is this book. Responsible leaders of any political persuasion need to understand the issues and the tools that can enable them to deliver policy within these constraints…(More)”.

Open: A Pan-ideological Panacea, a Free Floating Signifier


Paper by Andrea Liu: “Open” is a word that originated in FOSS (the free and open-source software movement) to mean a Commons-based, non-proprietary form of computer software development (Linux, Apache) based on a decentralized, poly-hierarchical, distributed labor model. But the word “open” has now acquired an unnerving over-elasticity, a word that means so many things that at times it appears meaningless. This essay is a rhetorical analysis (if not a deconstruction) of how the term “open” functions in digital culture, the promiscuity (if not gratuitousness) with which the term “open” is utilized in the wider society, and the sometimes blatantly contradictory ideologies indiscriminately lumped together under this word…(More)”.

Data Sandboxes: Managing the Open Data Spectrum


Primer by Uma Kalkar, Sampriti Saxena, and Stefaan Verhulst: “Opening up data offers opportunities to enhance governance, elevate public and private services, empower individuals, and bolster public well-being. However, achieving the delicate balance between open data access and the responsible use of sensitive and valuable information presents complex challenges. Data sandboxes are an emerging approach to balancing these needs.

In this white paper, The GovLab seeks to answer the following questions surrounding data sandboxes: What are data sandboxes? How can data sandboxes empower decision-makers to unlock the potential of open data while maintaining the necessary safeguards for data privacy and security? Can data sandboxes help decision-makers overcome barriers to data access and promote purposeful, informed data (re-)use?

The six characteristics of a data sandbox. Image by The GovLab.

After evaluating a series of case studies, we identified the following key findings:

  • Data sandboxes present six unique characteristics that make them a strong tool for facilitating open data and data re-use. These six characteristics are: controlled; secure; multi-sectoral and collaborative; high-computing environments; temporal in nature; and adaptable and scalable.
  • Data sandboxes can be used for: pre-engagement assessment, data mesh enablement, rapid prototyping, familiarization, quality and privacy assurance, experimentation and ideation, white labeling and minimization, and maturing data insights.
  • There are many benefits to implementing data sandboxes. We found ten value propositions, such as: decreasing risk in accessing more sensitive data; enhancing data capacity; and fostering greater experimentation and innovation, to name a few.
  • When looking to implement a data sandbox, decision-makers should consider how they will attract and obtain high-quality, relevant data, keep the data fresh for accurate re-use, manage risks of data (re-)use, and translate and scale up sandbox solutions in real markets.
  • Advances in the use of the Internet of Things and Privacy Enhancing Technologies could help improve the creation, preparation, analysis, and security of data in a data sandbox. The development of these technologies, in parallel with European legislative measures such as the Digital Markets Act, the Data Act and the Data Governance Act, can improve the way data is unlocked in a data sandbox, improving trust and encouraging data (re-)use initiatives…(More)” (FULL PRIMER).

Seven routes to experimentation in policymaking: a guide to applied behavioural science methods


OECD Resource: “…offers guidelines and a visual roadmap to help policymakers choose the most fit-for-purpose evidence collection method for their specific policy challenge.

Source: Elaboration of the authors: Varazzani, C., Emmerling, T., Brusoni, S., Fontanesi, L., and Tuomaila, H. (2023), “Seven routes to experimentation: A guide to applied behavioural science methods,” OECD Working Papers on Public Governance, OECD Publishing, Paris. Note: The authors elaborated the map based on a previous map ideated, researched, and designed by Laura Castro Soto, Judith Wagner, and Torben Emmerling (sevenroutes.com).

The seven applied behavioural science methods:

  • Randomised Controlled Trials (RCTs) are experiments that can demonstrate a causal relationship between an intervention and an outcome, by randomly assigning individuals to an intervention group and a control group.
  • A/B testing tests two or more manipulations (such as variants of a webpage) to assess which performs better in terms of a specific goal or metric.
  • Difference-in-Differences is a quasi-experimental method that estimates the causal effect of an intervention by comparing changes in outcomes between an intervention group and a control group before and after the intervention.
  • Before-After studies assess the impact of an intervention or event by comparing outcomes or measurements before and after its occurrence, without a control group.
  • Longitudinal studies collect data from the same individuals or groups over an extended period to assess trends over time.
  • Correlational studies help to investigate the relationship between two or more variables to determine if they vary together (without implying causation).
  • Qualitative studies explore the underlying meanings and nuances of a phenomenon through interviews, focus group sessions, or other exploratory methods based on conversations and observations…(More)”.
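Of the methods above, difference-in-differences reduces to a single subtraction once group means are in hand: the change in the treated group minus the change in the control group, under the parallel-trends assumption. A minimal sketch with made-up numbers (not figures from the OECD paper):

```python
def did_estimate(treated_before, treated_after, control_before, control_after):
    """Difference-in-differences point estimate from four group means.
    Valid as a causal estimate only under the parallel-trends assumption:
    absent the intervention, both groups would have changed alike."""
    return (treated_after - treated_before) - (control_after - control_before)

# Toy example: mean outcomes per group and period (illustrative values).
effect = did_estimate(treated_before=10.0, treated_after=16.0,
                      control_before=9.0, control_after=11.0)
print(effect)  # (16 - 10) - (11 - 9) = 4.0
```

The subtraction of the control group's change is what separates this design from a simple before-after study, which would attribute the full 6-point change in the treated group to the intervention.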