AI could choke on its own exhaust as it fills the web


Article by Ina Fried and Scott Rosenberg: “The internet is beginning to fill up with more and more content generated by artificial intelligence rather than human beings, posing weird new dangers both to human society and to the AI programs themselves.

What’s happening: Experts estimate that AI-generated content could account for as much as 90% of information on the internet in a few years’ time, as ChatGPT, Dall-E and similar programs spill torrents of verbiage and images into online spaces.

  • That’s happening in a world that hasn’t yet figured out how to reliably label AI-generated output and differentiate it from human-created content.

The danger to human society is the now-familiar problem of information overload and degradation.

  • AI turbocharges the ability to create mountains of new content while it undermines the ability to check that material for reliability and recycles biases and errors in the data that was used to train it.
  • There’s also widespread fear that AI could undermine the jobs of people who create content today, from artists and performers to journalists, editors and publishers. The current strike by Hollywood actors and writers underlines this risk.

The danger to AI itself is newer and stranger. A raft of recent research papers have introduced a novel lexicon of potential AI disorders that are just coming into view as the technology is more widely deployed and used.

  • “Model collapse” is researchers’ name for what happens to generative AI models, like OpenAI’s GPT-3 and GPT-4, when they’re trained using data produced by other AIs rather than human beings.
  • Feed a model enough of this “synthetic” data, and the quality of the AI’s answers can rapidly deteriorate, as the systems lock in on the most probable word choices and discard the “tail” choices that keep their output interesting (a toy simulation of this effect follows the list below).
  • “Model Autophagy Disorder,” or MAD, is what one set of researchers at Rice and Stanford universities dubbed the result of AI consuming its own products.
  • “Habsburg AI” is what another researcher earlier this year labeled the phenomenon, likening it to inbreeding: “A system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.”…(More)”.
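
The collapse dynamic is easy to demonstrate in miniature. Below is a toy simulation, not drawn from any of the papers above: the “model” is just a Gaussian fit, each generation is trained only on the previous generation’s output, and the decoder keeps only the most probable samples, mimicking the discarded “tail” choices. The fitting, sampling, and truncation choices are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(data):
    # "Training" here is just fitting a Gaussian (mean and spread).
    return data.mean(), data.std()

def generate(mean, std, n, keep=0.8):
    # Sample from the fitted model, then keep only the most probable
    # `keep` fraction -- mimicking decoders that favor likely outputs
    # and discard the low-probability "tail".
    samples = rng.normal(mean, std, n)
    cutoff = np.quantile(np.abs(samples - mean), keep)
    return samples[np.abs(samples - mean) <= cutoff]

data = rng.normal(0.0, 1.0, 10_000)       # generation 0: "human" data
for gen in range(1, 11):
    mean, std = train(data)                # train on whatever is available
    data = generate(mean, std, 10_000)     # next generation: synthetic only
    print(f"generation {gen:2d}: fitted std = {std:.3f}")
# The fitted spread shrinks every generation: output variety collapses.
```

Each pass through the loop loses a little of the distribution’s tails, so diversity decays generation by generation rather than all at once — a caricature of the mechanism the model-collapse papers describe.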

The Coming Wave


Book by Mustafa Suleyman and Michael Bhaskar: “Soon you will live surrounded by AIs. They will organise your life, operate your business, and run core government services. You will live in a world of DNA printers and quantum computers, engineered pathogens and autonomous weapons, robot assistants and abundant energy.

None of us are prepared.

As co-founder of the pioneering AI company DeepMind, part of Google, Mustafa Suleyman has been at the centre of this revolution. The coming decade, he argues, will be defined by this wave of powerful, fast-proliferating new technologies.

In The Coming Wave, Suleyman shows how these forces will create immense prosperity but also threaten the nation-state, the foundation of global order. As our fragile governments sleepwalk into disaster, we face an existential dilemma: unprecedented harms on one side and the threat of overbearing surveillance on the other…(More)”.

Regulation of Artificial Intelligence Around the World


Report by the Law Library of Congress: “…provides a list of jurisdictions in the world where legislation that specifically refers to artificial intelligence (AI) or systems utilizing AI has been adopted or proposed. Researchers of the Law Library surveyed all jurisdictions in their research portfolios to find such legislation, and those encountered have been compiled in the annexed list with citations and brief descriptions of the relevant legislation. Only adopted or proposed instruments that have legal effect are reported for national and subnational jurisdictions and the European Union (EU); guidance or policy documents that have no legal effect are not included for these jurisdictions. Major international organizations have also been surveyed, and documents adopted or proposed by these organizations that specifically refer to AI are reported in the list…(More)”.

Integrating AI into Urban Planning Workflows: Democracy Over Authoritarianism


Essay by Tyler Hinkle: “As AI tools become integrated into urban planning, a dual narrative of promise and potential pitfalls emerges. These tools offer unprecedented efficiency, creativity, and data analysis, yet if not guided by ethical considerations, they could inadvertently lead to exclusion, manipulation, and surveillance.

While AI, exemplified by tools like NovelAI, holds the potential to aggregate and synthesize public input, there’s a risk of suppressing genuine human voices in favor of algorithmic consensus. This could create a future urban landscape devoid of cultural depth and diversity, echoing historical authoritarianism.

In a potential dystopian scenario, an AI-based planning software gains access to all smart city devices, amassing data to reshape communities without consulting their residents. This data-driven transformation, devoid of human input, risks eroding the essence of community identity, autonomy, and shared decision-making. Imagine AI altering traffic flow, adjusting public transportation routes, or even redesigning public spaces based solely on data patterns, disregarding the unique needs and desires of the people who call that community home.

However, an optimistic approach guided by ethical principles can pave the way for a brighter future. Integrating AI with democratic ideals, akin to Fishkin’s deliberative democracy, can amplify citizens’ voices rather than replace them. AI-driven deliberation can become a powerful vehicle for community engagement, transforming Arnstein’s ladder of citizen participation into a true instrument of empowerment. In addition, echoing broader calls for AI alignment to be addressed holistically, alignment issues will arise as AI becomes integrated into urban planning. We must take the time to ensure AI is properly aligned so that it is a tool that helps communities rather than hurts them.

By treading carefully and embedding ethical considerations at the core, we can unleash AI’s potential to construct communities that are efficient, diverse, and resilient, while ensuring that democratic values remain paramount…(More)”.

Advancing Environmental Justice with AI


Article by Justina Nixon-Saintil: “Given its capacity to innovate climate solutions, the technology sector could provide the tools we need to understand, mitigate, and even reverse the damaging effects of global warming. In fact, addressing longstanding environmental injustices requires these companies to put the newest and most effective technologies into the hands of those on the front lines of the climate crisis.

Tools that harness the power of artificial intelligence, in particular, could offer unprecedented access to accurate information and prediction, enabling communities to learn from and adapt to climate challenges in real time. The IBM Sustainability Accelerator, which we launched in 2022, is at the forefront of this effort, supporting the development and scaling of projects such as the Deltares Aquality App, an AI-powered tool that helps farmers assess and improve water quality. As a result, farmers can grow crops more sustainably, prevent runoff pollution, and protect biodiversity.

Consider also the challenges that smallholder farmers face, such as rising costs, the difficulty of competing with larger producers that have better tools and technology, and, of course, the devastating effects of climate change on biodiversity and weather patterns. Accurate information, especially about soil conditions and water availability, can help them address these issues, but has historically been hard to obtain…(More)”.

AI and new standards promise to make scientific data more useful by making it reusable and accessible


Article by Bradley Wade Bishop: “…AI makes it highly desirable for any data to be machine-actionable – that is, usable by machines without human intervention. Now, scholars can consider machines not only as tools but also as potential autonomous data reusers and collaborators.

The key to machine-actionable data is metadata. Metadata are the descriptions scientists set for their data and may include elements such as creator, date, coverage and subject. Minimal metadata is minimally useful, but correct and complete standardized metadata makes data more useful for both people and machines.

It takes a cadre of research data managers and librarians to make machine-actionable data a reality. These information professionals work to facilitate communication between scientists and systems by ensuring the quality, completeness and consistency of shared data.

The FAIR data principles, created by a group of researchers called FORCE11 in 2016 and used across the world, provide guidance on how to enable data reuse by machines and humans. FAIR data is findable, accessible, interoperable and reusable – meaning it has robust and complete metadata.
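
To make the idea concrete, here is a minimal sketch of the kind of completeness check a research data manager’s pipeline might run. The field names creator, date, coverage, and subject come from the article; the remaining required fields and the validate helper are illustrative assumptions, not part of any published standard.

```python
# Fields named in the article (creator, date, coverage, subject) plus
# two FAIR-motivated additions (identifier, license) -- the exact set
# is an illustrative assumption, not a formal metadata standard.
REQUIRED = {"creator", "date", "coverage", "subject", "identifier", "license"}

def validate(record: dict) -> list[str]:
    """Return the required fields that are missing or empty."""
    return sorted(field for field in REQUIRED if not record.get(field))

dataset = {
    "creator": "J. Doe",
    "date": "2023-08-14",
    "subject": "soil moisture",
    "coverage": "",  # present but empty: still unusable by a machine
}

missing = validate(dataset)
if missing:
    print("Not machine-actionable yet; missing:", ", ".join(missing))
else:
    print("Metadata complete: ready for machine reuse.")
```

A check like this captures the article’s point in code form: a record is only as machine-actionable as its weakest required field, and empty values are as disqualifying as absent ones.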

In the past, I’ve studied how scientists discover and reuse data. I found that scientists tend to use mental shortcuts when they’re looking for data – for example, they may go back to familiar and trusted sources or search for certain key terms they’ve used before. Ideally, my team could build this expert decision-making process into AI while removing as many biases as possible. The automation of these mental shortcuts should reduce the time-consuming chore of locating the right data…(More)”.

How Will the State Think With the Assistance of ChatGPT? The Case of Customs as an Example of Generative Artificial Intelligence in Public Administrations


Paper by Thomas Cantens: “…discusses the implications of Generative Artificial Intelligence (GAI) in public administrations and the specific questions it raises compared to specialized and “numerical” AI, based on the example of Customs and the experience of the World Customs Organization in the field of AI and data strategy implementation in Member countries.

At the organizational level, the advantages of GAI include cost reduction through the internalization of tasks, uniformity and correctness of administrative language, access to broad knowledge, and potential paradigm shifts in fraud detection. At this level, the paper highlights three facts that distinguish GAI from specialized AI: (i) GAI has so far been less associated with decision-making processes in public administrations than specialized AI; (ii) the risks usually associated with GAI are often similar to those previously associated with specialized AI, but, while certain risks remain pertinent, others lose significance due to the constraints imposed by the inherent limitations of GAI technology itself when implemented in public administrations; (iii) the training data corpus for GAI becomes a strategic asset for public administrations, perhaps more than the algorithms themselves, which was not the case for specialized AI.

At the individual level, the paper emphasizes the “language-centric” nature of GAI, in contrast to the “number-centric” AI systems implemented within public administrations up until now. It discusses the risks of civil servants being replaced by, or enslaved to, the machines by exploring the transformative impact of GAI on the intellectual production of the State. The paper pleads for the development of critical vigilance and critical thinking as specific skills for civil servants, who are highly specialized and will have to think with the assistance of a machine that is eclectic by nature…(More)”.

Unleashing possibilities, ignoring risks: Why we need tools to manage AI’s impact on jobs


Article by Katya Klinova and Anton Korinek: “…Predicting the effects of a new technology on labor demand is difficult and involves significant uncertainty. Some would argue that, given the uncertainty, we should let the “invisible hand” of the market decide our technological destiny. But we believe that the difficulty of answering the question “Who is going to benefit and who is going to lose out?” should not serve as an excuse for never posing the question in the first place. As we emphasized, the incentives for cutting labor costs are artificially inflated. Moreover, the invisible hand theorem does not hold for technological change. Therefore, a failure to investigate the distribution of benefits and costs of AI risks inviting a future with too many “so-so” uses of AI—uses that concentrate gains while distributing the costs. Although predictions about the downstream impacts of AI systems will always involve some uncertainty, they are nonetheless useful for spotting the applications of AI that pose the greatest risks to labor early on and for channeling the potential of AI where society needs it the most.

In today’s society, the labor market serves as a primary mechanism for distributing income as well as for providing people with a sense of meaning, community, and purpose. It has been documented that job loss can lead to regional decline, a rise in “deaths of despair,” addiction and mental health problems. The path that we lay out aims to prevent abrupt job losses or declines in job quality on the national and global scale, providing an additional tool for managing the pace and shape of AI-driven labor market transformation.

Nonetheless, we do not want to rule out the possibility that humanity may eventually be much happier in a world where machines do a lot more economically valuable work. Even with our best efforts to manage the pace and shape of AI labor market disruption through regulation and worker-centric practices, we may still face a future with significantly reduced human labor demand. Should the demand for human labor decrease permanently with the advancement of AI, timely policy responses will be needed to address both the lost incomes and the lost sense of meaning and purpose. In the absence of significant efforts to distribute the gains from advanced AI more broadly, the possible devaluation of human labor would deeply impact income distribution and the sustainability of democratic institutions. While a jobless future is not guaranteed, its mere possibility and the resulting potential societal repercussions demand serious consideration. One promising proposal is an insurance policy against a dramatic decrease in the demand for human labor that automatically kicks in if the share of income received by workers declines: for example, a “seed” Universal Basic Income that starts at a very small level, remains unchanged if workers continue to prosper, but automatically rises if there is large-scale worker displacement…(More)”.
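
The trigger mechanism in that last proposal can be sketched as a simple rule. The numbers below (the baseline labor share, the seed amount, the linear scaling toward a ceiling) are hypothetical placeholders; the authors do not specify a formula.

```python
def seed_ubi(labor_share: float,
             baseline: float = 0.60,   # assumed reference labor share
             seed: float = 10.0,       # assumed tiny monthly "seed" payment
             ceiling: float = 1000.0) -> float:
    """Payment stays at the seed level while workers' share of income
    holds at or above the baseline, and rises linearly as it falls."""
    if labor_share >= baseline:
        return seed
    shortfall = (baseline - labor_share) / baseline
    return seed + shortfall * (ceiling - seed)

for share in (0.62, 0.55, 0.40, 0.20):
    print(f"labor share {share:.0%} -> monthly payment ${seed_ubi(share):,.2f}")
```

The design choice worth noticing is that the policy is indexed to an observable aggregate (the labor share) rather than to a prediction, so it costs almost nothing if displacement never materializes and scales up automatically if it does.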

The Age of Prediction: Algorithms, AI, and the Shifting Shadows of Risk


Book by Igor Tulchinsky and Christopher E. Mason: “… about two powerful, and symbiotic, trends: the rapid development and use of artificial intelligence and big data to enhance prediction, as well as the often paradoxical effects of these better predictions on our understanding of risk and the ways we live. Beginning with dramatic advances in quantitative investing and precision medicine, this book explores how predictive technology is quietly reshaping our world in fundamental ways, from crime fighting and warfare to monitoring individual health and elections.

As prediction grows more robust, it also alters the nature of the accompanying risk, setting up unintended and unexpected consequences. The Age of Prediction details how predictive certainties can bring about complacency or even an increase in risks—genomic analysis might lead to unhealthier lifestyles or a GPS might encourage less attentive driving. With greater predictability also comes a degree of mystery, and the authors ask how narrower risks might affect markets, insurance, or risk tolerance generally. Can we ever reduce risk to zero? Should we even try? This book lays an intriguing groundwork for answering these fundamental questions and maps out the latest tools and technologies that power these projections into the future, sometimes using novel, cross-disciplinary tools to map out cancer growth, people’s medical risks, and stock dynamics…(More)”.

Do People Like Algorithms? A Research Strategy


Paper by Cass R. Sunstein and Lucia Reisch: “Do people like algorithms? In this study, intended as a promissory note and a description of a research strategy, we offer the following highly preliminary findings. (1) In a simple choice between a human being and an algorithm, across diverse settings and without information about the human being or the algorithm, people in our tested groups are about equally divided in their preference. (2) When people are given a very brief account of the data on which an algorithm relies, there is a large shift in favor of the algorithm over the human being. (3) When people are given a very brief account of the experience of the relevant human being, without an account of the data on which the relevant algorithm relies, there is a moderate shift in favor of the human being. (4) When people are given both (a) a very brief account of the experience of the relevant human being and (b) a very brief account of the data on which the relevant algorithm relies, there is a large shift in favor of the algorithm over the human being. One lesson is that in the tested groups, at least one-third of people seem to have a clear preference for either a human being or an algorithm – a preference that is unaffected by brief information that seems to favor one or the other. Another lesson is that a brief account of the data on which an algorithm relies does have a significant effect on a large percentage of the tested groups, whether or not people are also given positive information about the human alternative. Across the various surveys, we do not find persistent demographic differences, with one exception: men appear to like algorithms more than women do. These initial findings are meant as proof of concept, or more accurately as a suggestion of concept, intended to inform a series of larger and more systematic studies of whether and when people prefer to rely on algorithms or human beings, and also of international and demographic differences…(More)”.