How existential risk became the biggest meme in AI


Article by Will Douglas Heaven: “Who’s afraid of the big bad bots? A lot of people, it seems. The number of high-profile names that have now made public pronouncements or signed open letters warning of the catastrophic dangers of artificial intelligence is striking.

Hundreds of scientists, business leaders, and policymakers have spoken up, from deep learning pioneers Geoffrey Hinton and Yoshua Bengio to the CEOs of top AI firms, such as Sam Altman and Demis Hassabis, to the California congressman Ted Lieu and the former president of Estonia Kersti Kaljulaid.

The starkest assertion, signed by all those figures and many more, is a 22-word statement put out two weeks ago by the Center for AI Safety (CAIS), an agenda-pushing research organization based in San Francisco. It proclaims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The wording is deliberate. “If we were going for a Rorschach-test type of statement, we would have said ‘existential risk’ because that can mean a lot of things to a lot of different people,” says CAIS director Dan Hendrycks. But they wanted to be clear: this was not about tanking the economy. “That’s why we went with ‘risk of extinction’ even though a lot of us are concerned with various other risks as well,” says Hendrycks.

We’ve been here before: AI doom follows AI hype. But this time feels different. The Overton window has shifted. What were once extreme views are now mainstream talking points, grabbing not only headlines but the attention of world leaders. “The chorus of voices raising concerns about AI has simply gotten too loud to be ignored,” says Jenna Burrell, director of research at Data and Society, an organization that studies the social implications of technology.

What’s going on? Has AI really become (more) dangerous? And why are the people who ushered in this tech now the ones raising the alarm?   

It’s true that these views split the field. Last week, Yann LeCun, chief scientist at Meta and joint recipient with Hinton and Bengio of the 2018 Turing Award, called the doomerism “preposterously ridiculous.” Aidan Gomez, CEO of the AI firm Cohere, said it was “an absurd use of our time.”

Others scoff too. “There’s no more evidence now than there was in 1950 that AI is going to pose these existential risks,” says Signal president Meredith Whittaker, who is cofounder and former director of the AI Now Institute, a research lab that studies the social and policy implications of artificial intelligence. “Ghost stories are contagious—it’s really exciting and stimulating to be afraid.”

“It is also a way to skim over everything that’s happening in the present day,” says Burrell. “It suggests that we haven’t seen real or serious harm yet.”…(More)”.

Collective Intelligence to Co-Create the Cities of the Future: Proposal of an Evaluation Tool for Citizen Initiatives


Paper by Fanny E. Berigüete, Inma Rodriguez Cantalapiedra, Mariana Palumbo and Torsten Masseck: “Citizen initiatives (CIs), through their activities, have become a mechanism to promote empowerment, social inclusion, change of habits, and the transformation of neighbourhoods, influencing their sustainability, but how can this impact be measured? Currently, there are no tools that directly assess this impact, so our research seeks to describe and evaluate the contributions of CIs in a holistic and comprehensive way, respecting the versatility of their activities. This research proposes an evaluation system of 33 indicators distributed in 3 blocks: social cohesion, urban metabolism, and transformation potential, which can be applied through a questionnaire. This research applied different methods such as desk study, literature review, and case study analysis. The evaluation of case studies showed that the developed evaluation system well reflects the individual contribution of CIs to sensitive and important aspects of neighbourhoods, with a lesser or greater impact according to the activities they carry out and the holistic conception they have of sustainability. Further implementation and validation of the system in different contexts are needed, but it is a novel and interesting proposal that will favour decision making for the promotion of one or another type of initiative according to its benefits and the reality and needs of the neighbourhood…(More)”.

TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI


Paper by Andrew Critch and Stuart Russell: “While several recent works have identified societal-scale and extinction-level risks to humanity arising from artificial intelligence, few have attempted an exhaustive taxonomy of such risks. Many exhaustive taxonomies are possible, and some are useful — particularly if they reveal new risks or practical approaches to safety. This paper explores a taxonomy based on accountability: whose actions lead to the risk, are the actors unified, and are they deliberate? We also provide stories to illustrate how the various risk types could each play out, including risks arising from unanticipated interactions of many AI systems, as well as risks from deliberate misuse, for which combined technical and policy solutions are indicated…(More)”.

The A.I. Revolution Will Change Work. Nobody Agrees How.


Sarah Kessler in The New York Times: “In 2013, researchers at Oxford University published a startling number about the future of work: 47 percent of all United States jobs, they estimated, were “at risk” of automation “over some unspecified number of years, perhaps a decade or two.”

But a decade later, unemployment in the country is at record low levels. The tsunami of grim headlines back then — like “The Rich and Their Robots Are About to Make Half the World’s Jobs Disappear” — looks wildly off the mark.

But the study’s authors say they didn’t actually mean to suggest doomsday was near. Instead, they were trying to describe what technology was capable of.

It was the first stab at what has become a long-running thought experiment, with think tanks, corporate research groups and economists publishing paper after paper to pinpoint how much work is “affected by” or “exposed to” technology.

In other words: If the cost of the tools weren’t a factor, and the only goal was to automate as much human labor as possible, how much work could technology take over?

When the Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, were conducting their study, IBM Watson, a question-answering system powered by artificial intelligence, had just shocked the world by winning “Jeopardy!” Test versions of autonomous vehicles were circling roads for the first time. Now, a new wave of studies follows the rise of tools that use generative A.I.

In March, Goldman Sachs estimated that the technology behind popular A.I. tools such as DALL-E and ChatGPT could automate the equivalent of 300 million full-time jobs. Researchers at OpenAI, the maker of those tools, and the University of Pennsylvania found that 80 percent of the U.S. work force could see an effect on at least 10 percent of their tasks.

“There’s tremendous uncertainty,” said David Autor, a professor of economics at the Massachusetts Institute of Technology, who has been studying technological change and the labor market for more than 20 years. “And people want to provide those answers.”

But what exactly does it mean to say that, for instance, the equivalent of 300 million full-time jobs could be affected by A.I.?

It depends, Mr. Autor said. “Affected could mean made better, made worse, disappeared, doubled.”…(More)”.

Politicians love to appeal to common sense – but does it trump expertise?


Essay by Magda Osman: “Politicians love to talk about the benefits of “common sense” – often by pitting it against the words of “experts and elites”. But what is common sense? Why do politicians love it so much? And is there any evidence that it ever trumps expertise? Psychology provides a clue.

We often view common sense as an authority of collective knowledge that is universal and constant, unlike expertise. By appealing to the common sense of your listeners, you therefore end up on their side, and squarely against the side of the “experts”. But this argument, like an old sock, is full of holes.

Experts have gained knowledge and experience in a given speciality, in which case politicians are experts as well. This means a false dichotomy is created between “them” (let’s say scientific experts) and “us” (non-expert mouthpieces of the people).

Common sense is broadly defined in research as a shared set of beliefs and approaches to thinking about the world. For example, common sense is often used to justify that what we believe is right or wrong, without coming up with evidence.

But common sense isn’t independent of scientific and technological discoveries. Common sense versus scientific beliefs is therefore also a false dichotomy. Our “common” beliefs are informed by, and inform, scientific and technological discoveries…

The idea that common sense is universal and self-evident because it reflects the collective wisdom of experience – and so can be contrasted with scientific discoveries that are constantly changing and updated – is also false. And the same goes for the argument that non-experts tend to view the world the same way through shared beliefs, while scientists never seem to agree on anything.

Just as scientific discoveries change, common sense beliefs change over time and across cultures. They can also be contradictory: we are told “quit while you are ahead” but also “winners never quit”, and “better safe than sorry” but “nothing ventured nothing gained”…(More)”

From Ethics to Law: Why, When, and How to Regulate AI


Paper by Simon Chesterman: “The past decade has seen a proliferation of guides, frameworks, and principles put forward by states, industry, inter- and non-governmental organizations to address matters of AI ethics. These diverse efforts have led to a broad consensus on what norms might govern AI. Far less energy has gone into determining how these might be implemented — or if they are even necessary. This chapter focuses on the intersection of ethics and law, in particular discussing why regulation is necessary, when regulatory changes should be made, and how it might work in practice. Two specific areas for law reform address the weaponization and victimization of AI. Regulations aimed at general AI are particularly difficult in that they confront many ‘unknown unknowns’, but the threat of uncontrollable or uncontainable AI became more widely discussed with the spread of large language models such as ChatGPT in 2023. Additionally, however, there will be a need to prohibit some conduct in which increasingly lifelike machines are the victims — comparable, perhaps, to animal cruelty laws…(More)”

“My sex-related data is more sensitive than my financial data and I want the same level of security and privacy”: User Risk Perceptions and Protective Actions in Female-oriented Technologies


Paper by Maryam Mehrnezhad and Teresa Almeida: “The digitalization of the reproductive body has engaged myriad cutting-edge technologies in supporting people to know and tackle their intimate health. Generally understood as female technologies (aka female-oriented technologies or ‘FemTech’), these products and systems collect a wide range of intimate data which are processed, transferred, saved, and shared with other parties. In this paper, we explore how the “data-hungry” nature of this industry and the lack of proper safeguarding mechanisms, standards, and regulations for vulnerable data can lead to complex harms or faint agentic potential. We adopted mixed methods in exploring users’ understanding of the security and privacy (SP) of these technologies. Our findings show that while users can speculate about the range of harms and risks associated with these technologies, they are not equipped with the technological skills to protect themselves against such risks. We discuss a number of approaches, including participatory threat modelling and SP by design, in the context of this work and conclude that such approaches are critical to protect users in these sensitive systems…(More)”.

Atlas of the Senseable City


Book by Antoine Picon and Carlo Ratti: “What have smart technologies taught us about cities? What lessons can we learn from today’s urbanites to make better places to live? Antoine Picon and Carlo Ratti argue that the answers are in the maps we make. For centuries, we have relied on maps to navigate the enormity of the city. Now, as the physical world combines with the digital world, we need a new generation of maps to navigate the city of tomorrow. Pervasive sensors allow anyone to visualize cities in entirely new ways—ebbs and flows of pollution, traffic, and internet connectivity.
 
This book explores how the growth of digital mapping, spurred by sensing technologies, is affecting cities and daily lives. It examines how new cartographic possibilities aid urban planners, technicians, politicians, and administrators; how digitally mapped cities could reveal ways to make cities smarter and more efficient; how monitoring urbanites has political and social repercussions; and how the proliferation of open-source maps and collaborative platforms can aid activists and vulnerable populations. With its beautiful, accessible presentation of cutting-edge research, this book makes it easy for readers to understand the stakes of the new information age—and appreciate the timeless power of the city…(More)”

Opportunities and Challenges in Reusing Public Genomics Data


Introduction to Special Issue by Mahmoud Ahmed and Deok Ryong Kim: “Genomics data is accumulating in public repositories at an ever-increasing rate. Large consortia and individual labs continue to probe animal and plant tissue and cell cultures, generating vast amounts of data using established and novel technologies. The human genome project kickstarted the era of systems biology (1, 2). Ambitious projects followed to characterize non-coding regions, variations across species, and between populations (3, 4, 5). The cost reduction allowed individual labs to generate numerous smaller high-throughput datasets (6, 7, 8, 9). As a result, the scientific community should consider strategies to overcome the challenges and maximize the opportunities to use these resources for research and the public good. In this collection, we elicit opinions and perspectives from researchers in the field on the opportunities and challenges of reusing public genomics data. The articles in this research topic converge on the need for data sharing while acknowledging the challenges that come with it. Two articles define and highlight the distinction between data and metadata. The characteristics of each should be considered when designing optimal sharing strategies. One article focuses on the specific issues surrounding the sharing of genomics interval data, and another on balancing the need to protect pediatric rights with the benefits of sharing.

The definition of what counts as data is itself a moving target. As technology advances, data can be produced in more ways and from novel sources. Events of recent years have highlighted this fact. “The pandemic has underscored the urgent need to recognize health data as a global public good with mechanisms to facilitate rapid data sharing and governance,” write Schwalbe and colleagues (2020). The challenges facing these mechanisms could be technical, economic, legal, or political. Defining what data is, and of which type, is therefore necessary to overcome these barriers, because “the mechanisms to facilitate data sharing are often specific to data types.” Unlike genomics data, which has established platforms, sharing clinical data “remains in a nascent phase.” The article by Patrinos and colleagues (2022) considers the strong ethical imperative for protecting pediatric data while acknowledging the need to avoid overprotection. The authors discuss a model of consent for pediatric research that can balance the need to protect participants with the goal of generating health benefits.

Xue et al. (2023) focus on reusing genomic interval data. Identifying and retrieving the relevant data can be difficult, given the state of the repositories and the size of these data. Similarly, integrating interval data into reference genomes can be hard. The authors call for standardized formats for the data and the metadata to facilitate reuse.

Sheffield and colleagues (2023) highlight the distinction between data and metadata. Metadata describes the characteristics of the sample, experiment, and analysis. The nature of this information differs from that of the primary data in size, source, and ways of use. Therefore, an optimal strategy should consider these specific attributes when sharing metadata. Challenges specific to sharing metadata include the need for standardized terms and formats to make it portable and easier to find.

We go beyond the reuse issue to highlight two other aspects that might increase the utility of available public data in Ahmed et al. (2023). These are curation and integration…(More)”.

The Power and Perils of the “Artificial Hand”: Considering AI Through the Ideas of Adam Smith


Speech by Gita Gopinath: “…Nowadays, it’s almost impossible to talk about economics without invoking Adam Smith. We take for granted many of his concepts, such as the division of labor and the invisible hand. Yet, at the time when he was writing, these ideas went against the grain. He wasn’t afraid to push boundaries and question established thinking.

Smith grappled with how to advance well-being and prosperity at a time of great change. The Industrial Revolution was ushering in new technologies that would revolutionize the nature of work, create winners and losers, and potentially transform society. But their impact wasn’t yet clear. The Wealth of Nations, for example, was published the same year James Watt unveiled his steam engine.

Today, we find ourselves at a similar inflection point, where a new technology, generative artificial intelligence, could change our lives in spectacular—and possibly existential—ways. It could even redefine what it means to be human.

Given the parallels between Adam Smith’s time and ours, I’d like to propose a thought experiment: If he were alive today, how would Adam Smith have responded to the emergence of this new “artificial hand”?…(More)”.