How existential risk became the biggest meme in AI


Article by Will Douglas Heaven: “Who’s afraid of the big bad bots? A lot of people, it seems. The number of high-profile names that have now made public pronouncements or signed open letters warning of the catastrophic dangers of artificial intelligence is striking.

Hundreds of scientists, business leaders, and policymakers have spoken up, from deep learning pioneers Geoffrey Hinton and Yoshua Bengio to the CEOs of top AI firms, such as Sam Altman and Demis Hassabis, to the California congressman Ted Lieu and the former president of Estonia Kersti Kaljulaid.

The starkest assertion, signed by all those figures and many more, is a 22-word statement put out two weeks ago by the Center for AI Safety (CAIS), an agenda-pushing research organization based in San Francisco. It proclaims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The wording is deliberate. “If we were going for a Rorschach-test type of statement, we would have said ‘existential risk’ because that can mean a lot of things to a lot of different people,” says CAIS director Dan Hendrycks. But they wanted to be clear: this was not about tanking the economy. “That’s why we went with ‘risk of extinction’ even though a lot of us are concerned with various other risks as well,” says Hendrycks.

We’ve been here before: AI doom follows AI hype. But this time feels different. The Overton window has shifted. What were once extreme views are now mainstream talking points, grabbing not only headlines but the attention of world leaders. “The chorus of voices raising concerns about AI has simply gotten too loud to be ignored,” says Jenna Burrell, director of research at Data and Society, an organization that studies the social implications of technology.

What’s going on? Has AI really become (more) dangerous? And why are the people who ushered in this tech now the ones raising the alarm?   

It’s true that these views split the field. Last week, Yann LeCun, chief scientist at Meta and joint recipient with Hinton and Bengio of the 2018 Turing Award, called the doomerism “preposterously ridiculous.” Aidan Gomez, CEO of the AI firm Cohere, said it was “an absurd use of our time.”

Others scoff too. “There’s no more evidence now than there was in 1950 that AI is going to pose these existential risks,” says Signal president Meredith Whittaker, who is cofounder and former director of the AI Now Institute, a research lab that studies the social and policy implications of artificial intelligence. “Ghost stories are contagious—it’s really exciting and stimulating to be afraid.”

“It is also a way to skim over everything that’s happening in the present day,” says Burrell. “It suggests that we haven’t seen real or serious harm yet.”…(More)”.

Privacy-enhancing technologies (PETs)


Report by the Information Commissioner’s Office (UK): “This guidance discusses privacy-enhancing technologies (PETs) in detail. Read it if you have questions not answered in the Guide, or if you need a deeper understanding to help you apply PETs in practice.

The first part of the guidance is aimed at DPOs (data protection officers) and those with specific data protection responsibilities in larger organisations. It focuses on how PETs can help you achieve compliance with data protection law.

The second part is intended for a more technical audience, and for DPOs who want to understand more detail about the types of PETs that are currently available. It gives a brief introduction to eight types of PETs and explains their risks and benefits…(More)”.
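To give a concrete flavour of the kind of technique the guidance introduces, here is a minimal sketch of one widely discussed PET, differential privacy: an aggregate count is released with calibrated Laplace noise so that no single individual's record can be inferred from the output. The query, records, threshold, and epsilon value below are illustrative assumptions, not examples taken from the ICO guidance:

```python
import random

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace(0, 1/epsilon) noise
    suffices. The difference of two Exponential(epsilon) draws is
    exactly Laplace(0, 1/epsilon).
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Illustrative query: how many records have a value above a threshold?
records = [{"value": v} for v in (120, 340, 280, 410, 90)]
noisy_count = dp_count(records, lambda r: r["value"] > 300, epsilon=1.0)
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy; choosing that trade-off is exactly the kind of decision the guidance is meant to help with.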

How Does Data Access Shape Science?


Paper by Abhishek Nagaraj & Matteo Tranchero: “This study examines the impact of access to confidential administrative data on the rate, direction, and policy relevance of economics research. To study this question, we exploit the progressive geographic expansion of the U.S. Census Bureau’s Federal Statistical Research Data Centers (FSRDCs). FSRDCs boost data diffusion, help empirical researchers publish more articles in top outlets, and increase citation-weighted publications. Besides direct data usage, spillovers to non-adopters also drive this effect. Further, citations to exposed researchers in policy documents increase significantly. Our findings underscore the importance of data access for scientific progress and evidence-based policy formulation…(More)”.

Local Data Spaces: Leveraging trusted research environments for secure location-based policy research


Paper by Jacob L. Macdonald, Mark A. Green, Maurizio Gibin, Simon Leech, Alex Singleton and Paul Longley: “This work explores the use of Trusted Research Environments for the secure analysis of sensitive, record-level data on local coronavirus disease-2019 (COVID-19) inequalities and economic vulnerabilities. The Local Data Spaces (LDS) project was a targeted rapid response and cross-disciplinary collaborative initiative using the Office for National Statistics’ Secure Research Service for localized comparison and analysis of health and economic outcomes over the course of the COVID-19 pandemic. Embedded researchers worked on co-producing a range of locally focused insights and reports built on secure secondary data and made appropriately open and available to the public and all local stakeholders for wider use. With secure infrastructure and overall data governance practices in place, accredited researchers were able to access a wealth of detailed data and resources to facilitate more targeted local policy analysis. Working with data within such infrastructure as part of a larger research project required advance planning and coordination to be efficient. As new and novel granular data resources become securely available (e.g., record-level administrative digital health records or consumer data), a range of local policy insights can be gained across issues of public health or local economic vitality. Many of these new forms of data however often come with a large degree of sensitivity around issues of personal identifiability and how the data is used for public-facing research and require secure and responsible use. Learning to work appropriately with secure data and research environments can open up many avenues for collaboration and analysis…(More)”.

TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI


Paper by Andrew Critch and Stuart Russell: “While several recent works have identified societal-scale and extinction-level risks to humanity arising from artificial intelligence, few have attempted an exhaustive taxonomy of such risks. Many exhaustive taxonomies are possible, and some are useful — particularly if they reveal new risks or practical approaches to safety. This paper explores a taxonomy based on accountability: whose actions lead to the risk, are the actors unified, and are they deliberate? We also provide stories to illustrate how the various risk types could each play out, including risks arising from unanticipated interactions of many AI systems, as well as risks from deliberate misuse, for which combined technical and policy solutions are indicated…(More)”.
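The accountability axes the abstract names (who acts, whether the actors are unified, and whether the harm is deliberate) lend themselves to a small classification sketch. The type names, field values, and output strings below are an illustrative paraphrase, not the paper's own labels:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskProfile:
    # Who takes the actions that lead to the risk?
    actor: str            # e.g. "developer", "deployer", "many systems"
    unified: bool         # do the actors act under one coordinated plan?
    deliberate: bool      # is the harmful outcome intended?

def classify(profile: RiskProfile) -> str:
    """Map an accountability profile to a coarse risk description."""
    if profile.deliberate:
        return "deliberate misuse: combined technical and policy responses"
    if not profile.unified and profile.actor == "many systems":
        return "unanticipated interaction of many AI systems"
    return "unintended harm from a unified actor"

example = classify(RiskProfile(actor="many systems",
                               unified=False, deliberate=False))
```

The point of such a structure is that each combination of answers picks out a distinct risk story, which is how the paper's taxonomy stays exhaustive.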

An algorithm intended to reduce poverty in Jordan disqualifies people in need


Article by Tate Ryan-Mosley: “An algorithm funded by the World Bank to determine which families should get financial assistance in Jordan likely excludes people who should qualify, according to an investigation published this morning by Human Rights Watch. 

The algorithmic system, called Takaful, ranks families applying for aid from least poor to poorest using a secret calculus that assigns weights to 57 socioeconomic indicators. Applicants say that the calculus is not reflective of reality, however, and oversimplifies people’s economic situation, sometimes inaccurately or unfairly. Takaful has cost over $1 billion, and the World Bank is funding similar projects in eight other countries in the Middle East and Africa. 

Human Rights Watch identified several fundamental problems with the algorithmic system that resulted in bias and inaccuracies. Applicants are asked how much water and electricity they consume, for example, as two of the indicators that feed into the ranking system. The report’s authors conclude that these are not necessarily reliable indicators of poverty. Some families interviewed believed the fact that they owned a car affected their ranking, even if the car was old and necessary for transportation to work. 
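The ranking approach the report describes — a weighted sum over socioeconomic indicators, sorted from least poor to poorest — can be sketched as follows. The indicator names, weights, and family values are invented for illustration; the actual 57 indicators and their weights are not public:

```python
def poverty_score(indicators: dict[str, float],
                  weights: dict[str, float]) -> float:
    """Weighted sum of indicator values; a higher score ranks as poorer."""
    return sum(weights[name] * value for name, value in indicators.items())

# Hypothetical indicators and weights for illustration only.
# Negative weights treat consumption and ownership as signs of wealth.
weights = {"water_use": -0.3, "electricity_use": -0.4, "owns_car": -0.5}
family_a = {"water_use": 2.0, "electricity_use": 1.5, "owns_car": 1.0}
family_b = {"water_use": 0.5, "electricity_use": 0.4, "owns_car": 0.0}

# Families sorted poorest-first by score.
ranked = sorted([("A", family_a), ("B", family_b)],
                key=lambda kv: poverty_score(kv[1], weights), reverse=True)
```

Note how the sketch reproduces the critique: family A is ranked less poor simply for owning a car, regardless of whether the car is old and needed to get to work.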

The report reads, “This veneer of statistical objectivity masks a more complicated reality: the economic pressures that people endure and the ways they struggle to get by are frequently invisible to the algorithm.”…(More)”.

Understanding the relationship between informal public transport and economic vulnerability in Dar es Salaam


WhereIsMyTransport Case Study: “In most African cities, formal public transport—such as government-run or funded bus and rail networks—has limited coverage and fails to meet overall mobility demand. As African cities grow and densify, planners are questioning whether these networks can serve the economically vulnerable communities who benefit most from public transport access to opportunities and services.

In the absence of formal public transport or private vehicles, low-income commuters have long relied on informal public transport—think tro tros in Accra, boda bodas in Kampala, danfos in Lagos—to meet their mobility needs. Yet there is little reliable data on the relationship between informal public transport and economic vulnerability in and around Africa’s cities, making it challenging to understand:

  • Which communities are the most vulnerable?
  • What opportunities and services do people typically attempt to access?
  • What routes do informal public transport operators follow?
  • What are the occupation and gender-related impacts?

Addressing these questions benefits from combining data assets. For example, pairing data on informal public transport coverage with data on the socioeconomic characteristics of the communities that rely on this type of transport…(More)”.
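Pairing the two data assets could look like a simple join of route-coverage records with community-level socioeconomic indicators on a shared area identifier. The field names, keys, and figures below are purely illustrative, not drawn from the case study:

```python
# Hypothetical records: informal transport coverage and community
# profiles, keyed by a shared ward identifier.
coverage = {
    "ward_01": {"routes": 12, "avg_wait_min": 8},
    "ward_02": {"routes": 2, "avg_wait_min": 35},
}
communities = {
    "ward_01": {"median_income": 410, "pct_informal_employment": 0.35},
    "ward_02": {"median_income": 180, "pct_informal_employment": 0.70},
}

# Join on the shared key, then flag areas that are both poorly served
# by informal transport and economically vulnerable.
paired = {
    ward: {**coverage[ward], **communities[ward]}
    for ward in coverage.keys() & communities.keys()
}
underserved = [
    ward for ward, d in paired.items()
    if d["routes"] < 5 and d["median_income"] < 250
]
```

Even this toy join makes the case study's point: neither dataset alone identifies which vulnerable communities lack service, but the combination does.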

Data for Environmentally Sustainable and Inclusive Urban Mobility


Report by Anusha Chitturi and Robert Puentes: “Data on passenger movements, vehicle fleets, fare payments, and transportation infrastructure has immense potential to help cities better plan, regulate, and enforce their urban mobility systems. This report specifically examines the opportunities that exist for U.S. cities to use mobility data – made available through adoption of new mobility services and data-based technologies – to improve transportation’s environmental sustainability, accessibility, and equity. Cities are advancing transportation sustainability in several ways, including making trips more efficient, minimizing the use of single-occupancy vehicles, prioritizing sustainable modes of transport, and enabling a transition to zero and low-emission fuels. They are improving accessibility and equity by planning for and offering a range of transportation services that serve all people, irrespective of their physical abilities, economic power, and geographic location.
Data sharing is an important instrument for furthering these mobility outcomes. Ridership data from ride-hailing companies, for example, can inform cities about whether they are replacing sustainable transport trips, resulting in an increase in congestion and emissions; such data can further be used for designing targeted emission-reduction programs such as a congestion fee program, or for planning high-quality sustainable transport services to reduce car trips. Similarly, mobility data can be used to plan on-demand services in certain transit-poor neighborhoods, where fixed transit services don’t make financial sense due to low urban densities. Sharing mobility data, however, often comes with certain risks…(More)”.

Critical factors influencing information disclosure in public organisations


Paper by Francisca Tejedo-Romero & Joaquim Filipe Ferraz Esteves Araujo: “Open government initiatives around the world and the passage of freedom of information laws are opening public organisations through information disclosure to ensure transparency and encourage citizen participation and engagement. At the municipal level, social, economic, and political factors are found to account for this trend. However, the findings on this issue are inconclusive and may differ from country to country. This paper contributes to this discussion by analysing a unitary country where the same set of laws and rules governs the constituent municipalities. It seeks to identify critical factors that affect the disclosure of municipal information. For this purpose, a longitudinal study was carried out over a period of 4 years using panel data methodology. The main conclusions seem to point to municipalities’ intention to increase the dissemination of information to reduce low levels of voter turnout and increase civic involvement and political participation. Municipalities governed by leftist parties and those that have high indebtedness are most likely to disclose information. Additionally, internet access has created new opportunities for citizens to access information, which exerts pressure for greater dissemination of information by municipalities. These findings are important to practitioners because they indicate the need to improve citizens’ access to the Internet and maintain information disclosure strategies beyond election periods…(More)”.

The A.I. Revolution Will Change Work. Nobody Agrees How.


Sarah Kessler in The New York Times: “In 2013, researchers at Oxford University published a startling number about the future of work: 47 percent of all United States jobs, they estimated, were “at risk” of automation “over some unspecified number of years, perhaps a decade or two.”

But a decade later, unemployment in the country is at record low levels. The tsunami of grim headlines back then — like “The Rich and Their Robots Are About to Make Half the World’s Jobs Disappear” — looks wildly off the mark.

But the study’s authors say they didn’t actually mean to suggest doomsday was near. Instead, they were trying to describe what technology was capable of.

It was the first stab at what has become a long-running thought experiment, with think tanks, corporate research groups and economists publishing paper after paper to pinpoint how much work is “affected by” or “exposed to” technology.

In other words: If the cost of the tools weren’t a factor, and the only goal was to automate as much human labor as possible, how much work could technology take over?

When the Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, were conducting their study, IBM Watson, a question-answering system powered by artificial intelligence, had just shocked the world by winning “Jeopardy!” Test versions of autonomous vehicles were circling roads for the first time. Now, a new wave of studies follows the rise of tools that use generative A.I.

In March, Goldman Sachs estimated that the technology behind popular A.I. tools such as DALL-E and ChatGPT could automate the equivalent of 300 million full-time jobs. Researchers at OpenAI, the maker of those tools, and the University of Pennsylvania found that 80 percent of the U.S. work force could see an effect on at least 10 percent of their tasks.

“There’s tremendous uncertainty,” said David Autor, a professor of economics at the Massachusetts Institute of Technology, who has been studying technological change and the labor market for more than 20 years. “And people want to provide those answers.”

But what exactly does it mean to say that, for instance, the equivalent of 300 million full-time jobs could be affected by A.I.?

It depends, Mr. Autor said. “Affected could mean made better, made worse, disappeared, doubled.”…(More)”.