The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence

Paper by Erik Brynjolfsson: “In 1950, Alan Turing proposed an “imitation game” as the ultimate test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions were indistinguishable from those of a human? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds.

But not all types of AI are human-like—in fact, many of the most powerful systems are very different from humans—and an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, humans retain the power to insist on a share of the value created. What’s more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers…(More)”

Artificial intelligence searches for the human touch

Madhumita Murgia at the Financial Times: “For many outside the tech world, “data” means soulless numbers. Perhaps it causes their eyes to glaze over with boredom. For computer scientists, by contrast, data means rows upon rows of rich raw matter, there to be manipulated.

Yet the siren call of “big data” has been more muted recently. There is a dawning recognition that, in tech such as artificial intelligence, “data” equals human beings.

AI-driven algorithms are increasingly impinging upon our everyday lives. They assist in making decisions across a spectrum that ranges from advertising products to diagnosing medical conditions. It’s already clear that the impact of such systems cannot be understood simply by examining the underlying code or even the data used to build them. We must look to people for answers as well.

Two recent studies do exactly that. The first is an Ipsos Mori survey of more than 19,000 people across 28 countries on public attitudes to AI; the second, a University of Tokyo study investigating Japanese people’s views on the morals and ethics of AI usage. By inviting those with lived experience to participate, both capture the public mood about the impact of artificial intelligence.

The Ipsos Mori survey found that 60 per cent of adults expect that products and services using AI will profoundly change their daily lives in the next three to five years. Latin Americans in particular think AI will trigger changes in social needs such as education and employment, while Chinese respondents were most likely to believe it would change transportation and their homes.

The geographic and demographic differences in both surveys are revealing. Globally, about half said AI technology has more benefits than drawbacks, while two-thirds felt gloomy about its impact on their individual freedom and legal rights. But figures for different countries show a significant split within this. Citizens from the “global south”, a catch-all term for non-western countries, were much more likely to “have a positive outlook on the impact of AI-powered products and services in their lives”. Large majorities in China (76 per cent) and India (68 per cent) said they trusted AI companies. In contrast, only 35 per cent in the UK, France and US expressed similar trust.

In the University of Tokyo study, researchers discovered that women, older people and those with more subject knowledge were most wary of the risks of AI, perhaps an indicator of their own experiences with these systems. The Japanese mathematician Noriko Arai has, for instance, written about the sexism and gender stereotypes encoded into “female” carer and receptionist robots in Japan.

The surveys underline the importance of AI designers recognising that we don’t all belong to one homogeneous population with the same understanding of the world. But they’re less insightful about why differences exist…(More)”.

Octagon Measurement: Public Attitudes toward AI Ethics

Paper by Yuko Ikkatai, Tilman Hartwig, Naohiro Takanashi & Hiromi M. Yokoyama: “Artificial intelligence (AI) is rapidly permeating our lives, but public attitudes toward AI ethics have only partially been investigated quantitatively. In this study, we focused on eight themes commonly shared in AI guidelines: “privacy,” “accountability,” “safety and security,” “transparency and explainability,” “fairness and non-discrimination,” “human control of technology,” “professional responsibility,” and “promotion of human values.” We investigated public attitudes toward AI ethics using four scenarios in Japan. Through an online questionnaire, we found that public disagreement/agreement with using AI varied depending on the scenario. For instance, anxiety over AI ethics was high for the scenario where AI was used with weaponry. Age was significantly related to the themes across the scenarios, whereas gender and understanding of AI were related differently depending on the themes and scenarios. While the eight themes need to be carefully explained to the participants, our Octagon measurement may be useful for understanding how people feel about the risks of the technologies, especially AI, that are rapidly permeating society and what the problems might be…(More)”.

From Poisons to Antidotes: Algorithms as Democracy Boosters

Paper by Paolo Cavaliere and Graziella Romeo: “Under what conditions can artificial intelligence contribute to political processes without undermining their legitimacy? Thanks to the ever-growing availability of data and the increasing power of decision-making algorithms, the future of political institutions is unlikely to be anything similar to what we have known throughout the last century, possibly with Parliaments deprived of their traditional authority and public decision-making processes largely unaccountable. This paper discusses and challenges these concerns by suggesting a theoretical framework under which algorithmic decision-making is compatible with democracy and, most relevantly, can offer a viable solution to counter the rise of populist rhetoric in the governance arena. Such a framework is based on three pillars: a. understanding the civic issues that are subjected to automated decision-making; b. controlling the issues that are assigned to AI; and c. evaluating and challenging the outputs of algorithmic decision-making…(More)”.

Trove of unique health data sets could help AI predict medical conditions earlier

Madhumita Murgia at the Financial Times: “…Ziad Obermeyer, a physician and machine learning scientist at the University of California, Berkeley, launched Nightingale Open Science last month — a treasure trove of unique medical data sets, each curated around an unsolved medical mystery that artificial intelligence could help to solve.

The data sets, released after the project received $2m of funding from former Google chief executive Eric Schmidt, could help to train computer algorithms to predict medical conditions earlier, triage better and save lives.

The data include 40 terabytes of medical imagery, such as X-rays, electrocardiogram waveforms and pathology specimens, from patients with a range of conditions, including high-risk breast cancer, sudden cardiac arrest, fractures and Covid-19. Each image is labelled with the patient’s medical outcomes, such as the stage of breast cancer and whether it resulted in death, or whether a Covid patient needed a ventilator.

Obermeyer has made the data sets free to use and mainly worked with hospitals in the US and Taiwan to build them over two years. He plans to expand this to Kenya and Lebanon in the coming months to reflect as much medical diversity as possible.

“Nothing exists like it,” said Obermeyer, who announced the new project in December alongside colleagues at NeurIPS, the global academic conference for artificial intelligence. “What sets this apart from anything available online is the data sets are labelled with the ‘ground truth’, which means with what really happened to a patient and not just a doctor’s opinion.”…

The Nightingale data sets were among dozens proposed this year at NeurIPS.

Other projects included a speech data set of Mandarin and eight subdialects recorded by 27,000 speakers in 34 cities in China; the largest audio data set of Covid respiratory sounds, such as breathing, coughing and voice recordings, from more than 36,000 participants to help screen for the disease; and a data set of satellite images covering the entire country of South Africa from 2006 to 2017, divided and labelled by neighbourhood, to study the social effects of spatial apartheid.

Elaine Nsoesie, a computational epidemiologist at the Boston University School of Public Health, said new types of data could also help with studying the spread of diseases in diverse locations, as people from different cultures react differently to illnesses.

She said her grandmother in Cameroon, for example, might think differently than Americans do about health. “If someone had an influenza-like illness in Cameroon, they may be looking for traditional, herbal treatments or home remedies, compared to drugs or different home remedies in the US.”

Computer scientists Serena Yeung and Joaquin Vanschoren, who proposed that NeurIPS host a dedicated track for research on new data sets, pointed out that the vast majority of the AI community still cannot find good data sets to evaluate their algorithms. This meant that AI researchers were still turning to data that were potentially “plagued with bias”, they said. “There are no good models without good data.”…(More)”.

The new machinery of government: using machine technology in administrative decision-making

Report by New South Wales Ombudsman: “There are many situations in which government agencies could use appropriately designed machine technologies to assist in the exercise of their functions, which would be compatible with lawful and appropriate conduct. Indeed, in some instances machine technology may improve aspects of good administrative conduct – such as accuracy and consistency in decision-making – as well as mitigating the risk of individual human bias.

However, if machine technology is designed and used in a way that does not accord with administrative law and associated principles of good administrative practice, then its use could constitute or involve maladministration. It could also result in legal challenges, including a risk that administrative decisions or actions may later be held by a court to have been unlawful or invalid.

The New South Wales Ombudsman was prompted to prepare this report after becoming aware of one agency (Revenue NSW) using machine technology for the performance of a discretionary statutory function (the garnisheeing of unpaid fine debts from individuals’ bank accounts), in a way that was having a significant impact on individuals, many of whom were already in situations of financial vulnerability.

The Ombudsman’s experience with Revenue NSW, and a scan of the government’s published policies on the use of artificial intelligence and other digital technologies, suggest that fundamental aspects of public law relevant to the adoption of machine technology may be receiving inadequate attention…(More)”

Empowering AI Leadership: AI C-Suite Toolkit

Toolkit by the World Economic Forum: “Artificial intelligence (AI) is one of the most important technologies for business, the economy and society, and a driving force behind the Fourth Industrial Revolution. C-suite executives need to understand its possibilities and risks. This requires a multifaceted approach and a holistic grasp of AI, spanning technical, organizational, regulatory, societal and philosophical aspects. This toolkit provides a one-stop place for corporate executives to identify and understand the multiple and complex issues that AI raises for their business and society. It provides a practical set of tools to help them comprehend AI’s impact on their roles, ask the right questions, identify the key trade-offs and make informed decisions on AI strategy, projects and implementations…(More)”.

The AI Carbon Footprint and Responsibilities of AI Scientists

Paper by Guglielmo Tamburrini: “This article examines ethical implications of the growing AI carbon footprint, focusing on the fair distribution of prospective responsibilities among groups of involved actors. First, major groups of involved actors are identified, including AI scientists, AI industry, and AI infrastructure providers, from datacenters to electrical energy suppliers. Second, responsibilities of AI scientists concerning climate warming mitigation actions are disentangled from responsibilities of other involved actors. Third, to implement these responsibilities, nudging interventions are suggested, leveraging competitive AI games that would reward research combining better system accuracy with greater computational and energy efficiency. Finally, in addition to the AI carbon footprint, it is argued that another ethical issue with a genuinely global dimension is now emerging in the AI ethics agenda. This issue concerns the threats that AI-powered cyberweapons pose to the digital command, control, and communication infrastructure of nuclear weapons systems…(More)”.

Are we witnessing the dawn of post-theory science?

Essay by Laura Spinney: “Does the advent of machine learning mean the classic methodology of hypothesise, predict and test has had its day?…

Isaac Newton apocryphally discovered his second law of motion after an apple fell on his head. Much experimentation and data analysis later, he realised there was a fundamental relationship between force, mass and acceleration. He formulated a theory to describe that relationship – one that could be expressed as an equation, F=ma – and used it to predict the behaviour of objects other than apples. His predictions turned out to be right (if not always precise enough for those who came later).
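A back-of-envelope illustration of that predictive use (the numbers here are ours, not the essay’s): for an apple of mass 0.2 kg in free fall, with gravitational acceleration g ≈ 9.8 m/s², the second law predicts the force acting on it:

$$F = ma = 0.2\,\mathrm{kg} \times 9.8\,\mathrm{m\,s^{-2}} \approx 1.96\,\mathrm{N}$$

The same equation, with different masses and accelerations, predicts the behaviour of cannonballs and planets alike – that generality is what makes it a theory rather than a description of one apple.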

Contrast how science is increasingly done today. Facebook’s machine learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.

You can’t lift a curtain and peer into the mechanism. They offer up no explanation, no set of rules for converting this into that – no theory, in a word. They just work and do so well. We witness the social effects of Facebook’s predictions daily. AlphaFold has yet to make its impact felt, but many are convinced it will change medicine.

Somewhere between Newton and Mark Zuckerberg, theory took a back seat. In 2008, Chris Anderson, the then editor-in-chief of Wired magazine, predicted its demise. So much data had accumulated, he argued, and computers were already so much better than us at finding relationships within it, that our theories were being exposed for what they were – oversimplifications of reality. Soon, the old scientific method – hypothesise, predict, test – would be relegated to the dustbin of history. We’d stop looking for the causes of things and be satisfied with correlations.

With the benefit of hindsight, we can say that what Anderson saw is true (he wasn’t alone). The complexity that this wealth of data has revealed to us cannot be captured by theory as traditionally understood. “We have leapfrogged over our ability to even write the theories that are going to be useful for description,” says computational neuroscientist Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. “We don’t even know what they would look like.”

But Anderson’s prediction of the end of theory looks to have been premature – or maybe his thesis was itself an oversimplification. There are several reasons why theory refuses to die, despite the successes of such theory-free prediction engines as Facebook and AlphaFold. All are illuminating, because they force us to ask: what’s the best way to acquire knowledge and where does science go from here?…(More)”

Our Common AI Future – A Geopolitical Analysis and Road Map for AI Driven Sustainable Development, Science and Data Diplomacy

(Open Access) Book by Francesco Lapenta: “The premise of this concise but thorough book is that the future, while uncertain and open, is not arbitrary, but the result of a complex series of competing decisions, actors, and events that began in the past, have reached a certain configuration in the present, and will continue to develop into the future. These past and present conditions constitute the basis and origin of future developments that have the potential to shape into a variety of different possible, probable, undesirable or desirable future scenarios. The realisation that these future scenarios cannot be totally arbitrary gives scope to the study of the past, indispensable to fully understand the facts, actors and forces that contributed to the formation of the present, and how certain systems, or dominant models, came to be established (I). The relative openness of future scenarios gives scope to the study of what competing forces and models might exist, their early formation, actors, and initiatives (II) and how they may act as catalysts for alternative theories, models (III and IV) and actions that can influence our future and change its path (V)…

The analyses in the book, which are loosely divided into three phases, move from the past to the present, and begin by identifying best practices and some of the key initiatives that have attempted to achieve these global collaborative goals over the last few decades. Then, moving forward, they describe a roadmap to a possible future based on already existing and developing theories, initiatives, and tools that could underpin these global collaborative efforts in the specific areas of AI and Sustainable Development. In the Road Map for AI Driven Sustainable Development, the analyses identify and stand on the shoulders of a number of past and current global initiatives that have worked for decades to lay the groundwork for this alternative evolutionary and collaborative model. The title of this book acknowledges, and encourages readers to engage with, one of these pivotal efforts: the “Our Common Future” report of the Brundtland Commission, published in 1987 by the World Commission on Environment and Development (WCED). Building on the report’s ambitious humanistic and socioeconomic vision, the analyses investigate a variety of existing and developing best practices that could lead to, or inspire, a shared scientific collaborative model for AI development. These rest on the understanding that, despite political rivalry and competition, governments should collaborate on at least two fundamental issues. One, to establish a set of global “Red Lines” to prohibit the development and use of AIs in specific applications that might pose an ethical or existential threat to humanity and the planet.
And two, to create a set of “Green Zones” for scientific diplomacy and cooperation, in order to capitalize on the opportunities that the impending AI era may represent in confronting major collective challenges such as the health and climate crises, the energy crisis, and the sustainable development goals identified in the report and developed by subsequent global initiatives…(More)”.