Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous


Essay by Henry Farrell, Abraham Newman, and Jeremy Wallace: “In policy circles, discussions about artificial intelligence invariably pit China against the United States in a race for technological supremacy. If the key resource is data, then China, with its billion-plus citizens and lax protections against state surveillance, seems destined to win. Kai-Fu Lee, a famous computer scientist, has claimed that data is the new oil, and China the new OPEC. If superior technology is what provides the edge, however, then the United States, with its world-class university system and talented workforce, still has a chance to come out ahead. In either case, pundits assume that superiority in AI will lead naturally to broader economic and military superiority.

But thinking about AI in terms of a race for dominance misses the more fundamental ways in which AI is transforming global politics. AI will not transform the rivalry between powers so much as it will transform the rivals themselves. The United States is a democracy, whereas China is an authoritarian regime, and machine learning challenges each political system in its own way. The challenges to democracies such as the United States are all too visible. Machine learning may increase polarization—reengineering the online world to promote political division. It will certainly increase disinformation in the future, generating convincing fake speech at scale. The challenges to autocracies are more subtle but possibly more corrosive. Just as machine learning reflects and reinforces the divisions of democracy, it may confound autocracies, creating a false appearance of consensus and concealing underlying societal fissures until it is too late.

Early pioneers of AI, including the political scientist Herbert Simon, realized that AI technology has more in common with markets, bureaucracies, and political institutions than with simple engineering applications. Another pioneer of artificial intelligence, Norbert Wiener, described AI as a “cybernetic” system—one that can respond and adapt to feedback. Neither Simon nor Wiener anticipated how machine learning would dominate AI, but its evolution fits with their way of thinking. Facebook and Google use machine learning as the analytic engine of a self-correcting system, which continually updates its understanding of the data depending on whether its predictions succeed or fail. It is this loop between statistical analysis and feedback from the environment that has made machine learning such a formidable force…(More)”
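The self-correcting loop the authors describe can be sketched in a few lines of Python. This is a hypothetical illustration, not any platform's actual code: a toy recommender keeps updating its estimate of each item's engagement rate as feedback on its own predictions arrives.

```python
import random

class FeedbackLoopRecommender:
    """Toy cybernetic system: predict which item will engage a user,
    then update that prediction from whether it succeeded or failed.
    Item names and the update rule are illustrative assumptions."""

    def __init__(self, items, explore_rate=0.1, seed=0):
        self.items = list(items)
        self.explore_rate = explore_rate
        self.rng = random.Random(seed)
        self.shows = {i: 0 for i in self.items}   # times each item was recommended
        self.clicks = {i: 0 for i in self.items}  # positive feedback received

    def estimate(self, item):
        # Unseen items get an optimistic estimate so every item is tried.
        if self.shows[item] == 0:
            return 1.0
        return self.clicks[item] / self.shows[item]

    def recommend(self):
        # Occasionally explore; otherwise exploit the best current estimate.
        if self.rng.random() < self.explore_rate:
            return self.rng.choice(self.items)
        return max(self.items, key=self.estimate)

    def feedback(self, item, clicked):
        # Environmental feedback closes the loop: the outcome of the
        # prediction becomes new data for the next prediction.
        self.shows[item] += 1
        self.clicks[item] += int(clicked)
```

Run against a simulated audience, the estimates drift toward whatever actually gets clicked: statistical analysis continually corrected by feedback from the environment, which is the loop described above.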

Voices in the Code: A Story about People, Their Values, and the Algorithm They Made


Book by David G. Robinson: “Algorithms–rules written into software–shape key moments in our lives: from who gets hired or admitted to a top public school, to who should go to jail or receive scarce public benefits. Today, high-stakes software is rarely open to scrutiny, but its code navigates moral questions: Which of a person’s traits are fair to consider as part of a job application? Who deserves priority in accessing scarce public resources, whether those are school seats, housing, or medicine? When someone first appears in a courtroom, how should their freedom be weighed against the risks they might pose to others?

Policymakers and the public often find algorithms to be complex, opaque and intimidating—and it can be tempting to pretend that hard moral questions have simple technological answers. But that approach leaves technical experts holding the moral microphone, and it stops people who lack technical expertise from making their voices heard. Today, policymakers and scholars are seeking better ways to share the moral decision-making within high-stakes software — exploring ideas like public participation, transparency, forecasting, and algorithmic audits. But there are few real examples of those techniques in use.

In Voices in the Code, scholar David G. Robinson tells the story of how one community built a life-and-death algorithm in a relatively inclusive, accountable way. Between 2004 and 2014, a diverse group of patients, surgeons, clinicians, data scientists, public officials and advocates collaborated and compromised to build a new transplant matching algorithm – a system to offer donated kidneys to particular patients from the U.S. national waiting list…(More)”.

China May Be Chasing Impossible Dream by Trying to Harness Internet Algorithms


Article by Karen Hao: “China’s powerful cyberspace regulator has taken the first step in a pioneering—and uncertain—government effort to rein in the automated systems that shape the internet.

Earlier this month, the Cyberspace Administration of China published summaries of 30 core algorithms belonging to two dozen of the country’s most influential internet companies, including TikTok owner ByteDance Ltd., e-commerce behemoth Alibaba Group Holding Ltd. and Tencent Holdings Ltd., owner of China’s ubiquitous WeChat super app.

The milestone marks the first systematic effort by a regulator to compel internet companies to reveal information about the technologies powering their platforms, which have shown the capacity to radically alter everything from pop culture to politics. It also puts Beijing on a path that some technology experts say few governments, if any, are equipped to handle….

One important question the effort raises, algorithm experts say, is whether direct government regulation of algorithms is practically possible.

The majority of today’s internet platform algorithms are based on a technology called machine learning, which automates decisions such as ad targeting by learning to predict user behaviors from vast repositories of data. Unlike traditional algorithms, which contain explicit rules coded by engineers, most machine-learning systems are black boxes, making it hard to decipher their logic or anticipate the consequences of their use.
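The distinction drawn here can be made concrete with a small, entirely hypothetical sketch: a traditional algorithm states its rule explicitly, while a machine-learning model arrives at its rule by fitting parameters to data — and those fitted numbers, not any human-readable logic, are all a regulator would find inside.

```python
import math

# Traditional algorithm: the rule is explicit and auditable.
def rule_based_ad_filter(user_age):
    return "toy_ad" if user_age < 13 else "generic_ad"

# Machine learning: the rule emerges from data. A one-feature logistic
# model, p(click) = sigmoid(w * age/100 + b), fit by gradient descent.
# The feature, data, and learning rate are illustrative assumptions.
def train_click_model(ages, clicked, lr=0.5, epochs=1000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for age, y in zip(ages, clicked):
            x = age / 100.0                       # normalized feature
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted click probability
            w -= lr * (p - y) * x                 # gradient of logistic loss
            b -= lr * (p - y)
    return w, b
```

Even in this one-parameter case, the trained weights only acquire meaning through the data they were fit on; scaled up to real recommendation systems with millions of parameters, the black-box problem the article describes follows directly.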

Beijing’s interest in regulating algorithms started in 2020, after TikTok sought an American buyer to avoid being banned in the U.S., according to people familiar with the government’s thinking. When several bidders for the short-video platform lost interest after Chinese regulators announced new export controls on information-recommendation technology, it tipped off Beijing to the importance of algorithms, the people said…(More)”.

Public preferences for governing AI technology: Comparative evidence


Paper by Soenke Ehret: “Citizens’ attitudes concerning aspects of AI such as transparency, privacy, and discrimination have received considerable attention. However, it is an open question to what extent economic consequences affect preferences for public policies governing AI. When does the public demand imposing restrictions on – or even prohibiting – emerging AI technologies? Do average citizens’ preferences depend causally on normative and economic concerns or only on one of these causes? If both, how might economic risks and opportunities interact with assessments based on normative factors? And to what extent does the balance between the two kinds of concerns vary by context? I answer these questions using a comparative conjoint survey experiment conducted in Germany, the United Kingdom, India, Chile, and China. The data analysis suggests strong effects regarding AI systems’ economic and normative attributes. Moreover, I find considerable cross-country variation in normative preferences regarding the prohibition of AI systems vis-à-vis economic concerns…(More)”.

Artificial intelligence was supposed to transform health care. It hasn’t.


Article by Ben Leonard and Ruth Reader: “Artificial intelligence is spreading into health care, often as software or a computer program capable of learning from large amounts of data and making predictions to guide care or help patients.

Investors see health care’s future as inextricably linked with artificial intelligence. That’s obvious from the cash pouring into AI-enabled digital health startups, including more than $3 billion in the first half of 2022 alone and nearly $10 billion in 2021, according to a Rock Health investment analysis commissioned by POLITICO.

And no wonder, considering the bold predictions technologists have made. At a conference in 2016, Geoffrey Hinton, British cognitive psychologist and “godfather” of AI, said radiologists would soon go the way of typesetters and bank tellers: “People should stop training radiologists now. It’s just completely obvious that, within five years, deep learning is going to do better.”

But more than five years since Hinton’s forecast, radiologists are still training to read image scans. Instead of replacing doctors, health system administrators now see AI as a tool clinicians will use to improve everything from their diagnoses to billing practices. AI hasn’t lived up to the hype, medical experts said, because health systems’ infrastructure isn’t ready for it yet. And the government is just beginning to grapple with its regulatory role.

“Companies come in promising the world and often don’t deliver,” said Bob Wachter, head of the department of medicine at the University of California, San Francisco. “When I look for examples of … true AI and machine learning that’s really making a difference, they’re pretty few and far between. It’s pretty underwhelming.”

Administrators say algorithms — the software that processes data — from outside companies don’t always work as advertised because each health system has its own technological framework. So hospitals are building out engineering teams and developing artificial intelligence and other technology tailored to their own needs.

But it’s slow going. Research based on job postings shows health care lagging behind every industry except construction in adopting AI…(More)”.

Designing Human-Centric AI Experiences


Book by Akshay Kore: “User experience (UX) design practices have seen a fundamental shift as more and more software products incorporate machine learning (ML) components and artificial intelligence (AI) algorithms at their core. This book probes UX design’s role in making technologies inclusive and in enabling user collaboration with AI.

AI/ML-based systems have changed traditional UX design. Instead of programming a method to perform a specific action, creators of these systems provide data and nurture them to curate outcomes based on inputs. These systems are dynamic: they change over time, yet their user experience, in many cases, does not adapt to this dynamic nature.

Applied UX Design for Artificial Intelligence explores this problem, addressing the challenges and opportunities in UX design for AI/ML systems, looking at best practices for designers, managers, and product creators, and showcasing how individuals from a non-technical background can collaborate effectively with AI and machine learning teams…(More)”.

Algorithms for Decision Making


Book by Mykel J. Kochenderfer, Tim A. Wheeler and Kyle H. Wray: “Automated decision-making systems or decision-support systems—used in applications that range from aircraft collision avoidance to breast cancer screening—must be designed to account for various sources of uncertainty while carefully balancing multiple objectives. This textbook provides a broad introduction to algorithms for decision making under uncertainty, covering the underlying mathematical problem formulations and the algorithms for solving them.

The book first addresses the problem of reasoning about uncertainty and objectives in simple decisions at a single point in time, and then turns to sequential decision problems in stochastic environments where the outcomes of our actions are uncertain. It goes on to address model uncertainty, when we do not start with a known model and must learn how to act through interaction with the environment; state uncertainty, in which we do not know the current state of the environment due to imperfect perceptual information; and decision contexts involving multiple agents. The book focuses primarily on planning and reinforcement learning, although some of the techniques presented draw on elements of supervised learning and optimization. Algorithms are implemented in the Julia programming language. Figures, examples, and exercises convey the intuition behind the various approaches presented…(More)”
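The book implements its algorithms in Julia; purely as a flavor of the subject matter, here is a minimal Python sketch (not taken from the book) of value iteration, a standard algorithm for sequential decision-making under uncertainty. The toy "machine repair" problem below is an illustrative assumption, not one of the book's examples.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, tol=1e-6):
    """Compute the optimal value of each state in a Markov decision process.
    transition(s, a) returns a list of (probability, next_state) pairs;
    reward(s, a) is the expected immediate reward; gamma discounts the future."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman backup: best expected discounted return over actions.
            best = max(
                reward(s, a) + gamma * sum(p * V[s2] for p, s2 in transition(s, a))
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy problem: a machine that earns money while it works, sometimes breaks,
# and can be repaired at a cost -- a decision under stochastic outcomes.
STATES = ["ok", "broken"]
ACTIONS = ["run", "repair"]

def transition(s, a):
    if a == "repair":
        return [(1.0, "ok")]          # repairing always restores the machine
    if s == "ok":
        return [(0.9, "ok"), (0.1, "broken")]  # running risks a breakdown
    return [(1.0, "broken")]          # a broken machine stays broken

def reward(s, a):
    if a == "repair":
        return -0.5                   # repairs cost money
    return 1.0 if s == "ok" else 0.0  # only a working machine earns
```

Each state's value converges to the best achievable expected discounted reward; here the working state is worth more than the broken one, and paying the repair cost is worthwhile because it restores that value.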

AI-powered cameras to enforce bus lanes


Article by Chris Teale: “New York’s Metropolitan Transportation Authority will use an automated camera system to ensure bus lanes in New York City are free from illegally parked vehicles.

The MTA is partnering with Hayden AI to deploy Automated Bus Lane Enforcement camera systems on 300 buses; the cameras will be mounted on the interior of the windshield and powered by artificial intelligence. The agency has the option to add the cameras to 200 more buses if it chooses.

Chris Carson, Hayden AI’s CEO and co-founder, said when the cameras detect an encroachment on a bus lane, they use real-time automated license plate recognition and edge computing to compile a packet of evidence that includes the time, date and location of the offense, as well as a brief video that shows the violator’s license plate. 

That information is encrypted and sent securely to the cloud, where MTA officials can access and analyze it for violations. If there is no encroachment on a bus lane, the cameras do not record anything…
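The workflow described above can be pictured with a small, entirely hypothetical sketch. The field names and detection format below are illustrative assumptions, not Hayden AI's actual design; the point is that the article's stated privacy property (no encroachment, no record) is a simple early-return in the pipeline.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EvidencePacket:
    """Hypothetical shape of the evidence the article describes: time,
    date, and location of the offense, plus a clip showing the plate."""
    plate: str
    timestamp: datetime
    latitude: float
    longitude: float
    video_clip: bytes

def maybe_build_packet(detection):
    # Stated privacy property: if there is no encroachment on the
    # bus lane, the camera records nothing at all.
    if not detection["encroaching"]:
        return None
    return EvidencePacket(
        plate=detection["plate"],
        timestamp=datetime.now(timezone.utc),
        latitude=detection["lat"],
        longitude=detection["lon"],
        video_clip=detection["clip"],
    )
```

In the described system a packet like this would then be encrypted and uploaded for MTA review; only the violation path produces any data at all.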

An MTA spokesperson said the agency will also use data from the system to identify the locations where vehicles most often block bus lanes. New York City has 140 miles of bus lanes and plans to build 150 more miles in the next four years, but congestion and lane violations by other road users slow the buses. The city already uses cameras and police patrols to attempt to enforce proper bus lane use…(More)”.

AI ethics: the case for including animals


Paper by Peter Singer & Yip Fai Tse: “The ethics of artificial intelligence, or AI ethics, is a rapidly growing field, and rightly so. While the range of issues and groups of stakeholders concerned by the field of AI ethics is expanding, with speculation about whether it extends even to the machines themselves, there is a group of sentient beings who are also affected by AI, but are rarely mentioned within the field of AI ethics—the nonhuman animals. This paper seeks to explore the kinds of impact AI has on nonhuman animals, the severity of these impacts, and their moral implications. We hope that this paper will facilitate the development of a new field of philosophical and technical research regarding the impacts of AI on animals, namely, the ethics of AI as it affects nonhuman animals…(More)”.

Towards Human-Centric Algorithmic Governance


Blog by Zeynep Engin: “It is no longer news to say that the capabilities afforded by Data Science, AI and their associated technologies (such as Digital Twins, Smart Cities, Ledger Systems and other platforms) are poised to revolutionise governance, radically transforming the way democratic processes work, citizen services are provided, and justice is delivered. Emerging applications range from the way election campaigns are run and how crises at population level are managed (e.g. pandemics) to everyday operations like simple parking enforcement and traffic management, and to decisions at critical individual junctures, such as hiring or sentencing decisions. What it means to be a ‘human’ is also a hot topic for both scholarly and everyday discussions, since our societal interactions and values are also shifting fast in an increasingly digital and data-driven world.

As a millennial who grew up in a ‘developing’ economy in the ’90s and later established a cross-sector career in a ‘developed’ economy in the fields of data for policy and algorithmic governance, I believe I can credibly claim a pertinent, hands-on experience of the transformation from a fully analogue world into a largely digital one. I started off trying hard to find sufficient printed information to refer to in my term papers at secondary school, gradually adapting to trying hard to extract useful information amongst the practically unlimited resources available online today. The world has become a lot more connected: communities are formed online, goods and services are customised to individual tastes and preferences, and work and education are increasingly hybrid, reducing dependency on the physical environment, geography and time zones. Despite all these developments in nearly every aspect of our lives, one thing that has persisted in the face of this change is the nature of collective decision-making, particularly at the civic/governmental level. It still comprises the same election cycles with more or less similar political incentives and working practices, and the same type of politicians, bureaucracies, hierarchies and networks making and executing important (and often suboptimal) decisions on behalf of the public. Unelected private sector stakeholders in the meantime are quick to fill the growing gap — they increasingly make policies that affect large populations and define the public discourse, primarily to maximise their profit behind their IP protection walls…(More)”.