How AI Could Revolutionize Diplomacy


Article by Andrew Moore: “More than a year into Russia’s war of aggression against Ukraine, there are few signs the conflict will end anytime soon. Ukraine’s success on the battlefield has been powered by the innovative use of new technologies, from aerial drones to open-source artificial intelligence (AI) systems. Yet ultimately, the war in Ukraine—like any other war—will end with negotiations. And although the conflict has spurred new approaches to warfare, diplomatic methods remain stuck in the 19th century.

Yet not even diplomacy—one of the world’s oldest professions—can resist the tide of innovation. New approaches could come from global movements such as the Peace Treaty Initiative, which seeks to reimagine incentives for peacemaking. But much of the change will come from adopting and adapting new technologies.

With advances in areas such as artificial intelligence, quantum computing, the internet of things, and distributed ledger technology, today’s emerging technologies will offer new tools and techniques for peacemaking that could impact every step of the process—from the earliest days of negotiations all the way to monitoring and enforcing agreements…(More)”.

Eye of the Beholder: Defining AI Bias Depends on Your Perspective


Article by Mike Barlow: “…Today’s conversations about AI bias tend to focus on high-visibility social issues such as racism, sexism, ageism, homophobia, transphobia, xenophobia, and economic inequality. But there are dozens and dozens of known biases (e.g., confirmation bias, hindsight bias, availability bias, anchoring bias, selection bias, loss aversion bias, outlier bias, survivorship bias, omitted variable bias, and many, many others). Jeff Desjardins, founder and editor-in-chief at Visual Capitalist, has published a fascinating infographic depicting 188 cognitive biases – and those are just the ones we know about.

Ana Chubinidze, founder of AdalanAI, a Berlin-based AI governance startup, worries that AIs will develop their own invisible biases. Currently, the term “AI bias” refers mostly to human biases that are embedded in historical data. “Things will become more difficult when AIs begin creating their own biases,” she says.

She foresees that AIs will find correlations in data and assume they are causal relationships—even if those relationships don’t exist in reality. Imagine, she says, an edtech system with an AI that poses increasingly difficult questions to students based on their ability to answer previous questions correctly. The AI would quickly develop a bias about which students are “smart” and which aren’t, even though we all know that answering questions correctly can depend on many factors, including hunger, fatigue, distraction, and anxiety. 

Nevertheless, the edtech AI’s “smarter” students would get challenging questions and the rest would get easier questions, resulting in unequal learning outcomes that might not be noticed until the semester is over—or might not be noticed at all. Worse yet, the AI’s bias would likely find its way into the system’s database and follow the students from one class to the next…

As we apply AI more widely and grapple with its implications, it becomes clear that bias itself is a slippery and imprecise term, especially when it is conflated with the idea of unfairness. Just because a solution to a particular problem appears “unbiased” doesn’t mean that it’s fair, and vice versa. 

“There is really no mathematical definition for fairness,” [Julia] Stoyanovich says. “Things that we talk about in general may or may not apply in practice. Any definitions of bias and fairness should be grounded in a particular domain. You have to ask, ‘Whom does the AI impact? What are the harms and who is harmed? What are the benefits and who benefits?’”…(More)”.
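The self-reinforcing feedback loop Chubinidze describes can be illustrated with a toy simulation. Everything below is a hypothetical sketch, not anything from the article: the difficulty-update rule, the numbers, and the function names are all assumptions. Two students have identical ability, but transient factors such as hunger or distraction degrade one student's answers more often, and the difficulty level the system settles on, which it treats as a measure of "smartness", diverges accordingly.

```python
import random

def simulate_adaptive_ai(true_ability, noise, rounds=50, seed=0):
    """Toy model of the adaptive edtech AI described above: difficulty
    rises after a correct answer, falls after a wrong one, and the
    final difficulty level is read as how "smart" the student is."""
    rng = random.Random(seed)
    difficulty = 5.0
    for _ in range(rounds):
        # Answering correctly depends on ability AND transient factors
        # (hunger, fatigue, distraction), modeled as a random penalty.
        performance = true_ability - rng.random() * noise
        if performance >= difficulty:
            difficulty += 0.5   # "smart" -> harder questions
        else:
            difficulty -= 0.5   # "struggling" -> easier questions
    return difficulty

def avg_label(ability, noise, seeds=20):
    # Average the final difficulty over several runs to smooth out luck.
    return sum(simulate_adaptive_ai(ability, noise, seed=s)
               for s in range(seeds)) / seeds

focused = avg_label(6.0, noise=0.5)     # rarely distracted
distracted = avg_label(6.0, noise=3.0)  # same ability, more off days
```

With identical `true_ability`, the distracted student settles at a visibly lower difficulty level, and it is that label, not the underlying ability, which would follow the student into the next class.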

AI Ethics


Textbook by Paula Boddington: “This book introduces readers to critical ethical concerns in the development and use of artificial intelligence. Offering clear and accessible information on central concepts and debates in AI ethics, it explores how related problems are now forcing us to address fundamental, age-old questions about human life, value, and meaning. In addition, the book shows how foundational and theoretical issues relate to concrete controversies, with an emphasis on understanding how ethical questions play out in practice.

All topics are explored in depth, with clear explanations of relevant debates in ethics and philosophy, drawing on both historical and current sources. Questions in AI ethics are explored in the context of related issues in technology, regulation, society, religion, and culture, to help readers gain a nuanced understanding of the scope of AI ethics within broader debates and concerns…(More)”.

The Moral Economy of High-Tech Modernism


Essay by Henry Farrell and Marion Fourcade: “While people in and around the tech industry debate whether algorithms are political at all, social scientists take the politics as a given, asking instead how this politics unfolds: how algorithms concretely govern. What we call “high-tech modernism”—the application of machine learning algorithms to organize our social, economic, and political life—has a dual logic. On the one hand, like traditional bureaucracy, it is an engine of classification, even if it categorizes people and things very differently. On the other, like the market, it provides a means of self-adjusting allocation, though its feedback loops work differently from the price system. Perhaps the most important consequence of high-tech modernism for the contemporary moral political economy is how it weaves hierarchy and data-gathering into the warp and woof of everyday life, replacing visible feedback loops with invisible ones, and suggesting that highly mediated outcomes are in fact the unmediated expression of people’s own true wishes…(More)”.

Law, AI, and Human Rights


Article by John Croker: “Technology has been at the heart of two injustices that courts have labelled significant miscarriages of justice. The first example will be familiar now to many people in the UK: colloquially known as the ‘Post Office’ or ‘Horizon’ scandal. The second is from Australia, where the Commonwealth Government sought to utilise AI to identify overpayment in the welfare system through what is colloquially known as the ‘Robodebt System’. The first example resulted in the most widespread miscarriage of justice in the UK legal system’s history. The second example was labelled “a shameful chapter” in government administration in Australia, led to the government unlawfully asserting debts amounting to $1.763 billion against 433,000 Australians, and is now the subject of a Royal Commission seeking to identify how public policy failures could have been made on such a significant scale.

Both examples show that where technology and AI go wrong, the scale of the injustice can result in unprecedented impacts across societies…(More)”.

Automating Public Services: Learning from Cancelled Systems


Report by Joanna Redden, Jessica Brand, Ina Sander and Harry Warne: “Pressure on public finances means that governments are trying to do more with less. Increasingly, policymakers are turning to technology to cut costs. But what if this technology doesn’t work as it should?

This report looks at the rise and fall of automated decision systems (ADS). If you’ve tried to get medical advice over the phone recently, you’ve got some experience of an ADS – a computer system or algorithm designed to help or replace human decision making. These sorts of systems are being used by governments to decide when and how to act. The stakes are high. For example, they’re being used to try to detect crime and spot fraud, and to determine whether child protective services should act.

This study identifies 61 occasions across Australia, Canada, Europe, New Zealand and the United States when ADS projects were cancelled or paused. From this evidence, we’ve made recommendations designed to increase transparency and to protect communities and individuals…(More)”.

Building Trust in AI: A Landscape Analysis of Government AI Programs


Paper by Susan Ariel Aaronson: “As countries around the world expand their use of artificial intelligence (AI), the Organisation for Economic Co-operation and Development (OECD) has developed the most comprehensive website on AI policy, the OECD.AI Policy Observatory. Although the website covers public policies on AI, the author of this paper found that many governments failed to evaluate or report on their AI initiatives. This lack of reporting is a missed opportunity for policy makers to learn from their programs (the author found that less than one percent of the programs listed on the OECD.AI website had been evaluated). In addition, the author found discrepancies between what governments said they were doing on the OECD.AI website and what they reported on their own websites. In some cases, there was no evidence of government actions; in other cases, links to government sites did not work. Evaluations of AI policies are important because they help governments demonstrate how they are building trust in both AI and AI governance, and that policy makers are accountable to their fellow citizens…(More)”.

The Right To Be Free From Automation


Essay by Ziyaad Bhorat: “Is it possible to free ourselves from automation? The idea sounds fanciful, if not outright absurd. Industrial and technological development have reached a planetary level, and automation, as the general substitution or augmentation of human work with artificial tools capable of completing tasks on their own, is the bedrock of all the technologies designed to save, assist and connect us. 

From industrial lathes to OpenAI’s ChatGPT, automation is one of the most groundbreaking achievements in the history of humanity. As a consequence of the human ingenuity and imagination involved in automating our tools, the sky is quite literally no longer a limit. 

But in thinking about our relationship to automation in contemporary life, my unease has grown. And I’m not alone — America’s Blueprint for an AI Bill of Rights and the European Union’s GDPR both express skepticism of automated tools and systems: the former warns against the “use of technology, data and automated systems in ways that threaten the rights of the American public,” while the latter enshrines the “right not to be subject to a decision based solely on automated processing.” 

If we look a little deeper, we find this uneasy language in other places where people have been guarding three important abilities against automated technologies. Historically, we have found these abilities so important that we now include them in various contemporary rights frameworks: the right to work, the right to know and understand the source of the things we consume, and the right to make our own decisions. Whether we like it or not, therefore, communities and individuals are already asserting the importance of protecting people from the ubiquity of automated tools and systems.

Consider the case of one of South Africa’s largest retailers, Pick n Pay, which in 2016 tried to introduce self-checkout technology in its retail stores. In post-Apartheid South Africa, trade unions are immensely powerful and unemployment persistently high, so any retail firm that wants to introduce technology that might affect the demand for labor faces huge challenges. After the country’s largest union federation threatened to boycott the new Pick n Pay machines, the company scrapped its pilot. 

As the sociologist Christopher Andrews writes in “The Overworked Consumer,” self-checkout technology is by no means a universally good thing. Firms that introduce it need to deal with new forms of theft, maintenance, and bottlenecks, while customers end up doing more work themselves. These issues are in addition to the ill fortunes of displaced workers…(More)”.

The Law of AI for Good


Paper by Orly Lobel: “Legal policy and scholarship are increasingly focused on regulating technology to safeguard against risks and harms, neglecting the ways in which the law should direct the use of new technology, and in particular artificial intelligence (AI), for positive purposes. This article pivots the debates about automation, finding that the focus on AI wrongs is descriptively inaccurate, undermining a balanced analysis of the benefits, potential, and risks involved in digital technology. Further, the focus on AI wrongs is normatively and prescriptively flawed, narrowing and distorting the law reforms currently dominating tech policy debates. The law-of-AI-wrongs focuses on reactive and defensive solutions to potential problems while obscuring the need to proactively direct and govern increasingly automated and datafied markets and societies. Analyzing a new Federal Trade Commission (FTC) report, the Biden administration’s 2022 AI Bill of Rights and American and European legislative reform efforts, including the Algorithmic Accountability Act of 2022, the Data Privacy and Protection Act of 2022, the European General Data Protection Regulation (GDPR) and the new draft EU AI Act, the article finds that governments are developing regulatory strategies that almost exclusively address the risks of AI while giving short shrift to its benefits. The policy focus on risks of digital technology is pervaded by logical fallacies and faulty assumptions, failing to evaluate AI in comparison to human decision-making and the status quo. The article presents a shift from the prevailing absolutist approach to one of comparative cost-benefit. The role of public policy should be to oversee digital advancements, verify capabilities, and scale and build public trust in the most promising technologies.

A more balanced regulatory approach to AI also illuminates tensions between current AI policies. Because AI requires better, more representative data, the right to privacy can conflict with the right to fair, unbiased, and accurate algorithmic decision-making. This article argues that the dominant policy frameworks regulating AI risks—emphasizing the right to human decision-making (human-in-the-loop) and the right to privacy (data minimization)—must be complemented with new corollary rights and duties: a right to automated decision-making (human-out-of-the-loop) and a right to complete and connected datasets (data maximization). Moreover, a shift to proactive governance of AI reveals the necessity for behavioral research on how to establish not only trustworthy AI, but also human rationality and trust in AI. Ironically, many of the legal protections currently proposed conflict with existing behavioral insights on human-machine trust. The article presents a blueprint for policymakers to engage in the deliberate study of how irrational aversion to automation can be mitigated through education, private-public governance, and smart policy design…(More)”.

Machine Learning as a Tool for Hypothesis Generation


Paper by Jens Ludwig & Sendhil Mullainathan: “While hypothesis testing is a highly formalized activity, hypothesis generation remains largely informal. We propose a systematic procedure to generate novel hypotheses about human behavior, which uses the capacity of machine learning algorithms to notice patterns people might not. We illustrate the procedure with a concrete application: judge decisions about who to jail. We begin with a striking fact: The defendant’s face alone matters greatly for the judge’s jailing decision. In fact, an algorithm given only the pixels in the defendant’s mugshot accounts for up to half of the predictable variation. We develop a procedure that allows human subjects to interact with this black-box algorithm to produce hypotheses about what in the face influences judge decisions. The procedure generates hypotheses that are both interpretable and novel: They are not explained by demographics (e.g. race) or existing psychology research; nor are they already known (even if tacitly) to people or even experts. Though these results are specific, our procedure is general. It provides a way to produce novel, interpretable hypotheses from any high-dimensional dataset (e.g. cell phones, satellites, online behavior, news headlines, corporate filings, and high-frequency time series). A central tenet of our paper is that hypothesis generation is in and of itself a valuable activity, and we hope this encourages future work in this largely “pre-scientific” stage of science…(More)”.
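The core quantitative move here (a predictor built only from high-dimensional inputs accounts for a sizable share of the variation in decisions) can be sketched on synthetic data. Everything below is an illustrative assumption: random features stand in for mugshot pixels, a linear probability model stands in for the paper's far richer predictor, and a simple R-squared plays the role of the "predictable variation" accounted for.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 2,000 cases, 50 "pixel" features per case.
n, d = 2000, 50
X = rng.normal(size=(n, d))

# A sparse, unknown signal links features to the binary decision
# (1 = jailed, 0 = released), with noise so the link is imperfect.
true_w = rng.normal(size=d) * (rng.random(d) < 0.2)
p = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p).astype(float)

# Fit a crude black-box predictor: least squares on centered outcomes.
w, *_ = np.linalg.lstsq(X, y - y.mean(), rcond=None)
y_hat = X @ w + y.mean()

# R^2: share of the variation in decisions the features account for.
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

The distinctive step in the paper comes after such a fit: interrogating the black box (for example, by morphing inputs along the algorithm's decision gradient and asking people to describe what changed) so that its predictive signal becomes interpretable, testable hypotheses.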