Essay by Henry Farrell and Marion Fourcade: “While people in and around the tech industry debate whether algorithms are political at all, social scientists take the politics as a given, asking instead how this politics unfolds: how algorithms concretely govern. What we call “high-tech modernism”—the application of machine learning algorithms to organize our social, economic, and political life—has a dual logic. On the one hand, like traditional bureaucracy, it is an engine of classification, even if it categorizes people and things very differently. On the other, like the market, it provides a means of self-adjusting allocation, though its feedback loops work differently from the price system. Perhaps the most important consequence of high-tech modernism for the contemporary moral political economy is how it weaves hierarchy and data-gathering into the warp and woof of everyday life, replacing visible feedback loops with invisible ones, and suggesting that highly mediated outcomes are in fact the unmediated expression of people’s own true wishes…(More)”.
Law, AI, and Human Rights
Article by John Croker: “Technology has been at the heart of two injustices that courts have labelled significant miscarriages of justice. The first example will be familiar now to many people in the UK: colloquially known as the ‘Post Office’ or ‘Horizon’ scandal. The second is from Australia, where the Commonwealth Government sought to utilise AI to identify overpayment in the welfare system through what is colloquially known as the ‘Robodebt System’. The first example resulted in the most widespread miscarriage of justice in the UK legal system’s history. The second example was labelled “a shameful chapter” in government administration in Australia and led to the government unlawfully asserting debts amounting to $1.763 billion against 433,000 Australians; it is now the subject of a Royal Commission seeking to identify how public policy failures could have been made on such a significant scale.
Both examples show that where technology and AI go wrong, the scale of the injustice can result in unprecedented impacts across societies….(More)”.
Automating Public Services: Learning from Cancelled Systems
Report by Joanna Redden, Jessica Brand, Ina Sander and Harry Warne: “Pressure on public finances means that governments are trying to do more with less. Increasingly, policymakers are turning to technology to cut costs. But what if this technology doesn’t work as it should?
This report looks at the rise and fall of automated decision systems (ADS). If you’ve tried to get medical advice over the phone recently you’ve got some experience of an ADS – a computer system or algorithm designed to help or replace human decision making. These sorts of systems are being used by governments to consider when and how to act. The stakes are high. For example, they’re being used to try to detect crime and spot fraud, and to determine whether child protective services should act.
This study identifies 61 occasions across Australia, Canada, Europe, New Zealand and the United States when ADS projects were cancelled or paused. From this evidence, we’ve made recommendations designed to increase transparency and to protect communities and individuals…(More)”.
Building Trust in AI: A Landscape Analysis of Government AI Programs
Paper by Susan Ariel Aaronson: “As countries around the world expand their use of artificial intelligence (AI), the Organisation for Economic Co-operation and Development (OECD) has developed the most comprehensive website on AI policy, the OECD.AI Policy Observatory. Although the website covers public policies on AI, the author of this paper found that many governments failed to evaluate or report on their AI initiatives. This lack of reporting is a missed opportunity for policy makers to learn from their programs (the author found that less than one percent of the programs listed on the OECD.AI website had been evaluated). In addition, the author found discrepancies between what governments said they were doing on the OECD.AI website and what they reported on their own websites. In some cases, there was no evidence of government actions; in other cases, links to government sites did not work. Evaluations of AI policies are important because they help governments demonstrate how they are building trust in both AI and AI governance and that policy makers are accountable to their fellow citizens…(More)”.
The Right To Be Free From Automation
Essay by Ziyaad Bhorat: “Is it possible to free ourselves from automation? The idea sounds fanciful, if not outright absurd. Industrial and technological development have reached a planetary level, and automation, as the general substitution or augmentation of human work with artificial tools capable of completing tasks on their own, is the bedrock of all the technologies designed to save, assist and connect us.
From industrial lathes to OpenAI’s ChatGPT, automation is one of the most groundbreaking achievements in the history of humanity. As a consequence of the human ingenuity and imagination involved in automating our tools, the sky is quite literally no longer a limit.
But in thinking about our relationship to automation in contemporary life, my unease has grown. And I’m not alone — America’s Blueprint for an AI Bill of Rights and the European Union’s GDPR both express skepticism of automated tools and systems: the “use of technology, data and automated systems in ways that threaten the rights of the American public”; the “right not to be subject to a decision based solely on automated processing.”
If we look a little deeper, we find this uneasy language in other places where people have been guarding three important abilities against automated technologies. Historically, we have found these abilities so important that we now include them in various contemporary rights frameworks: the right to work, the right to know and understand the source of the things we consume, and the right to make our own decisions. Whether we like it or not, therefore, communities and individuals are already asserting the importance of protecting people from the ubiquity of automated tools and systems.
Consider the case of one of South Africa’s largest retailers, Pick n Pay, which in 2016 tried to introduce self-checkout technology in its retail stores. In post-Apartheid South Africa, trade unions are immensely powerful and unemployment persistently high, so any retail firm that wants to introduce technology that might affect the demand for labor faces huge challenges. After the country’s largest union federation threatened to boycott the new Pick n Pay machines, the company scrapped its pilot.
As the sociologist Christopher Andrews writes in “The Overworked Consumer,” self-checkout technology is by no means a universally good thing. Firms that introduce it need to deal with new forms of theft, maintenance and bottlenecks, while customers end up doing more work themselves. These issues are in addition to the ill fortunes of displaced workers…(More)”.
The Law of AI for Good
Paper by Orly Lobel: “Legal policy and scholarship are increasingly focused on regulating technology to safeguard against risks and harms, neglecting the ways in which the law should direct the use of new technology, and in particular artificial intelligence (AI), for positive purposes. This article pivots the debates about automation, finding that the focus on AI wrongs is descriptively inaccurate, undermining a balanced analysis of the benefits, potential, and risks involved in digital technology. Further, the focus on AI wrongs is normatively and prescriptively flawed, narrowing and distorting the law reforms currently dominating tech policy debates. The law-of-AI-wrongs focuses on reactive and defensive solutions to potential problems while obscuring the need to proactively direct and govern increasingly automated and datafied markets and societies. Analyzing a new Federal Trade Commission (FTC) report, the Biden administration’s 2022 AI Bill of Rights and American and European legislative reform efforts, including the Algorithmic Accountability Act of 2022, the Data Privacy and Protection Act of 2022, the European General Data Protection Regulation (GDPR) and the new draft EU AI Act, the article finds that governments are developing regulatory strategies that almost exclusively address the risks of AI while giving short shrift to its benefits. The policy focus on risks of digital technology is pervaded by logical fallacies and faulty assumptions, failing to evaluate AI in comparison to human decision-making and the status quo. The article presents a shift from the prevailing absolutist approach to one of comparative cost-benefit. The role of public policy should be to oversee digital advancements, verify capabilities, and scale and build public trust in the most promising technologies.
A more balanced regulatory approach to AI also illuminates tensions between current AI policies. Because AI requires better, more representative data, the right to privacy can conflict with the right to fair, unbiased, and accurate algorithmic decision-making. This article argues that the dominant policy frameworks regulating AI risks—emphasizing the right to human decision-making (human-in-the-loop) and the right to privacy (data minimization)—must be complemented with new corollary rights and duties: a right to automated decision-making (human-out-of-the-loop) and a right to complete and connected datasets (data maximization). Moreover, a shift to proactive governance of AI reveals the necessity for behavioral research on how to establish not only trustworthy AI, but also human rationality and trust in AI. Ironically, many of the legal protections currently proposed conflict with existing behavioral insights on human-machine trust. The article presents a blueprint for policymakers to engage in the deliberate study of how irrational aversion to automation can be mitigated through education, private-public governance, and smart policy design…(More)”
Machine Learning as a Tool for Hypothesis Generation
Paper by Jens Ludwig & Sendhil Mullainathan: “While hypothesis testing is a highly formalized activity, hypothesis generation remains largely informal. We propose a systematic procedure to generate novel hypotheses about human behavior, which uses the capacity of machine learning algorithms to notice patterns people might not. We illustrate the procedure with a concrete application: judge decisions about who to jail. We begin with a striking fact: The defendant’s face alone matters greatly for the judge’s jailing decision. In fact, an algorithm given only the pixels in the defendant’s mugshot accounts for up to half of the predictable variation. We develop a procedure that allows human subjects to interact with this black-box algorithm to produce hypotheses about what in the face influences judge decisions. The procedure generates hypotheses that are both interpretable and novel: They are not explained by demographics (e.g. race) or existing psychology research; nor are they already known (even if tacitly) to people or even experts. Though these results are specific, our procedure is general. It provides a way to produce novel, interpretable hypotheses from any high-dimensional dataset (e.g. cell phones, satellites, online behavior, news headlines, corporate filings, and high-frequency time series). A central tenet of our paper is that hypothesis generation is in and of itself a valuable activity, and we hope this encourages future work in this largely “pre-scientific” stage of science…(More)”.
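The first step of the procedure the abstract describes — fitting a model to high-dimensional inputs alone and asking how much of a human decision it can predict — can be sketched in a few lines. This is a toy illustration on synthetic data, not the authors’ actual pipeline (which trains deep networks on real mugshots): the feature matrix standing in for pixels, the simulated “judge” decisions, and the logistic model are all assumptions made for the sake of the sketch.

```python
# Toy sketch: can image-like features alone predict a binary decision?
# Synthetic stand-in for the paper's setup (deep nets on real mugshots).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n, d = 2000, 64                     # 2000 "defendants", 64 pixel-like features
X = rng.normal(size=(n, d))         # stand-in for mugshot pixels
w = rng.normal(size=d)
latent = X @ w / np.sqrt(d)         # unobserved facial signal (simulated)
p = 1 / (1 + np.exp(-latent))       # probability of a "jail" decision
y = rng.binomial(1, p)              # simulated judge decisions

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

acc = model.score(X_te, y_te)
print(f"held-out accuracy from image features alone: {acc:.2f}")

# The hypothesis-generation step would then perturb the inputs the model
# relies on (e.g. morphing faces along its decision boundary) and ask human
# subjects to articulate, in words, what changed — turning a black-box
# predictor into candidate interpretable hypotheses.
```

Because the decisions are noisy by construction, even a perfect model cannot reach 100% accuracy here; the gap between chance and the model’s score is the “predictable variation” the features capture.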
Urban AI Guide
Guide by Popelka, S., Narvaez Zertuche, L., Beroche, H.: “The idea for this guide arose from conversations with city leaders, who were confronted with new technologies, like artificial intelligence, as a means of solving complex urban problems, but who felt they lacked the background knowledge to properly engage with and evaluate the solutions. In some instances, this knowledge gap produced a barrier to project implementation or led to unintended project outcomes.
The guide begins with a literature review, presenting the state of the art in research on urban artificial intelligence. It then diagrams and describes an “urban AI anatomy,” outlining and explaining the components that make up an urban AI system. Insights from experts in the Urban AI community enrich this section, illuminating considerations involved in each component. Finally, the guide concludes with an in-depth examination of three case studies: water meter lifecycle in Winnipeg, Canada; curb digitization and planning in Los Angeles, USA; and air quality monitoring in Vilnius, Lithuania. Collectively, the case studies highlight the diversity of ways in which artificial intelligence can be operationalized in urban contexts, as well as the steps and requirements necessary to implement an urban AI project.
Since the field of urban AI is constantly evolving, we anticipate updating the guide annually. Please consider filling out the contribution form if you have an urban AI use case that has been operationalized. We may contact you to include the use case as a case study in a future edition of the guide.
As a continuation of the guide, we offer customized workshops on urban AI, oriented toward municipalities and other urban stakeholders who are interested in learning more about how artificial intelligence interacts in urban environments. Please contact us if you would like more information on this program…(More)”.
The False Promise of ChatGPT
Article by Noam Chomsky: “…OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach…(More)”.
I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique
Book by Tomas Chamorro-Premuzic: “For readers of “Sapiens” and “Homo Deus” and viewers of “The Social Dilemma,” psychologist Tomas Chamorro-Premuzic tackles one of the biggest questions facing our species: Will we use artificial intelligence to improve the way we work and live, or will we allow it to alienate us? It’s no secret that AI is changing the way we live, work, love, and entertain ourselves. Dating apps are using AI to pick our potential partners. Retailers are using AI to predict our behavior and desires. Rogue actors are using AI to persuade us with bots and misinformation. Companies are using AI to hire us–or not. In “I, Human,” psychologist Tomas Chamorro-Premuzic takes readers on an enthralling and eye-opening journey across the AI landscape. Though AI has the potential to change our lives for the better, he argues, it is also worsening our bad tendencies, making us more distracted, selfish, biased, narcissistic, entitled, predictable, and impatient. It doesn’t have to be this way. Filled with fascinating insights about human behavior and our complicated relationship with technology, “I, Human” will help us stand out and thrive when many of our decisions are being made for us. To do so, we’ll need to double down on our curiosity, adaptability, and emotional intelligence while relying on the lost virtues of empathy, humility, and self-control. This is just the beginning. As AI becomes smarter and more humanlike, our societies, our economies, and our humanity will undergo the most dramatic changes we’ve seen since the Industrial Revolution. Some of these changes will enhance our species. Others may dehumanize us and make us more machinelike in our interactions with people. It’s up to us to adapt and determine how we want to live and work. The choice is ours. What will we decide?…(More)”.