Artificial Intelligence Needs Both Pragmatists and Blue-Sky Visionaries


Essay by Ben Shneiderman: “Artificial intelligence thinkers seem to emerge from two communities. One is what I call blue-sky visionaries who speculate about the future possibilities of the technology, invoking utopian fantasies to generate excitement. Blue-sky ideas are compelling but are often clouded over by unrealistic visions and the ethical challenges of what can and should be built.

In contrast, what I call muddy-boots pragmatists are problem- and solution-focused. They want to reduce the harms that widely used AI-infused systems can create. They focus on fixing biased and flawed systems, such as in facial recognition systems that often mistakenly identify people as criminals or violate privacy. The pragmatists want to reduce deadly medical mistakes that AI can make, and steer self-driving cars to be safe-driving cars. Their goal is also to improve AI-based decisions about mortgage loans, college admissions, job hiring and parole granting.

As a computer science professor with a long history of designing innovative applications that have been widely implemented, I believe that the blue-sky visionaries would benefit from heeding the thoughtful messages of the muddy-boots realists. Combining the work of both camps is more likely to produce the beneficial outcomes that will lead to successful next-generation technologies.

While the futuristic thinking of the blue-sky speculators sparks our awe and earns much of the funding, muddy-boots thinking reminds us that some AI applications threaten privacy, spread misinformation and are decidedly racist, sexist and otherwise ethically dubious. Machines are undeniably part of our future, but will they serve all future humans equally? I think the caution and practicality of the muddy-boots camp will benefit humanity in the short and long run by ensuring diversity and equality in the development of the algorithms that increasingly run our day-to-day lives. If blue-sky thinkers integrate the concerns of muddy-boots realists into their designs, they can create future technologies that are more likely to advance human values, rights and dignity…(More)”.

Supporting peace negotiations in the Yemen war through machine learning


Paper by Miguel Arana-Catania, Felix-Anselm van Lier and Rob Procter: “Today’s conflicts are becoming increasingly complex, fluid, and fragmented, often involving a host of national and international actors with multiple and often divergent interests. This development poses significant challenges for conflict mediation, as mediators struggle to make sense of conflict dynamics, such as the range of conflict parties and the evolution of their political positions, the distinction between relevant and less relevant actors in peace-making, or the identification of key conflict issues and their interdependence. International peace efforts appear ill-equipped to successfully address these challenges. While technology is already being experimented with and used in a range of conflict-related fields, such as conflict prediction or information gathering, less attention has been given to how technology can contribute to conflict mediation. This case study contributes to emerging research on the use of state-of-the-art machine learning technologies and techniques in conflict mediation processes. Using dialogue transcripts from peace negotiations in Yemen, this study shows how machine learning can effectively support mediating teams by providing them with tools for knowledge management, extraction and conflict analysis. Apart from illustrating the potential of machine learning tools in conflict mediation, the article also emphasizes the importance of an interdisciplinary, participatory co-creation methodology for developing context-sensitive and targeted tools and ensuring their meaningful and responsible implementation…(More)”.
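
The paper’s actual pipeline is not reproduced here, but the kind of tool it describes can be sketched in a few lines. The snippet below is a minimal illustration assuming scikit-learn and invented dialogue segments (not the real Yemen transcripts); it factors negotiation text into recurring issue clusters of the sort a mediating team might review:

```python
# A minimal sketch of transcript topic extraction; the segments below are
# invented for illustration and are NOT from the Yemen negotiations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

segments = [
    "We propose a ceasefire around the port before any prisoner exchange.",
    "Humanitarian aid convoys must reach the city without inspection delays.",
    "The prisoner exchange list should be verified by a neutral committee.",
    "Port revenues could fund salaries if the ceasefire holds.",
    "Aid access and salary payments are linked issues for our delegation.",
    "A monitored ceasefire is the precondition for talks on governance.",
]

# Weight terms by distinctiveness, then factor into a few 'issue' topics.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(segments)
model = NMF(n_components=2, random_state=0)
model.fit(X)

terms = vectorizer.get_feature_names_out()
for i, component in enumerate(model.components_):
    top = [terms[j] for j in component.argsort()[-4:][::-1]]
    print(f"Issue cluster {i}: {', '.join(top)}")
```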

Superhuman science: How artificial intelligence may impact innovation


Working paper by Ajay Agrawal, John McHale, and Alexander Oettl: “New product innovation in fields like drug discovery and material science can be characterized as combinatorial search over a vast range of possibilities. Modeling innovation as a costly multi-stage search process, we explore how improvements in Artificial Intelligence (AI) could affect the productivity of the discovery pipeline in allowing improved prioritization of innovations that flow through that pipeline. We show how AI-aided prediction can increase the expected value of innovation and can increase or decrease the demand for downstream testing, depending on the type of innovation, and examine how AI can reduce costs associated with well-defined bottlenecks in the discovery pipeline. Finally, we discuss the critical role that policy can play to mitigate potential market failures associated with access to and provision of data as well as the provision of training necessary to more closely approach the socially optimal level of productivity enhancing innovations enabled by this technology…(More)”.
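
The mechanism is easy to see in a toy simulation. The sketch below is not the paper’s formal model; it simply assumes candidates with a latent quality, a noisy AI prediction of that quality, and a fixed budget of costly downstream tests:

```python
# A toy simulation of prioritized combinatorial search, not the paper's
# formal model: an AI sees a noisy signal of each candidate's latent
# quality, and only the top-ranked candidates enter costly testing.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_tested, noise = 10_000, 100, 1.0

quality = rng.normal(size=n_candidates)                   # latent value
signal = quality + noise * rng.normal(size=n_candidates)  # AI prediction

by_ai = np.argsort(signal)[-n_tested:]                    # AI-prioritized
by_chance = rng.choice(n_candidates, n_tested, replace=False)  # baseline

print("mean quality, AI-prioritized:", quality[by_ai].mean().round(2))
print("mean quality, random:        ", quality[by_chance].mean().round(2))
```

Sharpening the prediction (lowering the noise parameter in the sketch) raises the expected quality of each costly test, which is the channel through which AI-aided prediction can change the expected value of innovation and the demand for downstream testing.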

Can Artificial Intelligence Improve Gender Equality? Evidence from a Natural Experiment


Paper by Zhengyang Bao and Difang Huang: “Gender stereotypes and discriminatory practices in the education system are important reasons for women’s under-representation in many fields. How can we create a gender-neutral learning environment when teachers’ gender composition and mindset are slow to change? Artificial intelligence (AI)’s recent development provides a way to achieve this goal. Engineers can make AI trainers appear gender neutral and not take gender-related information as input. We use data from a natural experiment in which AI trainers replaced some human teachers for a male-dominated strategic board game to test the effectiveness of such AI training. The introduction of AI improves both boys’ and girls’ performance faster and reduces the pre-existing gender gap. Class recordings suggest that AI trainers’ gender-neutral emotional status can partly explain the improvement in gender equality. We provide the first evidence demonstrating AI’s potential to promote equality for society…(More)”.
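
The comparison such a natural experiment invites can be sketched with a difference-in-differences-style regression. The data below are simulated and the variable names are ours, not the authors’; the interaction term estimates how much the gender gap changes when an AI trainer replaces a human teacher:

```python
# Simulated data and a difference-in-differences-style regression; the
# setup and variable names are ours, not the authors'.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2_000
female = rng.integers(0, 2, n)
ai = rng.integers(0, 2, n)  # 1 if the student was taught by an AI trainer

# Outcome with a baseline gender gap that narrows under AI trainers.
score = 50 + 2 * ai - 3 * female + 2 * ai * female + rng.normal(0, 5, n)
df = pd.DataFrame({"score": score, "female": female, "ai": ai})

# The ai:female coefficient estimates the change in the gender gap.
fit = smf.ols("score ~ ai * female", data=df).fit()
print(fit.summary().tables[1])
```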

What does AI Localism look like in action? A new series examining use cases on how cities govern AI


Series by Uma Kalkar, Sara Marcucci, Salwa Mansuri, and Stefaan Verhulst: “…We call local instances of AI governance ‘AI Localism.’ AI Localism refers to the governance actions—which include, but are not limited to, regulations, legislations, task forces, public committees, and locally-developed tools—taken by local decision-makers to address the use of AI within a city or regional state.

It is necessary to note, however, that the presence of AI Localism does not mean that robust national- and state-level AI policy is not needed. Whereas local governance is well suited to addressing local, micro-level issues, for instance by tailoring policies to specific AI use circumstances, national AI governance should act as a key tool that complements local efforts and provides cities with a cohesive, guiding direction.

Finally, it is important to note that AI Localism is not necessarily good governance of AI at the local level. Indeed, there have been several instances where local efforts to regulate and employ AI have encroached on public freedoms and hurt the public good….

Examining the current state of play in AI localism

To this end, The Governance Lab (The GovLab) has created the AI Localism project to build a knowledge base and inform a taxonomy of the dimensions of local AI governance (see below). This initiative began in 2020 with the AI Localism canvas, which captures the frames under which local governance methods are developing. This series presents current examples of AI localism across the seven canvas frames of:

  • Principles and Rights: foundational requirements and constraints of AI and algorithmic use in the public sector;
  • Laws and Policies: regulation to codify the above for public and private sectors;
  • Procurement: mandates around the acquisition and use of AI systems, such as in employment and hiring practices;
  • Engagement: public involvement in AI use and limitations;
  • Accountability and Oversight: requirements for periodic reporting and auditing of AI use;
  • Transparency: consumer awareness about AI and algorithm use; and
  • Literacy: avenues to educate policymakers and the public about AI and data.

In this eight-part series, released weekly, we will present current examples of each frame of the AI localism canvas to identify themes among city- and state-led legislative actions. We end with ten lessons on AI localism for policymakers, data and AI experts, and the informed public to keep in mind as cities grow increasingly ‘smarter.’…(More)”.

AI Ethics: Global Perspectives


The AI Ethics: Global Perspectives course released three new video modules this week.

  • In “AI Ethics and Hate Speech”, Maha Jouini from the African Center for Artificial Intelligence and Digital Technology explores the intersection between AI and hate speech in the context of the MENA region.
  • Maxime Ducret at the University of Lyon and Carl Mörch from the FARI AI Institute for the Common Good introduce the ethical implications of the use of AI technologies in the field of dentistry in their module “How Your Teeth, Your Smile and AI Ethics are Related”.
  • And finally, in “Ethics in AI for Peace”, AI for Peace’s Branka Panic talks about how the “algo age” brought with it many technical, legal, and ethical questions that exceeded the scope of existing peacebuilding and peacetech ethics frameworks.

To watch these lectures in full and register for the course, visit our website.

The Low Threshold for Face Recognition in New Delhi


Article by Varsha Bansal: “Indian law enforcement is starting to place huge importance on facial recognition technology. Delhi police, looking into identifying people involved in civil unrest in northern India in the past few years, said that they would consider 80 percent accuracy and above as a “positive” match, according to documents obtained by the Internet Freedom Foundation through a public records request.

Facial recognition’s arrival in India’s capital region marks the expansion of Indian law enforcement officials using facial recognition data as evidence for potential prosecution, ringing alarm bells among privacy and civil liberties experts. There are also concerns about the 80 percent accuracy threshold, which critics say is arbitrary and far too low, given the potential consequences for those marked as a match. India’s lack of a comprehensive data protection law makes matters even more concerning.

The documents further state that even if a match is under 80 percent, it would be considered a “false positive” rather than a negative, which would make that individual “subject to due verification with other corroborative evidence.”

“This means that even though facial recognition is not giving them the result that they themselves have decided is the threshold, they will continue to investigate,” says Anushka Jain, associate policy counsel for surveillance and technology with the IFF, who filed for this information. “This could lead to harassment of the individual just because the technology is saying that they look similar to the person the police are looking for.” She added that this move by the Delhi Police could also result in harassment of people from communities that have been historically targeted by law enforcement officials…(More)”
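
The decision rule the documents describe fits in a few lines of code. The function below is our paraphrase of that rule, not Delhi Police software; the 0.80 cut-off comes from the article:

```python
# Our paraphrase of the decision rule described in the documents; this is
# illustrative logic, not Delhi Police software.
def classify_match(similarity: float, threshold: float = 0.80) -> str:
    if similarity >= threshold:
        return "positive match"
    # Below-threshold results are labelled 'false positive' rather than
    # negative and remain 'subject to due verification with other
    # corroborative evidence' -- the person stays under scrutiny.
    return "false positive (subject to further verification)"

for score in (0.92, 0.81, 0.79, 0.40):
    print(f"score {score:.2f}: {classify_match(score)}")
```

The critics’ point is visible in the logic: no score, however low, clears a person; every branch keeps the investigation open.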

Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous


Essay by Henry Farrell, Abraham Newman, and Jeremy Wallace: “In policy circles, discussions about artificial intelligence invariably pit China against the United States in a race for technological supremacy. If the key resource is data, then China, with its billion-plus citizens and lax protections against state surveillance, seems destined to win. Kai-Fu Lee, a famous computer scientist, has claimed that data is the new oil, and China the new OPEC. If superior technology is what provides the edge, however, then the United States, with its world-class university system and talented workforce, still has a chance to come out ahead. For either country, pundits assume that superiority in AI will lead naturally to broader economic and military superiority.

But thinking about AI in terms of a race for dominance misses the more fundamental ways in which AI is transforming global politics. AI will not transform the rivalry between powers so much as it will transform the rivals themselves. The United States is a democracy, whereas China is an authoritarian regime, and machine learning challenges each political system in its own way. The challenges to democracies such as the United States are all too visible. Machine learning may increase polarization—reengineering the online world to promote political division. It will certainly increase disinformation in the future, generating convincing fake speech at scale. The challenges to autocracies are more subtle but possibly more corrosive. Just as machine learning reflects and reinforces the divisions of democracy, it may confound autocracies, creating a false appearance of consensus and concealing underlying societal fissures until it is too late.

Early pioneers of AI, including the political scientist Herbert Simon, realized that AI technology has more in common with markets, bureaucracies, and political institutions than with simple engineering applications. Another pioneer of artificial intelligence, Norbert Wiener, described AI as a “cybernetic” system—one that can respond and adapt to feedback. Neither Simon nor Wiener anticipated how machine learning would dominate AI, but its evolution fits with their way of thinking. Facebook and Google use machine learning as the analytic engine of a self-correcting system, which continually updates its understanding of the data depending on whether its predictions succeed or fail. It is this loop between statistical analysis and feedback from the environment that has made machine learning such a formidable force…(More)”
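
That loop can be made concrete with a generic online learner. The sketch below assumes scikit-learn and invented “click” feedback; it illustrates a self-correcting cybernetic system in Wiener’s sense, not Facebook’s or Google’s actual machinery:

```python
# A generic self-correcting prediction loop: predict, observe feedback,
# update. Invented 'click' behavior; not any platform's actual system.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
model = SGDClassifier(loss="log_loss")
true_weights = np.array([1.5, -2.0])  # stand-in for real user behavior
classes = np.array([0, 1])

for step in range(1000):
    x = rng.normal(size=(1, 2))                              # item features
    clicked = int((x @ true_weights)[0] + rng.normal() > 0)  # feedback
    model.partial_fit(x, [clicked], classes=classes)         # self-correction

print("learned weights:", model.coef_.round(2))
```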

Voices in the Code: A Story about People, Their Values, and the Algorithm They Made


Book by David G. Robinson: “Algorithms–rules written into software–shape key moments in our lives: from who gets hired or admitted to a top public school, to who should go to jail or receive scarce public benefits. Today, high-stakes software is rarely open to scrutiny, but its code navigates moral questions: Which of a person’s traits are fair to consider as part of a job application? Who deserves priority in accessing scarce public resources, whether those are school seats, housing, or medicine? When someone first appears in a courtroom, how should their freedom be weighed against the risks they might pose to others?

Policymakers and the public often find algorithms to be complex, opaque and intimidating—and it can be tempting to pretend that hard moral questions have simple technological answers. But that approach leaves technical experts holding the moral microphone, and it stops people who lack technical expertise from making their voices heard. Today, policymakers and scholars are seeking better ways to share the moral decision-making within high-stakes software — exploring ideas like public participation, transparency, forecasting, and algorithmic audits. But there are few real examples of those techniques in use.

In Voices in the Code, scholar David G. Robinson tells the story of how one community built a life-and-death algorithm in a relatively inclusive, accountable way. Between 2004 and 2014, a diverse group of patients, surgeons, clinicians, data scientists, public officials and advocates collaborated and compromised to build a new transplant matching algorithm – a system to offer donated kidneys to particular patients from the U.S. national waiting list…(More)”.

China May Be Chasing Impossible Dream by Trying to Harness Internet Algorithms


Article by Karen Hao: “China’s powerful cyberspace regulator has taken the first step in a pioneering—and uncertain—government effort to rein in the automated systems that shape the internet.

Earlier this month, the Cyberspace Administration of China published summaries of 30 core algorithms belonging to two dozen of the country’s most influential internet companies, including TikTok owner ByteDance Ltd., e-commerce behemoth Alibaba Group Holding Ltd. and Tencent Holdings Ltd., owner of China’s ubiquitous WeChat super app.

The milestone marks the first systematic effort by a regulator to compel internet companies to reveal information about the technologies powering their platforms, which have shown the capacity to radically alter everything from pop culture to politics. It also puts Beijing on a path that some technology experts say few governments, if any, are equipped to handle….

One important question the effort raises, algorithm experts say, is whether direct government regulation of algorithms is practically possible.

The majority of today’s internet platform algorithms are based on a technology called machine learning, which automates decisions such as ad-targeting by learning to predict user behaviors from vast repositories of data. Unlike traditional algorithms that contain explicit rules coded by engineers, most machine-learning systems are black boxes, making it hard to decipher their logic or anticipate the consequences of their use.
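
The contrast is easy to demonstrate. In the sketch below (invented data and features), the first targeting rule can be audited line by line, while the second learns a pattern that no single legible rule states, which is part of what makes disclosure mandates hard to act on:

```python
# Invented data and features. The first targeter is an explicit rule an
# auditor can read; the second encodes its 'rules' in fitted parameters.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def rule_based_target(age: int, visits_per_day: float) -> bool:
    # Explicit, inspectable rule written by an engineer.
    return age < 30 and visits_per_day > 5

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2))             # opaque behavioral features
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # pattern no single rule states
model = GradientBoostingClassifier().fit(X, y)

print("rule-based decision:", rule_based_target(25, 7.0))
print("learned 'rules' span",
      sum(t[0].tree_.node_count for t in model.estimators_), "tree nodes")
```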

Beijing’s interest in regulating algorithms started in 2020, after TikTok sought an American buyer to avoid being banned in the U.S., according to people familiar with the government’s thinking. When several bidders for the short-video platform lost interest after Chinese regulators announced new export controls on information-recommendation technology, it tipped off Beijing to the importance of algorithms, the people said…(More)”.