How to stop our cities from being turned into AI jungles


Stefaan G. Verhulst at The Conversation: “As artificial intelligence grows more ubiquitous, its potential and the challenges it presents are coming increasingly into focus. How we balance the risks and opportunities is shaping up as one of the defining questions of our era. In much the same way that cities have emerged as hubs of innovation in culture, politics, and commerce, so they are defining the frontiers of AI governance.

Some examples of how cities have been taking the lead include the Cities Coalition for Digital Rights, the Montreal Declaration for Responsible AI, and the Open Dialogue on AI Ethics. Others can be found in San Francisco’s ban on facial-recognition technology, and in New York City’s push to regulate the sale of automated hiring systems and its creation of an algorithms management and policy officer. Urban institutes, universities and other educational centres have also been forging ahead with a range of AI ethics initiatives.

These efforts point to an emerging paradigm that has been referred to as AI Localism. It’s part of a larger phenomenon often called New Localism, which involves cities taking the lead in regulation and policymaking to develop context-specific approaches to a variety of problems and challenges. We have also seen an increased uptake of city-centric approaches within international law frameworks.

Below are ten principles to help systematise our approach to AI Localism. Considered together, they add up to an incipient framework for implementing and assessing initiatives around the world:…(More)”.

Working with AI: Real Stories of Human-Machine Collaboration


Book by Thomas H. Davenport and Steven M. Miller: “This book breaks through both the hype and the doom-and-gloom surrounding automation and the deployment of artificial intelligence-enabled—“smart”—systems at work. Management and technology experts Thomas Davenport and Steven Miller show that, contrary to widespread predictions, prescriptions, and denunciations, AI is not primarily a job destroyer. Rather, AI changes the way we work—by taking over some tasks but not entire jobs, freeing people to do other, more important and more challenging work. By offering detailed, real-world case studies of AI-augmented jobs in settings that range from finance to the factory floor, Davenport and Miller also show that AI in the workplace is not the stuff of futuristic speculation. It is happening now to many companies and workers. These cases include a digital system for life insurance underwriting that analyzes applications and third-party data in real time, allowing human underwriters to focus on more complex cases; an intelligent telemedicine platform with a chat-based interface; a machine-learning system that identifies impending train maintenance issues by analyzing diesel fuel samples; and Flippy, a robotic assistant for fast food preparation. For each one, Davenport and Miller describe in detail the work context for the system, interviewing job incumbents, managers, and technology vendors. Short “insight” chapters draw out common themes and consider the implications of human collaboration with smart systems…(More)”.

Essential Elements and Ethical Principles for Trustworthy Artificial Intelligence Adoption in Courts


Paper by Carlos E. Jimenez-Gomez and Jesus Cano Carrillo: “Tasks in courts have rapidly evolved from manual to digital work. In these innovation processes, theory and practice have demonstrated that adopting technology per se is not the right path. Innovation in courts requires specific plans for digital transformation, including analysis, programmatic changes, or skills. Artificial Intelligence (AI) is not an exception.
The use of AI in courts is not futuristic. From efficiency to decision-making support, AI-based tools are already being used by U.S. courts. To cite some examples, AI tools allow the discovery of divergences, disparities, and dissonances in jurisdictional activity. At a higher level, AI helps improve internal organization. AI supports judicial decision consistency by exploiting a large judicial knowledge base in the form of big data, and it makes the judge’s work more agile through pattern and linguistic recognition in documents, identifying schemes and conceptualizations.
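
As a purely illustrative aside (ours, not the paper’s), a consistency-support tool of the kind described might retrieve the past rulings most similar to a draft decision. Below is a minimal sketch, assuming TF-IDF vectors and cosine similarity; the example texts and variable names are invented.

```python
# Illustrative sketch: retrieving similar past rulings to support
# judicial decision consistency. Not from the paper; data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_rulings = [
    "Deposit withheld without itemized damages; judgment for tenant.",
    "Landlord provided itemized repair costs; partial retention upheld.",
    "Dispute dismissed for lack of written lease evidence.",
]
draft_decision = "Deposit retained with no itemized list of damages."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(past_rulings + [draft_decision])

# Compare the draft (last row) against every past ruling.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for ruling, score in sorted(zip(past_rulings, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {ruling}")
```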

AI could bring considerable benefits to the judicial system. However, the risks and challenges are also enormous, posing unique hurdles for user trust…

This article defines AI in relation to courts in order to understand its challenges and implications, and reviews AI components with a special focus on the characteristics of trustworthy AI. It also examines the importance of a new policy and regulatory framework, and makes recommendations to avoid major problems…(More)”

Artificial Intelligence Needs Both Pragmatists and Blue-Sky Visionaries


Essay by Ben Shneiderman: “Artificial intelligence thinkers seem to emerge from two communities. One is what I call blue-sky visionaries who speculate about the future possibilities of the technology, invoking utopian fantasies to generate excitement. Blue-sky ideas are compelling but are often clouded over by unrealistic visions and the ethical challenges of what can and should be built.

In contrast, what I call muddy-boots pragmatists are problem- and solution-focused. They want to reduce the harms that widely used AI-infused systems can create. They focus on fixing biased and flawed systems, such as facial recognition systems that often mistakenly identify people as criminals or violate privacy. The pragmatists want to reduce deadly medical mistakes that AI can make, and steer self-driving cars to be safe-driving cars. Their goal is also to improve AI-based decisions about mortgage loans, college admissions, job hiring and parole granting.

As a computer science professor with a long history of designing innovative applications that have been widely implemented, I believe that the blue-sky visionaries would benefit from heeding the thoughtful messages of the muddy-boots realists. Combining the work of both camps is more likely to produce the beneficial outcomes that will lead to successful next-generation technologies.

While the futuristic thinking of the blue-sky speculators sparks our awe and earns much of the funding, muddy-boots thinking reminds us that some AI applications threaten privacy, spread misinformation and are decidedly racist, sexist and otherwise ethically dubious. Machines are undeniably part of our future, but will they serve all future humans equally? I think the caution and practicality of the muddy-boots camp will benefit humanity in the short and long run by ensuring diversity and equality in the development of the algorithms that increasingly run our day-to-day lives. If blue-sky thinkers integrate the concerns of muddy-boots realists into their designs, they can create future technologies that are more likely to advance human values, rights and dignity…(More)”.

Supporting peace negotiations in the Yemen war through machine learning


Paper by Miguel Arana-Catania, Felix-Anselm van Lier and Rob Procter: “Today’s conflicts are becoming increasingly complex, fluid, and fragmented, often involving a host of national and international actors with multiple and often divergent interests. This development poses significant challenges for conflict mediation, as mediators struggle to make sense of conflict dynamics, such as the range of conflict parties and the evolution of their political positions, the distinction between relevant and less relevant actors in peace-making, or the identification of key conflict issues and their interdependence. International peace efforts appear ill-equipped to successfully address these challenges. While technology is already being experimented with and used in a range of conflict-related fields, such as conflict prediction or information gathering, less attention has been given to how technology can contribute to conflict mediation. This case study contributes to emerging research on the use of state-of-the-art machine learning technologies and techniques in conflict mediation processes. Using dialogue transcripts from peace negotiations in Yemen, this study shows how machine learning can effectively support mediation teams by providing them with tools for knowledge management, extraction and conflict analysis. Apart from illustrating the potential of machine learning tools in conflict mediation, the article also emphasizes the importance of an interdisciplinary and participatory co-creation methodology for developing context-sensitive and targeted tools and for ensuring meaningful and responsible implementation…(More)”.
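
The paper’s own pipeline is not reproduced here; as a hedged illustration of the kind of knowledge extraction it describes, recurring negotiation issues could be surfaced from transcript segments with a small topic model. Everything below (the segments, names, and component count) is an assumption made for the sketch.

```python
# Illustrative only: surfacing recurring issues in negotiation
# transcripts with a small topic model. Not the authors' pipeline;
# the segments below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

segments = [
    "Parties discussed reopening the port for humanitarian shipments.",
    "Dispute over prisoner exchange lists stalled the session.",
    "Ceasefire monitoring along the coastal corridor was proposed.",
    "Humanitarian access and port operations dominated the agenda.",
    "Verification of prisoner releases raised objections.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(segments)

# Factor the term matrix into a handful of latent "issues".
model = NMF(n_components=2, random_state=0)
model.fit(tfidf)

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(model.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:4]]
    print(f"issue {i}: {', '.join(top)}")
```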

Superhuman science: How artificial intelligence may impact innovation


Working paper by Ajay Agrawal, John McHale, and Alexander Oettl: “New product innovation in fields like drug discovery and material science can be characterized as combinatorial search over a vast range of possibilities. Modeling innovation as a costly multi-stage search process, we explore how improvements in Artificial Intelligence (AI) could affect the productivity of the discovery pipeline by allowing improved prioritization of innovations that flow through that pipeline. We show how AI-aided prediction can increase the expected value of innovation and can increase or decrease the demand for downstream testing, depending on the type of innovation, and examine how AI can reduce costs associated with well-defined bottlenecks in the discovery pipeline. Finally, we discuss the critical role that policy can play to mitigate potential market failures associated with access to and provision of data as well as the provision of training necessary to more closely approach the socially optimal level of productivity enhancing innovations enabled by this technology…(More)”.
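
A stylized numerical sketch of this prioritization logic (our construction, not the paper’s formal model): each candidate gets an AI-predicted success probability, and only candidates whose expected payoff exceeds the testing cost proceed downstream. All numbers and names are invented.

```python
# Stylized sketch of AI-aided prioritization in a discovery pipeline.
# Numbers and names are invented; this is not the paper's formal model.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    p_success: float  # AI-predicted probability the candidate works
    payoff: float     # value realized if it succeeds
    test_cost: float  # cost of downstream testing

candidates = [
    Candidate("compound-A", 0.30, 100.0, 10.0),
    Candidate("compound-B", 0.05, 100.0, 10.0),
    Candidate("compound-C", 0.15, 100.0, 10.0),
]

# Test only candidates whose expected payoff exceeds the testing cost;
# better prediction screens out weak candidates before costs are sunk.
worth_testing = [c for c in candidates if c.p_success * c.payoff > c.test_cost]
expected_value = sum(c.p_success * c.payoff - c.test_cost for c in worth_testing)

print([c.name for c in worth_testing])  # ['compound-A', 'compound-C']
print(expected_value)                   # 25.0
```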

Can Artificial Intelligence Improve Gender Equality? Evidence from a Natural Experiment


Paper by Zhengyang Bao and Difang Huang: “Gender stereotypes and discriminatory practices in the education system are important reasons for women’s under-representation in many fields. How can a gender-neutral learning environment be created when teachers’ gender composition and mindsets are slow to change? Recent developments in artificial intelligence (AI) provide a way to achieve this goal. Engineers can make AI trainers appear gender-neutral and take no gender-related information as input. We use data from a natural experiment in which AI trainers replace some human teachers for a male-dominated strategic board game to test the effectiveness of such AI training. The introduction of AI accelerates both boys’ and girls’ performance gains and reduces the pre-existing gender gap. Class recordings suggest that AI trainers’ gender-neutral emotional status can partly explain the improvement in gender equality. We provide the first evidence demonstrating AI’s potential to promote equality for society…(More)”.
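
The abstract does not spell out the estimation strategy; a minimal difference-in-differences sketch of the comparison such a natural experiment allows, with invented data and variable names, might look like this:

```python
# Minimal difference-in-differences sketch for the comparison this
# natural experiment allows. Data and variable names are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "improvement": [2.0, 1.2, 3.1, 2.9, 2.1, 1.4, 3.0, 3.2],
    "female":      [0,   1,   0,   1,   0,   1,   0,   1],
    "ai_trainer":  [0,   0,   1,   1,   0,   0,   1,   1],
})

# The female:ai_trainer coefficient captures how much the gender gap
# in improvement narrows when an AI trainer replaces a human teacher.
model = smf.ols("improvement ~ female * ai_trainer", data=df).fit()
print(model.params)
```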

What does AI Localism look like in action? A new series examining use cases on how cities govern AI


Series by Uma Kalkar, Sara Marcucci, Salwa Mansuri, and Stefaan Verhulst: “…We call local instances of AI governance ‘AI Localism.’ AI Localism refers to the governance actions—which include, but are not limited to, regulations, legislations, task forces, public committees, and locally-developed tools—taken by local decision-makers to address the use of AI within a city or regional state.

It is necessary to note, however, that the presence of AI Localism does not mean that robust national- and state-level AI policies are not needed. Whereas local governance is well suited to addressing local, micro-level issues, for instance by tailoring policies to specific AI use circumstances, national AI governance should act as a key tool to complement local efforts and provide cities with a cohesive, guiding direction.

Finally, it is important to note that AI Localism is not necessarily good governance of AI at the local level. Indeed, there have been several instances where local efforts to regulate and employ AI have encroached on public freedoms and hurt the public good…

Examining the current state of play in AI localism

To this end, The Governance Lab (The GovLab) has created the AI Localism project to collect a knowledge base and inform a taxonomy on the dimensions of local AI governance (see below). This initiative began in 2020 with the AI Localism canvas, which captures the frames under which local governance methods are developing. This series presents current examples of AI localism across the seven canvas frames of: 

  • Principles and Rights: foundational requirements and constraints of AI and algorithmic use in the public sector;
  • Laws and Policies: regulation to codify the above for public and private sectors;
  • Procurement: mandates around the use of AI in employment and hiring practices; 
  • Engagement: public involvement in AI use and limitations;
  • Accountability and Oversight: requirements for periodic reporting and auditing of AI use;
  • Transparency: consumer awareness about AI and algorithm use; and
  • Literacy: avenues to educate policymakers and the public about AI and data.

In this eight-part series, released weekly, we will present current examples of each frame of the AI localism canvas to identify themes among city- and state-led legislative actions. We end with ten lessons on AI localism for policymakers, data and AI experts, and the informed public to keep in mind as cities grow ever ‘smarter.’…(More)”.

AI Ethics: Global Perspectives


The AI Ethics: Global Perspectives course released three new video modules this week.

  • In “AI Ethics and Hate Speech”, Maha Jouini from the African Center for Artificial Intelligence and Digital Technology explores the intersection between AI and hate speech in the context of the MENA region.
  • Maxime Ducret at the University of Lyon and Carl Mörch from the FARI AI Institute for the Common Good introduce the ethical implications of the use of AI technologies in the field of dentistry in their module “How Your Teeth, Your Smile and AI Ethics are Related”.
  • And finally, in “Ethics in AI for Peace”, AI for Peace’s Branka Panic talks about how the “algo age” brought with it many technical, legal, and ethical questions that exceeded the scope of existing peacebuilding and peacetech ethics frameworks.

To watch these lectures in full and register for the course, visit our website.

The Low Threshold for Face Recognition in New Delhi


Article by Varsha Bansal: “Indian law enforcement is starting to place huge importance on facial recognition technology. Delhi police, seeking to identify people involved in civil unrest in northern India in the past few years, said that they would consider 80 percent accuracy and above as a “positive” match, according to documents obtained by the Internet Freedom Foundation through a public records request.

Facial recognition’s arrival in India’s capital region marks an expansion of Indian law enforcement’s use of facial recognition data as evidence for potential prosecution, ringing alarm bells among privacy and civil liberties experts. There are also concerns about the 80 percent accuracy threshold, which critics say is arbitrary and far too low, given the potential consequences for those marked as a match. India’s lack of a comprehensive data protection law makes matters even more concerning.

The documents further state that even if a match is under 80 percent, it would be considered a “false positive” rather than a negative, which would make that individual “subject to due verification with other corroborative evidence.”
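
To make the reported decision rule concrete, here is a minimal sketch of the thresholding logic as the documents describe it; this is our illustration, not actual police software.

```python
# Minimal sketch of the decision rule described in the documents.
# Illustrative only; not actual police software.
def classify_match(similarity: float, threshold: float = 0.80) -> str:
    """Label a face-similarity score under the reported 80 percent rule."""
    if similarity >= threshold:
        return "positive match"
    # Below-threshold scores are reportedly treated as "false positives"
    # still subject to verification, rather than as negatives.
    return "false positive (subject to corroborative verification)"

print(classify_match(0.83))  # positive match
print(classify_match(0.61))  # still investigated, per the documents
```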

“This means that even though facial recognition is not giving them the result that they themselves have decided is the threshold, they will continue to investigate,” says Anushka Jain, associate policy counsel for surveillance and technology with the IFF, who filed for this information. “This could lead to harassment of the individual just because the technology is saying that they look similar to the person the police are looking for.” She added that this move by the Delhi Police could also result in harassment of people from communities that have been historically targeted by law enforcement officials…(More)”