The EU wants to put companies on the hook for harmful AI


Article by Melissa Heikkilä: “The EU is creating new rules to make it easier to sue AI companies for harm. A bill unveiled this week, which is likely to become law in a couple of years, is part of Europe’s push to prevent AI developers from releasing dangerous systems. And while tech companies complain it could have a chilling effect on innovation, consumer activists say it doesn’t go far enough. 

Powerful AI technologies are increasingly shaping our lives, relationships, and societies, and their harms are well documented. Social media algorithms boost misinformation, facial recognition systems are often highly discriminatory, and predictive AI systems that are used to approve or reject loans can be less accurate for minorities.  

The new bill, called the AI Liability Directive, will add teeth to the EU’s AI Act, which is set to become EU law around the same time. The AI Act would require extra checks for “high risk” uses of AI that have the most potential to harm people, including systems for policing, recruitment, or health care. 

The new liability bill would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.

For example, job seekers who can prove that an AI system for screening résumés discriminated against them can ask a court to force the AI company to grant them access to information about the system so they can identify those responsible and find out what went wrong. Armed with this information, they can sue. 

The proposal still needs to snake its way through the EU’s legislative process, which will take a couple of years at least. It will be amended by members of the European Parliament and EU governments and will likely face intense lobbying from tech companies, which claim that such rules could have a “chilling” effect on innovation…(More)”.

AI Audit Washing and Accountability


Paper by Ellen P. Goodman and Julia Trehu: “Algorithmic decision systems, many using artificial intelligence, are reshaping the provision of private and public services across the globe. There is an urgent need for algorithmic governance. Jurisdictions are adopting or considering mandatory audits of these systems to assess compliance with legal and ethical standards or to provide assurance that the systems work as advertised. The hope is that audits will make public agencies and private firms accountable for the harms their algorithmic systems may cause, and thereby lead to harm reductions and more ethical tech. This hope will not be realized so long as the existing ambiguity around the term “audit” persists, and until audit standards are adequate and well-understood. The tacit expectation that algorithmic audits will function like established financial audits or newer human rights audits is fanciful at this stage. From the European Union, where algorithmic audit requirements are most advanced, to the United States, where they are nascent, core questions need to be addressed for audits to become reliable AI accountability mechanisms. In the absence of greater specification and more independent auditors, the risk is that AI auditing becomes AI audit washing. This paper first reports on proposed and enacted transatlantic AI or algorithmic audit provisions. It then draws on the technical, legal, and sociotechnical literature to address the who, what, why, and how of algorithmic audits, contributing to the literature advancing algorithmic governance…(More)”.

How to stop our cities from being turned into AI jungles


Stefaan G. Verhulst at The Conversation: “As artificial intelligence grows more ubiquitous, its potential and the challenges it presents are coming increasingly into focus. How we balance the risks and opportunities is shaping up as one of the defining questions of our era. In much the same way that cities have emerged as hubs of innovation in culture, politics, and commerce, so too are they defining the frontiers of AI governance.

Some examples of how cities have been taking the lead include the Cities Coalition for Digital Rights, the Montreal Declaration for Responsible AI, and the Open Dialogue on AI Ethics. Others can be found in San Francisco’s ban on facial-recognition technology, and in New York City’s push to regulate the sale of automated hiring systems and its creation of an algorithms management and policy officer. Urban institutes, universities and other educational centres have also been forging ahead with a range of AI ethics initiatives.

These efforts point to an emerging paradigm that has been referred to as AI Localism. It’s part of a larger phenomenon often called New Localism, which involves cities taking the lead in regulation and policymaking to develop context-specific approaches to a variety of problems and challenges. We have also seen an increased uptake of city-centric approaches within international law frameworks.

Below are ten principles to help systematise our approach to AI Localism. Considered together, they add up to an incipient framework for implementing and assessing initiatives around the world:…(More)”.

Working with AI: Real Stories of Human-Machine Collaboration


Book by Thomas H. Davenport and Steven M. Miller: “This book breaks through both the hype and the doom-and-gloom surrounding automation and the deployment of artificial intelligence-enabled—“smart”—systems at work. Management and technology experts Thomas Davenport and Steven Miller show that, contrary to widespread predictions, prescriptions, and denunciations, AI is not primarily a job destroyer. Rather, AI changes the way we work—by taking over some tasks but not entire jobs, freeing people to do other, more important and more challenging work. By offering detailed, real-world case studies of AI-augmented jobs in settings that range from finance to the factory floor, Davenport and Miller also show that AI in the workplace is not the stuff of futuristic speculation. It is happening now to many companies and workers. These cases include a digital system for life insurance underwriting that analyzes applications and third-party data in real time, allowing human underwriters to focus on more complex cases; an intelligent telemedicine platform with a chat-based interface; a machine-learning system that identifies impending train maintenance issues by analyzing diesel fuel samples; and Flippy, a robotic assistant for fast food preparation. For each one, Davenport and Miller describe in detail the work context for the system, interviewing job incumbents, managers, and technology vendors. Short “insight” chapters draw out common themes and consider the implications of human collaboration with smart systems…(More)”.

Essential Elements and Ethical Principles for Trustworthy Artificial Intelligence Adoption in Courts


Paper by Carlos E. Jimenez-Gomez and Jesus Cano Carrillo: “Tasks in courts have rapidly evolved from manual to digital work. In these innovation processes, theory and practice have demonstrated that adopting technology per se is not the right path. Innovation in courts requires specific plans for digital transformation, including analysis, programmatic changes, or skills. Artificial Intelligence (AI) is no exception.

The use of AI in courts is not futuristic. From efficiency to decision-making support, AI-based tools are already being used by U.S. courts. To cite some examples, AI tools allow the discovery of divergences, disparities, and dissonances in jurisdictional activity. At a higher level, AI helps improve internal organization. AI supports the consistency of judicial decisions by exploiting a large judicial knowledge base in the form of big data, and it makes the judge’s work more agile through pattern and linguistic recognition in documents, identifying schemes and conceptualizations.

AI could bring considerable benefits to the judicial system. However, the risks and challenges are also enormous, posing unique hurdles for user trust…

This article defines AI in relation to courts to understand challenges and implications and reviews AI components with a special focus on characteristics of trustworthy AI. It also examines the importance of a new policy and regulatory framework, and makes recommendations to avoid major problems…(More)”.
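
The excerpt’s mention of tools that surface “divergences, disparities, and dissonances” across rulings suggests one concrete shape such tooling could take. As a hedged illustration, not drawn from the paper, here is a minimal sketch that flags pairs of decisions with near-identical fact patterns for human consistency review; the case texts, identifiers, and threshold are all invented:

```python
# Hypothetical sketch: flag pairs of similar judicial decisions for human
# consistency review. Sample texts, case IDs, and the threshold are invented;
# the paper itself does not specify an implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

decisions = {
    "2021-cv-014": "Tenant withheld rent citing unrepaired heating; court found for tenant.",
    "2021-cv-102": "Tenant withheld rent over broken heating; court found for landlord.",
    "2021-cv-233": "Contract dispute over late delivery of industrial equipment.",
}

ids = list(decisions)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(decisions.values())
sim = cosine_similarity(tfidf)

# Surface highly similar fact patterns; a human reviewer then checks whether
# the outcomes diverge (the "dissonances" the excerpt describes).
THRESHOLD = 0.3
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        if sim[i, j] >= THRESHOLD:
            print(f"Review pair {ids[i]} / {ids[j]} (similarity {sim[i, j]:.2f})")
```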

Artificial Intelligence Needs Both Pragmatists and Blue-Sky Visionaries


Essay by Ben Shneiderman: “Artificial intelligence thinkers seem to emerge from two communities. One is what I call blue-sky visionaries who speculate about the future possibilities of the technology, invoking utopian fantasies to generate excitement. Blue-sky ideas are compelling but are often clouded over by unrealistic visions and the ethical challenges of what can and should be built.

In contrast, what I call muddy-boots pragmatists are problem- and solution-focused. They want to reduce the harms that widely used AI-infused systems can create. They focus on fixing biased and flawed systems, such as facial recognition systems that often mistakenly identify people as criminals or violate privacy. The pragmatists want to reduce deadly medical mistakes that AI can make, and steer self-driving cars to be safe-driving cars. Their goal is also to improve AI-based decisions about mortgage loans, college admissions, job hiring and parole granting.

As a computer science professor with a long history of designing innovative applications that have been widely implemented, I believe that the blue-sky visionaries would benefit by taking in the thoughtful messages of the muddy-boots realists. Combining the work of both camps is more likely to produce the beneficial outcomes that will lead to successful next-generation technologies.

While the futuristic thinking of the blue-sky speculators sparks our awe and earns much of the funding, muddy-boots thinking reminds us that some AI applications threaten privacy, spread misinformation and are decidedly racist, sexist and otherwise ethically dubious. Machines are undeniably part of our future, but will they serve all future humans equally? I think the caution and practicality of the muddy-boots camp will benefit humanity in the short and long run by ensuring diversity and equality in the development of the algorithms that increasingly run our day-to-day lives. If blue-sky thinkers integrate the concerns of muddy-boots realists into their designs, they can create future technologies that are more likely to advance human values, rights and dignity…(More)”.

Supporting peace negotiations in the Yemen war through machine learning


Paper by Miguel Arana-Catania, Felix-Anselm van Lier and Rob Procter: “Today’s conflicts are becoming increasingly complex, fluid, and fragmented, often involving a host of national and international actors with multiple and often divergent interests. This development poses significant challenges for conflict mediation, as mediators struggle to make sense of conflict dynamics, such as the range of conflict parties and the evolution of their political positions, the distinction between relevant and less relevant actors in peace-making, or the identification of key conflict issues and their interdependence. International peace efforts appear ill-equipped to successfully address these challenges. While technology is already being experimented with and used in a range of conflict-related fields, such as conflict prediction or information gathering, less attention has been given to how technology can contribute to conflict mediation. This case study contributes to emerging research on the use of state-of-the-art machine learning technologies and techniques in conflict mediation processes. Using dialogue transcripts from peace negotiations in Yemen, this study shows how machine learning can effectively support mediating teams by providing them with tools for knowledge management, extraction and conflict analysis. Apart from illustrating the potential of machine learning tools in conflict mediation, the article also emphasizes the importance of an interdisciplinary and participatory co-creation methodology for the development of context-sensitive and targeted tools and to ensure meaningful and responsible implementation…(More)”.
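
The paper does not publish its pipeline, but a minimal sketch can suggest the kind of transcript analysis it describes: factoring negotiation segments into recurring “issues” with TF-IDF features and non-negative matrix factorization. The transcript snippets below are invented, not from the Yemen data:

```python
# Hypothetical sketch of one tool in this vein: surfacing recurring conflict
# issues from negotiation transcript segments. The snippets are invented; the
# paper does not publish its data or exact pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

segments = [
    "Delegates disputed control of the port and customs revenue sharing.",
    "Discussion of prisoner exchange lists and verification of detainees.",
    "Customs revenue and port access raised again by the southern delegation.",
    "Ceasefire monitoring and withdrawal timelines around the port city.",
    "Detainee releases tied to confidence-building before the next round.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(segments)

# Factor the segment-term matrix into a small number of latent "issues".
nmf = NMF(n_components=2, random_state=0)
weights = nmf.fit_transform(X)               # segment-by-issue strengths
terms = vectorizer.get_feature_names_out()

for k, component in enumerate(nmf.components_):
    top_terms = [terms[i] for i in component.argsort()[-4:][::-1]]
    strongest = segments[weights[:, k].argmax()]
    print(f"Issue {k}: {', '.join(top_terms)}")
    print(f'  e.g. "{strongest}"')
```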

Superhuman science: How artificial intelligence may impact innovation


Working paper by Ajay Agrawal, John McHale, and Alexander Oettl: “New product innovation in fields like drug discovery and material science can be characterized as combinatorial search over a vast range of possibilities. Modeling innovation as a costly multi-stage search process, we explore how improvements in Artificial Intelligence (AI) could affect the productivity of the discovery pipeline in allowing improved prioritization of innovations that flow through that pipeline. We show how AI-aided prediction can increase the expected value of innovation and can increase or decrease the demand for downstream testing, depending on the type of innovation, and examine how AI can reduce costs associated with well-defined bottlenecks in the discovery pipeline. Finally, we discuss the critical role that policy can play to mitigate potential market failures associated with access to and provision of data as well as the provision of training necessary to more closely approach the socially optimal level of productivity enhancing innovations enabled by this technology…(More)”.
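
To make the modeled mechanism concrete, here is a toy simulation, not the authors’ model, of a discovery pipeline in which a noisy AI predictor decides which candidates enter costly downstream testing; all parameters are illustrative:

```python
# Toy simulation (not the authors' model): better AI prediction raises the
# net value of a costly discovery pipeline. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, BUDGET, TEST_COST = 10_000, 100, 1.0   # candidates, affordable tests, cost per test

true_value = rng.exponential(scale=1.0, size=N)   # payoff if a candidate pans out

def pipeline_value(prediction_noise: float) -> float:
    """Screen with a noisy predictor, test the top candidates, sum net payoff."""
    prediction = true_value + rng.normal(scale=prediction_noise, size=N)
    tested = np.argsort(prediction)[-BUDGET:]     # prioritise highest predicted value
    return true_value[tested].sum() - BUDGET * TEST_COST

for noise in (5.0, 1.0, 0.1):   # worse -> better prediction
    print(f"prediction noise {noise:>4}: net pipeline value {pipeline_value(noise):8.1f}")
```

Under these toy assumptions, sharper prediction concentrates the fixed testing budget on higher-value candidates and raises the pipeline’s net value, one way to see the paper’s point that improved prioritization changes what flows into downstream testing.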

Can Artificial Intelligence Improve Gender Equality? Evidence from a Natural Experiment


Paper by Zhengyang Bao and Difang Huang: “Gender stereotypes and discriminatory practices in the education system are important reasons for women’s under-representation in many fields. How can we create a gender-neutral learning environment when teachers’ gender composition and mindset are slow to change? The recent development of artificial intelligence (AI) provides a way to achieve this goal. Engineers can make AI trainers appear gender-neutral and not take gender-related information as input. We use data from a natural experiment where AI trainers replace some human teachers for a male-dominated strategic board game to test the effectiveness of such AI training. The introduction of AI improves boys’ and girls’ performance faster and reduces the pre-existing gender gap. Class recordings suggest that AI trainers’ gender-neutral emotional status can partly explain the improvement in gender equality. We provide the first evidence demonstrating AI’s potential to promote equality for society…(More)”.
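
The excerpt does not spell out the estimation strategy, so the following is only a schematic gap comparison on invented data: compute the boy-girl performance gap separately for AI-trained and human-taught classes and take the difference (the column names and ratings are hypothetical):

```python
# Schematic gap comparison on invented data; the paper's actual estimation
# strategy is not given in the excerpt above.
import pandas as pd

df = pd.DataFrame({
    "girl":       [1, 1, 0, 0, 1, 1, 0, 0],
    "ai_trainer": [1, 1, 1, 1, 0, 0, 0, 0],          # 1 = taught by an AI trainer
    "rating":     [62, 64, 66, 67, 50, 52, 60, 61],  # invented game ratings
})

means = df.groupby(["ai_trainer", "girl"])["rating"].mean()
gap_ai    = means[(1, 0)] - means[(1, 1)]   # boy-girl gap with AI trainers
gap_human = means[(0, 0)] - means[(0, 1)]   # boy-girl gap with human teachers

print(f"gap with human teachers: {gap_human:.1f}")
print(f"gap with AI trainers:    {gap_ai:.1f}")
print(f"estimated gap reduction: {gap_human - gap_ai:.1f}")
```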

What does AI Localism look like in action? A new series examining use cases on how cities govern AI


Series by Uma Kalkar, Sara Marcucci, Salwa Mansuri, and Stefaan Verhulst: “…We call local instances of AI governance ‘AI Localism.’ AI Localism refers to the governance actions—which include, but are not limited to, regulations, legislation, task forces, public committees, and locally-developed tools—taken by local decision-makers to address the use of AI within a city or regional state.

It is necessary to note, however, that the presence of AI Localism does not mean that robust national- and state-level AI policy is not needed. Whereas local governance is fundamental for addressing local, micro-level issues, for instance by tailoring policies to specific AI use circumstances, national AI governance should act as a key tool that complements local efforts and provides cities with a cohesive, guiding direction.

Finally, it is important to note that AI Localism is not necessarily good governance of AI at the local level. Indeed, there have been several instances where local efforts to regulate and employ AI have encroached on public freedoms and hurt the public good…

Examining the current state of play in AI localism

To this end, The Governance Lab (The GovLab) has created the AI Localism project to build a knowledge base and inform a taxonomy of the dimensions of local AI governance (see below). This initiative began in 2020 with the AI Localism canvas, which captures the frames under which local governance methods are developing. This series presents current examples of AI localism across the seven canvas frames of:

  • Principles and Rights: foundational requirements and constraints of AI and algorithmic use in the public sector;
  • Laws and Policies: regulation to codify the above for public and private sectors;
  • Procurement: mandates around the use of AI in employment and hiring practices; 
  • Engagement: public involvement in AI use and limitations;
  • Accountability and Oversight: requirements for periodic reporting and auditing of AI use;
  • Transparency: consumer awareness about AI and algorithm use; and
  • Literacy: avenues to educate policymakers and the public about AI and data.

In this eight-part series, released weekly, we will present current examples of each frame of the AI localism canvas to identify themes among city- and state-led legislative actions. We end with ten lessons on AI localism for policymakers, data and AI experts, and the informed public to keep in mind as cities grow increasingly ‘smarter.’…(More)”.