Public procurement of artificial intelligence systems: new risks and future proofing


Paper by Merve Hickok: “Public entities around the world are increasingly deploying artificial intelligence (AI) and algorithmic decision-making systems to provide public services or to use their enforcement powers. The rationale for the public sector to use these systems is similar to that of the private sector: to increase the efficiency and speed of transactions and to lower costs. However, public entities are first and foremost established to meet the needs of the members of society and protect the safety, fundamental rights, and wellbeing of those they serve. Currently, AI systems are deployed by the public sector at various administrative levels without robust due diligence, monitoring, or transparency. This paper critically maps out the challenges in procurement of AI systems by public entities and the long-term implications necessitating AI-specific procurement guidelines and processes. This dual-prong exploration includes the new complexities and risks introduced by AI systems, and the institutional capabilities impacting the decision-making process. AI-specific public procurement guidelines are urgently needed to protect fundamental rights and due process…(More)”.

Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans


Paper by John Nay: “Artificial Intelligence (AI) capabilities are rapidly advancing. Highly capable AI could cause radically different futures depending on how it is developed and deployed. We are unable to specify human goals and societal values in a way that reliably directs AI behavior. Specifying the desirability (value) of an AI system taking a particular action in a particular state of the world is unwieldy beyond a very limited set of value-action-states. The purpose of machine learning is to train on a subset of states and have the resulting agent generalize an ability to choose high value actions in unencountered circumstances. But the function ascribing values to an agent’s actions during training is inevitably an incredibly incomplete encapsulation of human values, and the training process is a sparse exploration of states pertinent to all possible futures. Therefore, after training, AI is deployed with a coarse map of human preferred territory and will often choose actions unaligned with our preferred paths.
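
To see why exhaustive value specification is unwieldy, consider a minimal sketch (ours, not the paper's; every state, action, and value below is hypothetical) of an explicit value table over state-action pairs, and how quickly the unspecified region outgrows it:

```python
# Hypothetical illustration of the value-specification problem described
# above: desirability is defined explicitly for only a few (state, action)
# pairs, yet a deployed agent faces unboundedly many.
from itertools import product

# An explicit, hand-written value table -- feasible only at toy scale.
VALUES = {
    ("pedestrian_ahead", "brake"):      +1.0,
    ("pedestrian_ahead", "accelerate"): -1.0,
    ("road_clear",       "accelerate"): +0.5,
}

states = ["pedestrian_ahead", "road_clear", "icy_road", "sensor_failure"]
actions = ["brake", "accelerate", "swerve"]

unspecified = [sa for sa in product(states, actions) if sa not in VALUES]
print(f"{len(unspecified)} of {len(states) * len(actions)} "
      "state-action pairs have no specified value")
# With realistic state and action spaces, the unspecified region dwarfs the
# table, so the agent's behavior there is whatever its training-time
# generalization happens to produce -- the "coarse map" described above.
```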

Law-making and legal interpretation form a computational engine that converts opaque human intentions and values into legible directives. Law Informs Code is the research agenda capturing complex computational legal processes and embedding them in AI. Similar to how parties to a legal contract cannot foresee every potential “if-then” contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify “if-then” rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations, i.e., to generalize expectations regarding actions taken to unspecified states of the world. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), Law Informs Code leverages law as an expression of how humans communicate their goals and of what society values.

We describe how data generated by legal processes and the practices of law (methods of law-making, statutory interpretation, contract drafting, applications of standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment, harnessing public law as an up-to-date knowledge base of democratically endorsed values ascribed to state-action pairs. Although law is partly a reflection of historically contingent political power – and thus not a perfect aggregation of citizen preferences – if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. Other data sources suggested for AI alignment – surveys of preferences, humans labeling “ethical” situations, or (most commonly) the implicit beliefs of the AI system designers – lack an authoritative source of synthesized preference aggregation. Law is grounded in a verifiable resolution: ultimately obtained from a court opinion, but short of that, elicited from legal experts. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning…(More)”.
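
Read computationally, one way to picture the proposal (our sketch under stated assumptions, not an implementation from the paper) is a value lookup in which parsed public law is the authoritative table and unresolved state-action pairs are escalated for expert elicitation, mirroring the paper's hierarchy of court opinions over expert judgment; all entries and names below are hypothetical:

```python
# Hedged sketch of "law as a knowledge base of values ascribed to
# state-action pairs": lookups resolve first against (hypothetically)
# parsed statutes, then fall back to elicited expert judgment.
from typing import Callable, Dict, Tuple

StateAction = Tuple[str, str]

# Values distilled from hypothetical parsed public law.
STATUTORY_VALUES: Dict[StateAction, float] = {
    ("collecting_biometric_data", "obtain_consent"): +1.0,
    ("collecting_biometric_data", "skip_consent"):   -1.0,
}

def expert_elicitation(sa: StateAction) -> float:
    """Stand-in for asking legal experts when the parsed law is silent."""
    print(f"escalating {sa} to legal experts")
    return 0.0  # placeholder neutral value pending authoritative resolution

def legal_value(sa: StateAction,
                fallback: Callable[[StateAction], float] = expert_elicitation
                ) -> float:
    if sa in STATUTORY_VALUES:
        return STATUTORY_VALUES[sa]  # democratically endorsed value
    return fallback(sa)              # escalate when the law is silent

print(legal_value(("collecting_biometric_data", "skip_consent")))  # -1.0
print(legal_value(("novel_situation", "untested_action")))         # escalates
```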

Google’s new AI can hear a snippet of song—and then keep on playing


Article by Tammy Xu: “The new AI system can generate natural sounds and voices after being prompted with a few seconds of audio.

AudioLM, developed by Google researchers, produces sounds that match the style of the prompt, including complex sounds like piano music or human voices, in a way that is nearly indistinguishable from the original recording. The technique shows promise in terms of speeding up the training of AI to generate audio, and it could eventually be used to automatically generate music to accompany videos.

AI-generated audio has become ubiquitous: the voices of home assistants like Alexa use natural language processing. AI music systems like OpenAI’s Jukebox have produced impressive results, but most existing techniques require people to prepare transcriptions and label text-based training data, which takes considerable time and human labor. Jukebox, for example, uses text-based data to generate song lyrics.

AudioLM, described in a non-peer-reviewed paper last month, is different: it requires no transcription or labeling. Instead, an audio database is fed into the program, and machine learning is used to compress the audio files into snippets, called “tokens,” without losing too much information. This tokenized training data is then fed into a machine-learning model that uses natural language processing to learn the patterns of the sound.
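
To make the tokenize-then-model idea concrete, here is a toy sketch of the first step (our illustration, not Google's pipeline, which uses learned neural audio codecs; every name and parameter below is hypothetical), with k-means over short waveform frames standing in for the codec that turns audio into discrete tokens:

```python
# Toy stand-in for AudioLM-style audio tokenization: k-means over short
# waveform frames plays the role of the learned codec that compresses
# audio into discrete "tokens". This only illustrates the idea.
import numpy as np
from sklearn.cluster import KMeans

SAMPLE_RATE = 16_000
FRAME = 320        # 20 ms frames at 16 kHz
VOCAB_SIZE = 64    # size of the discrete token vocabulary

# Synthetic "recording": four seconds of a slowly wobbling sine wave.
t = np.arange(SAMPLE_RATE * 4) / SAMPLE_RATE
wave = np.sin(2 * np.pi * (220 + 30 * np.sin(2 * np.pi * 0.5 * t)) * t)

# Chop the waveform into fixed-length frames and cluster them.
frames = wave[: len(wave) // FRAME * FRAME].reshape(-1, FRAME)
codec = KMeans(n_clusters=VOCAB_SIZE, n_init=10, random_state=0).fit(frames)

tokens = codec.predict(frames)  # the audio, rewritten as a token sequence
print(tokens[:20])

# "Decoding" a token back to audio is just its cluster centroid -- very
# lossy here; a real neural codec preserves far more information per token.
reconstruction = codec.cluster_centers_[tokens].reshape(-1)
```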

To generate audio, a few seconds of sound are fed into AudioLM, which then predicts what comes next. The process is similar to the way language models like GPT-3 predict which sentences and words typically follow one another.
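
Continuation then works like text generation: train a next-token predictor over the token sequence and sample forward from a prompt. Below is a minimal PyTorch sketch under the same toy assumptions (a tiny LSTM, not AudioLM's actual architecture; the random tokens stand in for the codec tokens from the sketch above):

```python
# Minimal autoregressive model over audio tokens, mirroring how a text
# language model predicts the next word. Toy architecture and data.
import numpy as np
import torch
import torch.nn as nn

VOCAB_SIZE = 64
rng = np.random.default_rng(0)
tokens = rng.integers(0, VOCAB_SIZE, size=200)  # stand-in token sequence

class TokenLM(nn.Module):
    def __init__(self, vocab: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        hidden, _ = self.rnn(self.embed(ids))
        return self.head(hidden)  # logits for the next token at each step

model = TokenLM(VOCAB_SIZE)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seq = torch.tensor(tokens, dtype=torch.long).unsqueeze(0)

for _ in range(200):  # train: predict token t+1 from tokens up to t
    logits = model(seq[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB_SIZE), seq[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Prompt" with a few seconds' worth of tokens, then predict what's next.
prompt = seq[:, :50]
for _ in range(100):
    next_id = model(prompt)[:, -1].argmax(dim=-1, keepdim=True)
    prompt = torch.cat([prompt, next_id], dim=1)
# `prompt` now holds the original snippet plus a generated continuation,
# which a codec decoder would map back to audio.
```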

Sound clips released by the team sound quite natural. In particular, piano music generated with AudioLM sounds more fluid than piano music generated with existing AI techniques, which tends to sound chaotic…(More)”.

Can critical policy studies outsmart AI? Research agenda on artificial intelligence technologies and public policy


Paper by Regine Paul: “The insertion of artificial intelligence technologies (AITs) and data-driven automation in public policymaking should be a metaphorical wake-up call for critical policy analysts. Both its wide representation as techno-solutionist remedy in otherwise slow, inefficient, and biased public decision-making and its regulation as a matter of rational risk analysis are conceptually flawed and democratically problematic. To ‘outsmart’ AI, this article stimulates the articulation of a critical research agenda on AITs and public policy, outlining three interconnected lines of inquiry for future research: (1) interpretivist disclosure of the norms and values that shape perceptions and uses of AITs in public policy, (2) exploration of AITs in public policy as a contingent practice of complex human-machine interactions, and (3) emancipatory critique of how ‘smart’ governance projects and AIT regulation interact with (global) inequalities and power relations…(More)”.

AI & Cities: Risks, Applications and Governance


Report by UN Habitat: “Artificial intelligence is manifesting at an unprecedented rate in urban centers, often with significant risks and little oversight. Using AI technologies without the appropriate governance mechanisms and without adequate consideration of how they affect people’s human rights can have negative, even catastrophic, effects.

This report is part of UN-Habitat’s strategy for guiding local authorities in realizing a people-centered digital transformation process in their cities and settlements…(More)”.

Blueprint for an AI Bill of Rights


The White House: “…To advance President Biden’s vision, the White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by From Principles to Practice—a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.

  • Safe and Effective Systems
  • Algorithmic Discrimination Protections
  • Data Privacy
  • Notice and Explanation
  • Human Alternatives, Consideration, and Fallback…(More)”.

The EU wants to put companies on the hook for harmful AI


Article by Melissa Heikkilä: “The EU is creating new rules to make it easier to sue AI companies for harm. A bill unveiled this week, which is likely to become law in a couple of years, is part of Europe’s push to prevent AI developers from releasing dangerous systems. And while tech companies complain it could have a chilling effect on innovation, consumer activists say it doesn’t go far enough. 

Powerful AI technologies are increasingly shaping our lives, relationships, and societies, and their harms are well documented. Social media algorithms boost misinformation, facial recognition systems are often highly discriminatory, and predictive AI systems that are used to approve or reject loans can be less accurate for minorities.  

The new bill, called the AI Liability Directive, will add teeth to the EU’s AI Act, which is set to become EU law around the same time. The AI Act would require extra checks for “high risk” uses of AI that have the most potential to harm people, including systems for policing, recruitment, or health care. 

The new liability bill would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.

For example, job seekers who can prove that an AI system for screening résumés discriminated against them can ask a court to force the AI company to grant them access to information about the system so they can identify those responsible and find out what went wrong. Armed with this information, they can sue. 

The proposal still needs to snake its way through the EU’s legislative process, which will take a couple of years at least. It will be amended by members of the European Parliament and EU governments and will likely face intense lobbying from tech companies, which claim that such rules could have a “chilling” effect on innovation…(More)”.

AI Audit Washing and Accountability


Paper by Ellen P. Goodman and Julia Trehu: “Algorithmic decision systems, many using artificial intelligence, are reshaping the provision of private and public services across the globe. There is an urgent need for algorithmic governance. Jurisdictions are adopting or considering mandatory audits of these systems to assess compliance with legal and ethical standards or to provide assurance that the systems work as advertised. The hope is that audits will make public agencies and private firms accountable for the harms their algorithmic systems may cause, and thereby lead to harm reductions and more ethical tech. This hope will not be realized so long as the existing ambiguity around the term “audit” persists, and until audit standards are adequate and well-understood. The tacit expectation that algorithmic audits will function like established financial audits or newer human rights audits is fanciful at this stage. From the European Union, where algorithmic audit requirements are most advanced, to the United States, where they are nascent, core questions need to be addressed for audits to become reliable AI accountability mechanisms. In the absence of greater specification and more independent auditors, the risk is that AI auditing becomes AI audit washing. This paper first reports on proposed and enacted transatlantic AI or algorithmic audit provisions. It then draws on the technical, legal, and sociotechnical literature to address the who, what, why, and how of algorithmic audits, contributing to the literature advancing algorithmic governance…(More)“.

How to stop our cities from being turned into AI jungles


Stefaan G. Verhulst at The Conversation: “As artificial intelligence grows more ubiquitous, its potential and the challenges it presents are coming increasingly into focus. How we balance the risks and opportunities is shaping up as one of the defining questions of our era. In much the same way that cities have emerged as hubs of innovation in culture, politics, and commerce, so too are they defining the frontiers of AI governance.

Some examples of how cities have been taking the lead include the Cities Coalition for Digital Rights, the Montreal Declaration for Responsible AI, and the Open Dialogue on AI Ethics. Others can be found in San Francisco’s ban of facial-recognition technology, and New York City’s push for regulating the sale of automated hiring systems and creation of an algorithms management and policy officer. Urban institutes, universities and other educational centres have also been forging ahead with a range of AI ethics initiatives.

These efforts point to an emerging paradigm that has been referred to as AI Localism. It’s a part of a larger phenomenon often called New Localism, which involves cities taking the lead in regulation and policymaking to develop context-specific approaches to a variety of problems and challenges. We have also seen an increased uptake of city-centric approaches within international law frameworks.

Below are ten principles to help systematise our approach to AI Localism. Considered together, they add up to an incipient framework for implementing and assessing initiatives around the world:…(More)”.

Working with AI: Real Stories of Human-Machine Collaboration


Book by Thomas H. Davenport and Steven M. Miller: “This book breaks through both the hype and the doom-and-gloom surrounding automation and the deployment of artificial intelligence-enabled—“smart”—systems at work. Management and technology experts Thomas Davenport and Steven Miller show that, contrary to widespread predictions, prescriptions, and denunciations, AI is not primarily a job destroyer. Rather, AI changes the way we work—by taking over some tasks but not entire jobs, freeing people to do other, more important and more challenging work. By offering detailed, real-world case studies of AI-augmented jobs in settings that range from finance to the factory floor, Davenport and Miller also show that AI in the workplace is not the stuff of futuristic speculation. It is happening now to many companies and workers. These cases include a digital system for life insurance underwriting that analyzes applications and third-party data in real time, allowing human underwriters to focus on more complex cases; an intelligent telemedicine platform with a chat-based interface; a machine-learning system that identifies impending train maintenance issues by analyzing diesel fuel samples; and Flippy, a robotic assistant for fast-food preparation. For each one, Davenport and Miller describe in detail the work context for the system, interviewing job incumbents, managers, and technology vendors. Short “insight” chapters draw out common themes and consider the implications of human collaboration with smart systems…(More)”.