Paper by Merve Hickok: “Public entities around the world are increasingly deploying artificial intelligence (AI) and algorithmic decision-making systems to provide public services or to use their enforcement powers. The rationale for the public sector to use these systems is similar to that of the private sector: to increase the efficiency and speed of transactions and to lower costs. However, public entities are first and foremost established to meet the needs of the members of society and protect the safety, fundamental rights, and wellbeing of those they serve. Currently, AI systems are deployed by the public sector at various administrative levels without robust due diligence, monitoring, or transparency. This paper critically maps out the challenges in the procurement of AI systems by public entities and the long-term implications necessitating AI-specific procurement guidelines and processes. This dual-pronged exploration includes the new complexities and risks introduced by AI systems, and the institutional capabilities impacting the decision-making process. AI-specific public procurement guidelines are urgently needed to protect fundamental rights and due process…(More)”.
Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans
Paper by John Nay: “Artificial Intelligence (AI) capabilities are rapidly advancing. Highly capable AI could cause radically different futures depending on how it is developed and deployed. We are unable to specify human goals and societal values in a way that reliably directs AI behavior. Specifying the desirability (value) of an AI system taking a particular action in a particular state of the world is unwieldy beyond a very limited set of value-action-states. The purpose of machine learning is to train on a subset of states and have the resulting agent generalize an ability to choose high-value actions in unencountered circumstances. But the function ascribing values to an agent’s actions during training is inevitably an incredibly incomplete encapsulation of human values, and the training process is a sparse exploration of states pertinent to all possible futures. Therefore, after training, AI is deployed with a coarse map of human-preferred territory and will often choose actions unaligned with our preferred paths.
Law-making and legal interpretation form a computational engine that converts opaque human intentions and values into legible directives. Law Informs Code is the research agenda capturing these complex computational legal processes and embedding them in AI. Similar to how parties to a legal contract cannot foresee every potential “if-then” contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify “if-then” rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations, i.e., to generalize expectations regarding actions taken to unspecified states of the world. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), Law Informs Code leverages law as an expression of how humans communicate their goals and of what society values.
We describe how data generated by legal processes and the practices of law (methods of law-making, statutory interpretation, contract drafting, applications of standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment, harnessing public law as an up-to-date knowledge base of democratically endorsed values ascribed to state-action pairs. Although law is partly a reflection of historically contingent political power – and thus not a perfect aggregation of citizen preferences – if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. Other data sources suggested for AI alignment – surveys of preferences, humans labeling “ethical” situations, or (most commonly) the implicit beliefs of the AI system designers – lack an authoritative source of synthesized preference aggregation. Law is grounded in a verifiable resolution: ultimately obtained from a court opinion, but short of that, elicited from legal experts. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning…(More)”.
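The specification problem the abstract describes can be made concrete with a toy example. Below is a minimal Python sketch, not from the paper: a reward function covers only a few value-action-states, and every state, action, and value is invented for illustration. In a state the training signal never covered, the agent's generalization, not human intent, determines the choice.

```python
# Toy illustration of the value-specification problem: a sparse reward
# table assigns values to a handful of (state, action) pairs, and the
# agent must generalize to states the table never mentions.

REWARDS = {
    # (state, action): value assigned during training (all invented)
    ("road_clear", "drive"): 1.0,
    ("red_light", "stop"): 1.0,
    ("red_light", "drive"): -1.0,
}

def generalized_value(state: str, action: str) -> float:
    """Crude generalization: if the exact pair was never specified, fall
    back to the action's average value across all trained states."""
    if (state, action) in REWARDS:
        return REWARDS[(state, action)]
    seen = [v for (_s, a), v in REWARDS.items() if a == action]
    return sum(seen) / len(seen) if seen else 0.0

# An unencountered state: the sparse training signal says nothing about
# it, so the generalization rule, not human intent, decides the action.
novel_state = "highway_on_ramp"
best = max(["drive", "stop"], key=lambda a: generalized_value(novel_state, a))
print(novel_state, "->", best)  # picks "stop" (avg 1.0 beats avg 0.0)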
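```

Here the averaged signal tells the agent to stop on a highway on-ramp, a plausibly harmful action the sparse reward table never anticipated; the paper's argument is that legal tools such as standards exist precisely to fill gaps like this one.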
Google’s new AI can hear a snippet of song—and then keep on playing
Article by Tammy Xu: “The new AI system can generate natural sounds and voices after being prompted with a few seconds of audio.
AudioLM, developed by Google researchers, generates audio that matches the style of the prompt, including complex sounds like piano music or human voices, in a way that is nearly indistinguishable from the original recording. The technique shows promise for speeding up the process of training AI to generate audio, and it could eventually be used to automatically generate music to accompany videos.
AI-generated audio is already ubiquitous: the voices of home assistants like Alexa use natural language processing. AI music systems like OpenAI’s Jukebox have produced impressive results, but most existing techniques require people to prepare transcriptions and label text-based training data, which takes considerable time and human labor. Jukebox, for example, uses text-based data to generate song lyrics.
AudioLM, described in a non-peer-reviewed paper last month, is different: it doesn’t require transcription or labeling. Instead, an audio database is fed into the program, and machine learning is used to compress the audio files into sound snippets, called “tokens,” without losing too much information. This tokenized training data is then fed into a machine-learning model that uses natural language processing to learn the patterns of the audio.
To generate the audio, a few seconds of sound are fed into AudioLM, which then predicts what comes next. The process is similar to the way language models like GPT-3 predict which sentences and words typically follow one another.
The sound clips released by the team sound quite natural. In particular, piano music generated with AudioLM sounds more fluid than piano music generated with existing AI techniques, which tends to sound chaotic…(More)”.
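As an editor's illustration of the next-token mechanism the article compares to GPT-3 (this is not Google's code, and the integers below merely stand in for AudioLM's learned audio tokens), a bigram model makes the idea concrete: count which token follows which in the training data, then extend a prompt by sampling.

```python
import random
from collections import Counter, defaultdict

# Stand-in "training audio" as a sequence of discrete tokens. Real AudioLM
# tokens come from a neural codec; these integers are invented.
training_tokens = [1, 2, 3, 1, 2, 4, 1, 2, 3, 1, 2, 4, 1, 2, 3]

# Count which token tends to follow each token (a bigram model).
follows = defaultdict(Counter)
for cur, nxt in zip(training_tokens, training_tokens[1:]):
    follows[cur][nxt] += 1

def continue_sequence(prompt, length=8):
    """Extend a short 'audio prompt' by repeatedly sampling a likely next token."""
    out = list(prompt)
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # token never seen in training; nothing to predict
        tokens, counts = zip(*candidates.items())
        out.append(random.choices(tokens, weights=counts)[0])
    return out

print(continue_sequence([1, 2]))  # e.g. [1, 2, 3, 1, 2, 4, ...]
```

The decoded result is then turned back into a waveform; the autoregressive loop itself is the part AudioLM shares with text language models.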
The European Union-U.S. Data Privacy Framework
White House Fact Sheet: “Today, President Biden signed an Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities (E.O.) directing the steps that the United States will take to implement the U.S. commitments under the European Union-U.S. Data Privacy Framework (EU-U.S. DPF) announced by President Biden and European Commission President von der Leyen in March of 2022.
Transatlantic data flows are critical to enabling the $7.1 trillion EU-U.S. economic relationship. The EU-U.S. DPF will restore an important legal basis for transatlantic data flows by addressing concerns that the Court of Justice of the European Union raised in striking down the prior EU-U.S. Privacy Shield framework as a valid data transfer mechanism under EU law.
The Executive Order bolsters an already rigorous array of privacy and civil liberties safeguards for U.S. signals intelligence activities. It also creates an independent and binding mechanism enabling individuals in qualifying states and regional economic integration organizations, as designated under the E.O., to seek redress if they believe their personal data was collected through U.S. signals intelligence in a manner that violated applicable U.S. law.
U.S. and EU companies large and small across all sectors of the economy rely upon cross-border data flows to participate in the digital economy and expand economic opportunities. The EU-U.S. DPF represents the culmination of a joint effort by the United States and the European Commission to restore trust and stability to transatlantic data flows and reflects the strength of the enduring EU-U.S. relationship based on our shared values…(More)”.
Can critical policy studies outsmart AI? Research agenda on artificial intelligence technologies and public policy
Paper by Regine Paul: “The insertion of artificial intelligence technologies (AITs) and data-driven automation in public policymaking should be a metaphorical wake-up call for critical policy analysts. Both its wide representation as techno-solutionist remedy in otherwise slow, inefficient, and biased public decision-making and its regulation as a matter of rational risk analysis are conceptually flawed and democratically problematic. To ‘outsmart’ AI, this article stimulates the articulation of a critical research agenda on AITs and public policy, outlining three interconnected lines of inquiry for future research: (1) interpretivist disclosure of the norms and values that shape perceptions and uses of AITs in public policy, (2) exploration of AITs in public policy as a contingent practice of complex human-machine interactions, and (3) emancipatory critique of how ‘smart’ governance projects and AIT regulation interact with (global) inequalities and power relations…(More)”.
Governing the Environment-Related Data Space
Stefaan G. Verhulst, Anthony Zacharzewski and Christian Hudson at Data & Policy: “Today, The GovLab and The Democratic Society published their report, “Governing the Environment-Related Data Space”, written by Jörn Fritzenkötter, Laura Hohoff, Paola Pierri, Stefaan G. Verhulst, Andrew Young, and Anthony Zacharzewski. The report captures the findings of their joint research centered on the responsible and effective reuse of environment-related data to achieve greater social and environmental impact.

Environment-related data (ERD) encompasses numerous kinds of data across a wide range of sectors. It can best be defined as data related to any element of the Driver-Pressure-State-Impact-Response (DPSIR) Framework. If leveraged effectively, this wealth of data could help society establish a sustainable economy, take action against climate change, and support environmental justice — as recognized recently by French President Emmanuel Macron and UN Secretary General’s Special Envoy for Climate Ambition and Solutions Michael R. Bloomberg when establishing the Climate Data Steering Committee.
While several actors are working to improve access to, as well as promote the (re)use of, ERD, two key challenges hamper progress on this front: data asymmetries and data enclosures. Data asymmetries arise because ever-increasing amounts of ERD are scattered across diverse actors, with larger and more powerful stakeholders often enjoying disproportionate access. These asymmetries create problems of accessibility and findability (data enclosures), which limit sharing and collaboration and stunt the ability to use data and maximize its potential to address public ills.
The risks and costs of data enclosure and data asymmetries are high. Information bottlenecks cause resources to be misallocated, slow scientific progress, and limit our understanding of the environment.
A fit-for-purpose governance framework could offer a solution to these barriers by creating space for more systematic, sustainable, and responsible data sharing and collaboration. Better data sharing can in turn ease information flows, mitigate asymmetries, and minimize data enclosures.
And there are some clear criteria for an effective governance framework…(More)”
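As a hypothetical illustration of how such a framework could ease the findability problems described above, the DPSIR definition can double as a shared metadata vocabulary across data stewards. The sketch below is not from the report; the dataset names and stewards are invented.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch: the DPSIR framework as a shared tagging vocabulary
# so environment-related datasets from different stewards become findable.

class DPSIR(Enum):
    DRIVER = "driver"
    PRESSURE = "pressure"
    STATE = "state"
    IMPACT = "impact"
    RESPONSE = "response"

@dataclass
class Dataset:
    name: str
    steward: str
    element: DPSIR  # which DPSIR element the data describes

catalog = [
    Dataset("road_freight_volumes", "transport ministry", DPSIR.DRIVER),
    Dataset("no2_emissions", "environment agency", DPSIR.PRESSURE),
    Dataset("urban_air_quality", "city sensor network", DPSIR.STATE),
    Dataset("asthma_admissions", "health service", DPSIR.IMPACT),
    Dataset("low_emission_zones", "city council", DPSIR.RESPONSE),
]

# With a shared tag, cross-steward discovery becomes a one-line query.
pressures = [d.name for d in catalog if d.element is DPSIR.PRESSURE]
print(pressures)
```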
AI & Cities: Risks, Applications and Governance
Report by UN-Habitat: “Artificial intelligence is manifesting at an unprecedented rate in urban centers, often with significant risks and little oversight. Using AI technologies without the appropriate governance mechanisms and without adequate consideration of how they affect people’s human rights can have negative, even catastrophic, effects.
This report is part of UN-Habitat’s strategy for guiding local authorities in realizing a people-centered digital transformation process in their cities and settlements…(More)”.
Call it data liberation day: Patients can now access all their health records digitally
Article by Casey Ross: “The American Revolution had July 4. The Allies had D-Day. And now U.S. patients, held down for decades by information hoarders, can rally around a new turning point, October 6, 2022 — the day they got their health data back.
Under federal rules taking effect Thursday, health care organizations must give patients unfettered access to their full health records in digital format. No more long delays. No more fax machines. No more exorbitant charges for printed pages.
Just the data, please — now…The new federal rules — passed under the 21st Century Cures Act — are designed to shift the balance of power to ensure that patients can not only get their data, but also choose who else to share it with. It is the jumping-off point for a patient-mediated data economy that lets consumers in health care benefit from the fluidity they’ve had for decades in banking: they can move their information easily and electronically, and link their accounts to new services and software applications.
“To think that we actually have greater transparency about our personal finances than about our own health is quite an indictment,” said Isaac Kohane, a professor of biomedical informatics at Harvard Medical School. “This will go some distance toward reversing that.”
Even with the rules now in place, health data experts said change will not be fast or easy. Providers and other data holders — who have dug in their heels at every step — can still withhold information under certain exceptions. And many questions remain about protocols for sharing digital records, how to verify access rights, and even what it means to give patients all their data. Does that extend to every measurement in the ICU? Every log entry? Every email? And how will it all get standardized?…(More)”
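For context on the plumbing: the API provisions that accompany these rules center on the HL7 FHIR standard. The sketch below shows what patient-authorized retrieval could look like; the endpoint, token, and patient ID are placeholders, and a real application would first obtain the token through the provider's OAuth (SMART on FHIR) flow.

```python
import requests

# Hedged sketch of patient-authorized record retrieval over a FHIR API.
# The base URL and token are placeholders, not a real provider endpoint.
FHIR_BASE = "https://ehr.example.com/fhir"  # hypothetical endpoint
TOKEN = "patient-authorized-access-token"   # hypothetical credential

def fetch_lab_results(patient_id: str) -> dict:
    """Fetch a patient's lab results (FHIR Observation resources)."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "laboratory"},
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # a FHIR Bundle of Observation entries

bundle = fetch_lab_results("example-patient-id")
for entry in bundle.get("entry", []):
    print(entry["resource"].get("code", {}).get("text"))
```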
Blueprint for an AI Bill of Rights
The White House: “…To advance President Biden’s vision, the White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by From Principles to Practice—a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.
- Safe and Effective Systems
- Data Privacy
- Notice and Explanation
- Algorithmic Discrimination Protections
- Human Alternatives, Consideration, and Fallback…(More)”.
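The Blueprint states principles rather than metrics. As one hedged illustration of how the Algorithmic Discrimination Protections principle is often operationalized in audits (the Blueprint does not prescribe this or any specific test), the sketch below applies the common four-fifths selection-rate rule of thumb to invented screening outcomes.

```python
# Minimal sketch of a common disparity check (the "four-fifths rule"):
# flag any group whose selection rate falls below 80% of the highest
# group's rate. All outcome data here is invented for illustration.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of candidates the automated screen advanced."""
    return sum(outcomes) / len(outcomes)

# Hypothetical hiring-screen outcomes (1 = advanced, 0 = rejected) by group.
outcomes_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

rates = {g: selection_rate(o) for g, o in outcomes_by_group.items()}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```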
The EU wants to put companies on the hook for harmful AI
Article by Melissa Heikkilä: “The EU is creating new rules to make it easier to sue AI companies for harm. A bill unveiled this week, which is likely to become law in a couple of years, is part of Europe’s push to prevent AI developers from releasing dangerous systems. And while tech companies complain it could have a chilling effect on innovation, consumer activists say it doesn’t go far enough.
Powerful AI technologies are increasingly shaping our lives, relationships, and societies, and their harms are well documented. Social media algorithms boost misinformation, facial recognition systems are often highly discriminatory, and predictive AI systems that are used to approve or reject loans can be less accurate for minorities.
The new bill, called the AI Liability Directive, will add teeth to the EU’s AI Act, which is set to become EU law around the same time. The AI Act would require extra checks for “high risk” uses of AI that have the most potential to harm people, including systems for policing, recruitment, or health care.
The new liability bill would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.
For example, job seekers who can prove that an AI system for screening résumés discriminated against them can ask a court to force the AI company to grant them access to information about the system so they can identify those responsible and find out what went wrong. Armed with this information, they can sue.
The proposal still needs to snake its way through the EU’s legislative process, which will take a couple of years at least. It will be amended by members of the European Parliament and EU governments and will likely face intense lobbying from tech companies, which claim that such rules could have a “chilling” effect on innovation…(More)”.