The Inevitability of AI Law & Policy: Preparing Government for the Era of Autonomous Machines


Public Knowledge: “Today, we’re happy to announce our newest white paper, “The Inevitability of AI Law & Policy: Preparing Government for the Era of Autonomous Machines,” by Public Knowledge General Counsel Ryan Clough. The paper argues that the rapid and pervasive rise of artificial intelligence risks exploiting the most marginalized and vulnerable in our society. To mitigate these harms, Clough advocates for a new federal authority to help the U.S. government implement fair and equitable AI. Such an authority should provide the rest of the government with the expertise and experience needed to achieve five goals crucial to building ethical AI systems:

  • Boosting sector-specific regulators and confronting overarching policy challenges raised by AI;
  • Protecting public values in government procurement and implementation of AI;
  • Attracting AI practitioners to civil service, and building durable and centralized AI expertise within government;
  • Identifying major gaps in the laws and regulatory frameworks that govern AI; and
  • Coordinating strategies and priorities for international AI governance.

“Any individual can be misjudged and mistreated by artificial intelligence,” Clough explains, “but the record to date indicates that it is significantly more likely to happen to the less powerful, who also have less recourse to do anything about it.” The paper argues that a new federal authority is the best way to meet the profound and novel challenges AI poses for us all….(More)”.

Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security


Paper by Robert Chesney and Danielle Keats Citron: “Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection.

Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors. While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well.

Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions….(More)”.

Babbage among the insurers: big 19th-century data and the public interest


Paper by D. C. S. Wilson in History of the Human Sciences: “This article examines life assurance and the politics of ‘big data’ in mid-19th-century Britain. The datasets generated by life assurance companies were vast archives of information about human longevity. Actuaries distilled these archives into mortality tables – immensely valuable tools for predicting mortality and so pricing risk. The status of the mortality table was ambiguous, being both a public and a private object: often computed from company records, they could also be extrapolated from quasi-public projects such as the Census or clerical records. Life assurance more generally straddled the line between private enterprise and collective endeavour, though its advocates stressed the public interest in its success. Reforming actuaries such as Thomas Rowe Edmonds wanted the data on which mortality tables were based to be made publicly available, but faced resistance. Such resistance undermined insurers’ claims to be scientific in spirit and hindered Edmonds’s personal quest for a law of mortality. Edmonds pushed instead for an open actuarial science alongside fellow-travellers at the Statistical Society of London, which was populated by statisticians such as William Farr (whose subsequent work, it is argued, was influenced by Edmonds) as well as by radical mathematicians such as Charles Babbage. The article explores Babbage’s little-known foray into the world of insurance, both as a budding actuary and as a fierce critic of the industry. These debates over the construction, ownership, and accessibility of insurance datasets show that concern about the politics of big data did not begin in the 21st century….(More)”.

The Administrative State


Interview with Paul Tucker by Benedict King: “In your book, you place what you call the ‘administrative state’ at the heart of the political dilemmas facing the liberal political order. Could you tell us what you mean by ‘the administrative state’ and the dilemmas it poses for democratic societies?

This is about the legitimacy of the structure of government that has developed in Western democracies. The ‘administrative state’ is simply the machinery for implementing policy and law. What matters is that much of it—including many regulators and central banks—is no longer under direct ministerial control. They are not part of a ministerial department. They are not answerable day-to-day, minute-by-minute to the prime minister or, in other countries, the president.

When I was young, in Europe at least, these arm’s-length agencies were a small part of government, but now they are a very big part. Over here, that transformation has come about over the past thirty-odd years, since the 1990s, whereas in America it goes back to the 1880s, and especially the 1930s New Deal reforms of President Roosevelt.

“The ‘administrative state’ is simply the machinery for implementing policy and law. ”

In the United Kingdom we used to call these agencies ‘quangos’, but that acronym trivialises the issue. Today, many—in the US, probably most—of the laws to which businesses and even individuals are subject are written and enforced by regulatory agencies, part of the administrative state, rather than passed by Parliament or Congress and enforced by the elected executive. That would surprise John Locke, Montesquieu and James Madison, who developed the principles associated with the separation of powers and constitutionalism.

To some extent, these changes were driven by a perceived need to turn to ‘expertise’. But the effect has been to shift more of our government away from our elected representatives and to unelected technocrats. An underlying premise at the heart of my book (although not something that I can prove) is that, since any and every part of government eventually fails—and may fail very badly, as we saw with the collapse of the financial system in 2008—there is a risk that people will get fed up with this shift to governance by unelected experts. The people will get fed up with their lives being affected so much by people who they didn’t have a chance to vote for and can’t vote out. If that happened, it would be dangerous, because the genius of representative democracy is that it separates how we as citizens feel about the system of government from how we feel about the government of the day. So how can we avoid that without losing the benefits of delegation? That is what the debate about the administrative state is ultimately about: its dryness belies its importance to how we govern ourselves.

“The genius of representative democracy is that it separates how we as citizens feel about the system of government from how we feel about the government of the day”

It matters, therefore, that the array of agencies in the administrative state varies enormously in the degree to which they are formally insulated from politics. My book Unelected Power is about ‘independent agencies’, by which I mean an agency that is insulated day-to-day from both the legislative branch (Parliament or Congress) and also from the executive branch of government (the prime minister or president). Central banks are the most important example of such independent agencies in modern times, wielding a wide range of monetary and regulatory powers….(More + selection of five books to read)”.

Political Lawyering for the 21st Century


Paper by Deborah N. Archer: “Legal education purports to prepare the next generation of lawyers capable of tackling the urgent and complex social justice challenges of our time. But law schools are failing in that public promise. Clinical education offers the best opportunity to overcome those failings by teaching the skills lawyers need to tackle systemic and interlocking legal and social problems. But too often even clinical education falls short: it adheres to conventional pedagogical methodologies that are overly narrow and, in the end, limit students’ abilities to manage today’s complex racial and social justice issues. This article contends that clinical education needs to embrace and reimagine political lawyering for the 21st century in order to prepare aspiring lawyers to tackle both new and chronic issues of injustice through a broad array of advocacy strategies….(More)”.

When the Rule of Law Is Not Working


A conversation with Karl Sigmund at Edge: “…Now, I’m getting back to evolutionary game theory, the theory of evolution of cooperation and the social contract, and how the social contract can be subverted by corruption. That’s what interests me most currently. Of course, that is not a new story. I believe it explains a lot of what I see happening in my field and in related fields. The ideas that survive are the ideas that are fruitful in the sense of quickly producing a lot of publications, and that’s not necessarily correlated with these ideas being important to advancing science.

Corruption is a wicked problem, wicked in the technical sense of sociology, and it’s not something that will go away. You can reduce it, but as soon as you stop your efforts, it comes back again. Of course, there are many sides to corruption, but everybody seems now to agree that it is a very important problem. In fact, there was a Gallup poll recently in which people were asked what the number one problem in today’s world is. You would think it would be climate change or overpopulation, but it turned out the majority said “corruption.” So, it’s a problem that is affecting us deeply.

There are so many different types of corruption, but the official definition is “a misuse of public trust for private means.” And this need not be by state officials; it could be also by CEOs, or by managers of non-governmental organizations, or by a soccer referee for that matter. It is always the misuse of public trust for private means, which of course takes many different forms; for instance, you have something called pork barreling, which is a wonderful expression in the United States, or embezzlement of funds, and so on.

I am mostly interested in the effect of bribery upon the judiciary system. If the trust in contracts breaks down, then the economy breaks down, because trust is at the root of the economy. There are staggering statistics which illustrate that the economic welfare of a state is closely related to the corruption perception index. Every year there are statistics about corruption published by organizations such as Transparency International or other such non-governmental organizations. It is truly astonishing how closely this gradient between the different countries on the corruption level aligns with the gradient in welfare, in household income and things like this.

The paralyzing effect of this type of corruption upon the economy is something that is extremely interesting. Lots of economists are now turning their interest to that, which is new. In the 1970s, there was a Nobel Prize-winning economist, Gunnar Myrdal, who said that corruption is practically taboo as a research topic among economists. This has changed in the decades since. It has become a very interesting topic for law students, for students of economics and sociology, and for historians, of course, because corruption has always been with us. This is now a booming field, and I would like to approach this with evolutionary game theory.

Evolutionary game theory has a long tradition, and I have witnessed its development practically from the beginning. Some of the most important pioneers were Robert Axelrod and John Maynard Smith. In particular, Axelrod in the late ‘70s wrote a truly seminal book called The Evolution of Cooperation, built around the iterated prisoner’s dilemma. He showed that there is a way out of the social dilemma, and that it is based on reciprocity. This surprised economists, particularly game theoreticians. He showed that by placing social dilemmas in the context of a population where people learn from each other, imitating whatever type of behavior is currently doing best, cooperative strategies based on reciprocation, like tit for tat, can evolve….(More)”.
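To make that mechanism concrete, here is a minimal, illustrative sketch (not taken from the conversation or from Axelrod’s tournaments) of an iterated prisoner’s dilemma in which agents imitate whichever strategy currently earns the most. The payoff values are the standard ones; the population mix, number of rounds, generations, and imitation rate are assumptions made only for this example.

```python
# Illustrative sketch: a population playing the iterated prisoner's dilemma,
# with simple social learning (imitate the strategy that currently earns the most).
# Payoffs are the standard values; all other parameters are assumptions.
import random

PAYOFF = {  # (my move, opponent's move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

STRATEGIES = {"TFT": tit_for_tat, "ALLD": always_defect}

def play(s1, s2, rounds=30):
    # Total payoffs for two strategies over one iterated game.
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = STRATEGIES[s1](h2), STRATEGIES[s2](h1)
        p1 += PAYOFF[(m1, m2)]
        p2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return p1, p2

def social_learning(population, generations=20, switch_rate=0.3):
    # Each generation everyone plays everyone else; agents then switch, with some
    # probability, to whichever strategy earned the highest average payoff.
    for _ in range(generations):
        totals = {s: 0.0 for s in STRATEGIES}
        for i, s1 in enumerate(population):
            for s2 in population[i + 1:]:
                p1, p2 = play(s1, s2)
                totals[s1] += p1
                totals[s2] += p2
        counts = {s: population.count(s) for s in STRATEGIES}
        avg = {s: totals[s] / counts[s] for s in STRATEGIES if counts[s]}
        best = max(avg, key=avg.get)
        population = [best if random.random() < switch_rate else s for s in population]
    return population

if __name__ == "__main__":
    start = ["TFT"] * 12 + ["ALLD"] * 28          # reciprocators start as a minority
    end = social_learning(start)
    print({s: end.count(s) for s in STRATEGIES})  # typically ends dominated by TFT
```

Under these assumptions, reciprocators out-earn defectors once games are repeated long enough, so imitation spreads tit for tat through the population; that is the population-level reading of Axelrod’s result the conversation refers to.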

Governing artificial intelligence: ethical, legal, and technical opportunities and challenges


Introduction to the Special Issue of the Philosophical Transactions of the Royal Society by Sandra Wachter, Brent Mittelstadt, Luciano Floridi and Corinne Cath: “Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like urban infrastructure, law enforcement, banking, healthcare and humanitarian aid, to the mundane, like dating. AI, including embodied AI in robotics and techniques like machine learning, can improve economic and social welfare and the exercise of human rights. Owing to the proliferation of AI in high-risk areas, the pressure is mounting to design and govern AI to be accountable, fair and transparent. How can this be achieved and through which frameworks? This is one of the central questions addressed in this special issue, in which eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems. It also gives a brief overview of recent developments in AI governance, how much of the agenda for defining AI regulation, ethical frameworks and technical approaches is set, as well as providing some concrete suggestions to further the debate on AI governance…(More)”.

How to Identify Almost Anyone in a Consumer Gene Database


Paul Raeburn at Scientific American: “Researchers are becoming so adept at mining information from genealogical, medical and police genetic databases that it is becoming difficult to protect anyone’s privacy—even those who have never submitted their DNA for analysis.

In one of two separate studies published October 11, researchers report that by testing the 1.28 million samples contained in a consumer gene database, they could match 60 percent of the DNA of the 140 million Americans of European descent to a third cousin or closer relative. That figure, they say in the study published in Science, will soon rise to nearly 100 percent as the number of samples rises in such consumer databases as AncestryDNA and 23andMe.

In the second study, in the journal Cell, a different research group shows that police databases—once thought to be made up of meaningless DNA useful only for matching suspects with crime scene samples—can be cross-linked with genetic databases to connect individuals to their genetic information. “Both of these papers show you how deeply you can reach into a family and a population,” says Erin Murphy, a professor of law at New York University School of Law. Consumers who decide to share DNA with a consumer database are providing information on their parents, children, third cousins they don’t know about—and even a trace that could point to children who don’t exist yet, she says….(More)”.
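A rough back-of-the-envelope sketch helps show why a database covering under one percent of a population can still reach 60 percent of it. This is not the population-genetic model used in the Science paper: if the database holds a fraction f of the relevant population and a person has roughly R relatives of third-cousin-or-closer degree, each independently equally likely to be in the database, the chance of at least one match is 1 - (1 - f)^R. The value of R below is an assumed round number chosen only for illustration.

```python
# Back-of-the-envelope sketch (assumed numbers, not the paper's model):
# probability that a person has at least one third-cousin-or-closer relative
# in a database covering a fraction f of their population.
f = 1.28e6 / 140e6   # database size / US population of European descent (figures from the article)
R = 100              # assumed number of third-cousin-or-closer relatives per person
p_match = 1 - (1 - f) ** R
print(f"f = {f:.4f}, P(at least one relative in database) = {p_match:.2f}")
```

With these assumed numbers the sketch lands near the reported 60 percent, and it makes clear why coverage climbs toward 100 percent as such databases grow.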

A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI


Paper by Sandra Wachter and Brent Mittelstadt: “Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Concerns about algorithmic accountability are often actually concerns about the way in which these technologies draw privacy invasive and non-verifiable inferences about us that we cannot predict, understand, or refute.

Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The broad concept of personal data in Europe could be interpreted to include inferences, predictions, and assumptions that refer to or impact on an individual. If inferences are seen as personal data, individuals are granted numerous rights under data protection law. However, the legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice.

As we show in this paper, individuals are granted little control and oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively ‘economy class’ personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Art 13-15), rectify (Art 16), delete (Art 17), object to (Art 21), or port (Art 20) personal data are significantly curtailed when it comes to inferences, often requiring a greater balance with controller’s interests (e.g. trade secrets, intellectual property) than would otherwise be the case. Similarly, the GDPR provides insufficient protection against sensitive inferences (Art 9) or remedies to challenge inferences or important decisions based on them (Art 22(3))….

In this paper we argue that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’, meaning inferences that are privacy invasive or reputation damaging and have low verifiability in the sense of being predictive or opinion-based. In cases where algorithms draw ‘high risk inferences’ about individuals, this right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data is a relevant basis to draw inferences; (2) why these inferences are relevant for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged. A right to reasonable inferences must, however, be reconciled with EU jurisprudence and counterbalanced with IP and trade secrets law as well as freedom of expression and Article 16 of the EU Charter of Fundamental Rights: the freedom to conduct a business….(More)”.

Human Rights in the Big Data World


Paper by Francis Kuriakose and Deepa Iyer: “An ethical approach to human rights conceives and evaluates law through the underlying value concerns. This paper examines human rights after the introduction of big data using an ethical approach to rights. First, the central value concerns such as equity, equality, sustainability and security are derived from the history of the digital technological revolution. Then, the properties and characteristics of big data are analyzed to understand emerging value concerns such as accountability, transparency, traceability, explainability and disprovability.

Using these value points, this paper argues that big data calls for two types of evaluations regarding human rights. The first is the reassessment of existing human rights in the digital sphere, predominantly through the right to equality and the right to work. The second is the conceptualization of new digital rights such as the right to privacy and the right against propensity-based discrimination. The paper concludes that as we increasingly share the world with intelligent systems, these new values expand and modify the existing human rights paradigm….(More)”.