Trustworthy Privacy Indicators: Grades, Labels, Certifications and Dashboards


Paper by Joel R. Reidenberg et al.: “Despite numerous groups’ efforts to score, grade, label, and rate the privacy of websites, apps, and network-connected devices, these attempts at privacy indicators have, thus far, not been widely adopted. Privacy policies, however, remain long, complex, and impractical for consumers. Communicating synthesized privacy content in some short-hand form is now crucial to empower internet users and provide them more meaningful notice, as well as to nudge consumers and data processors toward more meaningful privacy. Indeed, on the basis of these needs, the National Institute of Standards and Technology and the Federal Trade Commission in the United States, as well as lawmakers and policymakers in the European Union, have advocated for the development of privacy indicator systems.

Efforts to develop privacy grades, scores, labels, icons, certifications, seals, and dashboards have wrestled with various deficiencies and obstacles to wide-scale deployment as meaningful and trustworthy privacy indicators. This paper seeks to identify and explain the deficiencies and obstacles that have hampered past and current attempts. With these lessons, the article then offers criteria that will need to be established in law and policy for trustworthy indicators to be successfully deployed and adopted through technological tools. The lack of standardization prevents user-recognizability and dependability in the online marketplace, diminishes the ability to create automated tools for privacy, and reduces incentives for consumers and industry to invest in privacy indicators. Flawed methods in the selection and weighting of privacy evaluation criteria, and difficulties interpreting language that is often ambiguous and vague, jeopardize success and reliability when baked into an indicator of privacy protectiveness or invasiveness. Likewise, indicators fall short when the organizations rating or certifying privacy practices are not objective, trustworthy, and sustainable.

Nonetheless, trustworthy privacy rating systems that are meaningful, accurate, and adoptable can be developed to assure effective and enduring empowerment of consumers. This paper proposes a framework, using examples from prior and current attempts to create privacy indicator systems, in order to provide a valuable resource for present-day, real-world policymaking….(More)”.

Mapping the challenges and opportunities of artificial intelligence for the conduct of diplomacy


DiploFoundation: “This report provides an overview of the evolution of diplomacy in the context of artificial intelligence (AI). AI has emerged as a very hot topic on the international agenda, impacting numerous aspects of our political, social, and economic lives. It is clear that AI will remain a permanent feature of international debates and will continue to shape societies and international relations.

It is impossible to ignore the challenges – and opportunities – AI is bringing to the diplomatic realm. Its relevance as a topic for diplomats and others working in international relations will only increase….(More)”.

A Behavioral Economics Approach to Digitalisation


Paper by Dirk Beerbaum and Julia M. Puaschunder: “A growing body of academic research in the fields of behavioural economics, political science and psychology demonstrates how an invisible hand can nudge people’s decisions towards a preferred option. Contrary to the assumptions of neoclassical economics, supporters of nudging argue that people have problems coping with a complex world because of their limited knowledge and restricted rationality. Technological improvement in the age of information has increased the possibilities to control innocent social media users or penalise private investors and reap the benefits of their existence through hidden persuasion and discrimination. Nudging enables nudgers to plunder the simple, uneducated and uninformed citizen and investor, who is neither aware of the nudging strategies nor able to oversee the tactics used by the nudgers (Puaschunder 2017a, b; 2018a, b).

The nudgers are thereby legally protected by the democratically assigned positions they hold. The law of motion of the nudging societies holds an unequal concentration of power among those who have access to compiled data and coding rules, relevant for political power and for influencing the investor’s decision usefulness (Puaschunder 2017a, b; 2018a, b). This paper takes as a case the “transparency technology XBRL (eXtensible Business Reporting Language)” (Sunstein 2013, 20), which should make data more accessible as well as usable for private investors. It is part of the choice architecture on regulation by governments (Sunstein 2013). However, XBRL is bound to a taxonomy (Piechocki and Felden 2007).

Considering theoretical literature and field research, a representation issue (Beerbaum, Piechocki and Weber 2017) exists for principles-based accounting taxonomies, which intelligent machines applying Artificial Intelligence (AI) (Mwilu, Prat and Comyn-Wattiau 2015) nudge to facilitate decision usefulness. This paper conceptualizes the ethical questions arising from taxonomy engineering based on machine learning systems: should the objective of the coding rule be to support or to influence human decision making or rational artificiality? This paper therefore advocates for a democratisation of information, education and transparency about nudges and coding rules (Puaschunder 2017a, b; 2018a, b)…(More)”.

The Inevitability of AI Law & Policy: Preparing Government for the Era of Autonomous Machines


Public Knowledge: “Today, we’re happy to announce our newest white paper, “The Inevitability of AI Law & Policy: Preparing Government for the Era of Autonomous Machines,” by Public Knowledge General Counsel Ryan Clough. The paper argues that the rapid and pervasive rise of artificial intelligence risks exploiting the most marginalized and vulnerable in our society. To mitigate these harms, Clough advocates for a new federal authority to help the U.S. government implement fair and equitable AI. Such an authority should provide the rest of the government with the expertise and experience needed to achieve five goals crucial to building ethical AI systems:

  • Boosting sector-specific regulators and confronting overarching policy challenges raised by AI;
  • Protecting public values in government procurement and implementation of AI;
  • Attracting AI practitioners to civil service, and building durable and centralized AI expertise within government;
  • Identifying major gaps in the laws and regulatory frameworks that govern AI; and
  • Coordinating strategies and priorities for international AI governance.

“Any individual can be misjudged and mistreated by artificial intelligence,” Clough explains, “but the record to date indicates that it is significantly more likely to happen to the less powerful, who also have less recourse to do anything about it.” The paper argues that a new federal authority is the best way to meet the profound and novel challenges AI poses for us all….(More)”.

Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security


Paper by Robert Chesney and Danielle Keats Citron: “Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection.

Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors. While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well.

Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions….(More)”.

Babbage among the insurers: big 19th-century data and the public interest.


Wilson, D. C. S. at the History of the Human Sciences: “This article examines life assurance and the politics of ‘big data’ in mid-19th-century Britain. The datasets generated by life assurance companies were vast archives of information about human longevity. Actuaries distilled these archives into mortality tables – immensely valuable tools for predicting mortality and so pricing risk. The status of the mortality table was ambiguous, being both a public and a private object: often computed from company records, they could also be extrapolated from quasi-public projects such as the Census or clerical records. Life assurance more generally straddled the line between private enterprise and collective endeavour, though its advocates stressed the public interest in its success. Reforming actuaries such as Thomas Rowe Edmonds wanted the data on which mortality tables were based to be made publicly available, but faced resistance. Such resistance undermined insurers’ claims to be scientific in spirit and hindered Edmonds’s personal quest for a law of mortality. Edmonds pushed instead for an open actuarial science alongside fellow-travellers at the Statistical Society of London, which was populated by statisticians such as William Farr (whose subsequent work, it is argued, was influenced by Edmonds) as well as by radical mathematicians such as Charles Babbage. The article explores Babbage’s little-known foray into the world of insurance, both as a budding actuary and as a fierce critic of the industry. These debates over the construction, ownership, and accessibility of insurance datasets show that concern about the politics of big data did not begin in the 21st century….(More)”.
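To make concrete what these mortality tables were for: the sketch below prices a simple life-assurance benefit from a table of one-year death probabilities, the basic actuarial calculation the article describes. The rates and interest rate are invented for illustration and are not drawn from Edmonds, Farr, or any historical table.

```python
# Hedged sketch: net single premium for a benefit paid at the end of the
# year of death, computed from a (hypothetical) mortality table.

def net_single_premium(qx, interest, sum_assured=100.0):
    """Expected present value of the death benefit.

    qx: one-year death probabilities q_x, q_{x+1}, ... for the insured life.
    """
    v = 1.0 / (1.0 + interest)  # annual discount factor
    surviving = 1.0             # probability of still being alive at year t
    premium = 0.0
    for t, q in enumerate(qx):
        # Die in year t+1: survive the first t years, then die within a year.
        premium += sum_assured * (v ** (t + 1)) * surviving * q
        surviving *= 1.0 - q
    return premium

# Illustrative 3-year term with rising death probabilities and 4% interest.
print(round(net_single_premium([0.01, 0.02, 0.03], 0.04), 2))  # → 5.38
```

The richer a company's archive of recorded lives, the better its estimates of `qx`, which is why the article treats these datasets as both commercially valuable and a matter of public interest.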

The Administrative State


Interview with Paul Tucker by Benedict King: “In your book, you place what you call the “administrative state” at the heart of the political dilemmas facing the liberal political order. Could you tell us what you mean by ‘the administrative state’ and the dilemmas it poses for democratic societies?

This is about the legitimacy of the structure of government that has developed in Western democracies. The ‘administrative state’ is simply the machinery for implementing policy and law. What matters is that much of it—including many regulators and central banks—is no longer under direct ministerial control. They are not part of a ministerial department. They are not answerable day-to-day, minute-by-minute to the prime minister or, in other countries, the president.

When I was young, in Europe at least, these arm’s-length agencies were a small part of government, but now they are a very big part. Over here, that transformation has come about over the past thirty-odd years, since the 1990s, whereas in America it goes back to the 1880s, and especially the 1930s New Deal reforms of President Roosevelt.

“The ‘administrative state’ is simply the machinery for implementing policy and law. ”

In the United Kingdom we used to call these agencies ‘quangos’, but that acronym trivialises the issue. Today, many—in the US, probably most—of the laws to which businesses and even individuals are subject are written and enforced by regulatory agencies, part of the administrative state, rather than passed by Parliament or Congress and enforced by the elected executive. That would surprise John Locke, Montesquieu and James Madison, who developed the principles associated with the separation of powers and constitutionalism.

To some extent, these changes were driven by a perceived need to turn to ‘expertise’. But the effect has been to shift more of our government away from our elected representatives and to unelected technocrats. An underlying premise at the heart of my book (although not something that I can prove) is that, since any and every part of government eventually fails—and may fail very badly, as we saw with the collapse of the financial system in 2008—there is a risk that people will get fed up with this shift to governance by unelected experts. The people will get fed up with their lives being affected so much by people who they didn’t have a chance to vote for and can’t vote out. If that happened, it would be dangerous as the genius of representative democracy is that it separates how we as citizens feel about the system of government from how we feel about the government of the day. So how can we avoid that without losing the benefits of delegation? That is what the debate about the administrative state is ultimately about: its dryness belies its importance to how we govern ourselves.

“The genius of representative democracy is that it separates how we as citizens feel about the system of government from how we feel about the government of the day”

It matters, therefore, that the array of agencies in the administrative state varies enormously in the degree to which they are formally insulated from politics. My book Unelected Power is about ‘independent agencies’, by which I mean an agency that is insulated day-to-day from both the legislative branch (Parliament or Congress) and also from the executive branch of government (the prime minister or president). Central banks are the most important example of such independent agencies in modern times, wielding a wide range of monetary and regulatory powers….(More + selection of five books to read)”.

Political Lawyering for the 21st Century


Paper by Deborah N. Archer: “Legal education purports to prepare the next generation of lawyers capable of tackling the urgent and complex social justice challenges of our time. But law schools are failing in that public promise. Clinical education offers the best opportunity to overcome those failings by teaching the skills lawyers need to tackle systemic and interlocking legal and social problems. But too often even clinical education falls short: it adheres to conventional pedagogical methodologies that are overly narrow and, in the end, limit students’ abilities to manage today’s complex racial and social justice issues. This article contends that clinical education needs to embrace and reimagine political lawyering for the 21st century in order to prepare aspiring lawyers to tackle both new and chronic issues of injustice through a broad array of advocacy strategies….(More)”.

When the Rule of Law Is Not Working


A conversation with Karl Sigmund at Edge: “…Now, I’m getting back to evolutionary game theory, the theory of evolution of cooperation and the social contract, and how the social contract can be subverted by corruption. That’s what interests me most currently. Of course, that is not a new story. I believe it explains a lot of what I see happening in my field and in related fields. The ideas that survive are the ideas that are fruitful in the sense of quickly producing a lot of publications, and that’s not necessarily correlated with these ideas being important to advancing science.

Corruption is a wicked problem, wicked in the technical sense of sociology, and it’s not something that will go away. You can reduce it, but as soon as you stop your efforts, it comes back again. Of course, there are many sides to corruption, but everybody seems now to agree that it is a very important problem. In fact, there was a Gallup poll recently in which people were asked what the number one problem in today’s world is. You would think it would be climate change or overpopulation, but it turned out the majority said “corruption.” So, it’s a problem that is affecting us deeply.

There are so many different types of corruption, but the official definition is “a misuse of public trust for private means.” And this need not be by state officials; it could be also by CEOs, or by managers of non-governmental organizations, or by a soccer referee for that matter. It is always the misuse of public trust for private means, which of course takes many different forms; for instance, you have something called pork barreling, which is a wonderful expression in the United States, or embezzlement of funds, and so on.

I am mostly interested in the effect of bribery upon the judiciary system. If the trust in contracts breaks down, then the economy breaks down, because trust is at the root of the economy. There are staggering statistics which illustrate that the economic welfare of a state is closely related to its corruption perception index. Every year, statistics about corruption are published by organizations such as Transparency International and other non-governmental organizations. It is truly astonishing how closely the gradient between countries in corruption levels aligns with the gradient in welfare, in household income, and the like.

The paralyzing effect of this type of corruption upon the economy is something that is extremely interesting. Lots of economists are now turning their interest to that, which is new. In the 1970s, there was a Nobel Prize-winning economist, Gunnar Myrdal, who said that corruption was practically taboo as a research topic among economists. That has changed considerably in the decades since. It has become a very interesting topic for students of law, economics, and sociology, and of course for historians, because corruption has always been with us. This is now a booming field, and I would like to approach it with evolutionary game theory.

Evolutionary game theory has a long tradition, and I have witnessed its development practically from the beginning. Some of the most important pioneers were Robert Axelrod and John Maynard Smith. A particular influence was Axelrod, who in the late ’70s wrote a truly seminal book called The Evolution of Cooperation, about the iterated prisoner’s dilemma. He showed that there is a way out of the social dilemma, which is based on reciprocity. This surprised economists, particularly game theoreticians. He showed that by viewing social dilemmas in the context of a population where people learn from each other, where social learning imitates whatever type of behavior is currently most successful, you can place them into a context where cooperative strategies based on reciprocation, like tit for tat, can evolve….(More)”.
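Axelrod's point about reciprocity can be illustrated with a minimal simulation of the iterated prisoner's dilemma. The payoff values below are the standard textbook ones, not taken from the interview, and the two strategies are the usual simple examples.

```python
# Minimal iterated prisoner's dilemma with standard textbook payoffs.
PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return the total payoff of each strategy over repeated rounds."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Mutual tit for tat sustains cooperation (3 points per round each), while
# against an unconditional defector it loses only the opening round.
print(play(tit_for_tat, tit_for_tat, 10))    # → (30, 30)
print(play(tit_for_tat, always_defect, 10))  # → (9, 14)
```

In a population where agents imitate the most successful behavior, it is this combination, near-parity against defectors plus a high score against fellow reciprocators, that lets cooperation based on reciprocity spread.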

Governing artificial intelligence: ethical, legal, and technical opportunities and challenges


Introduction to the Special Issue of the Philosophical Transactions of the Royal Society by Sandra Wachter, Brent Mittelstadt, Luciano Floridi and Corinne Cath: “Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like urban infrastructure, law enforcement, banking, healthcare and humanitarian aid, to the mundane, like dating. AI, including embodied AI in robotics and techniques like machine learning, can improve economic and social welfare and the exercise of human rights. Owing to the proliferation of AI in high-risk areas, the pressure is mounting to design and govern AI to be accountable, fair and transparent. How can this be achieved, and through which frameworks? This is one of the central questions addressed in this special issue, in which eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems. The issue also gives a brief overview of recent developments in AI governance and of how much of the agenda for defining AI regulation, ethical frameworks and technical approaches is already set, and provides some concrete suggestions to further the debate on AI governance…(More)”.