When the Rule of Law Is Not Working


A conversation with Karl Sigmund at Edge: “…Now, I’m getting back to evolutionary game theory, the theory of evolution of cooperation and the social contract, and how the social contract can be subverted by corruption. That’s what interests me most currently. Of course, that is not a new story. I believe it explains a lot of what I see happening in my field and in related fields. The ideas that survive are the ideas that are fruitful in the sense of quickly producing a lot of publications, and that’s not necessarily correlated with these ideas being important to advancing science.

Corruption is a wicked problem, wicked in the technical sense of sociology, and it’s not something that will go away. You can reduce it, but as soon as you stop your efforts, it comes back again. Of course, there are many sides to corruption, but everybody seems now to agree that it is a very important problem. In fact, there was a Gallup poll recently in which people were asked what the number one problem in today’s world is. You would think it would be climate change or overpopulation, but it turned out the majority said “corruption.” So, it’s a problem that is affecting us deeply.

There are so many different types of corruption, but the official definition is “a misuse of public trust for private means.” And this need not be by state officials; it could also be by CEOs, or by managers of non-governmental organizations, or by a soccer referee for that matter. It is always the misuse of public trust for private means, which of course takes many different forms; for instance, you have something called pork barreling, which is a wonderful expression in the United States, or embezzlement of funds, and so on.

I am mostly interested in the effect of bribery upon the judicial system. If the trust in contracts breaks down, then the economy breaks down, because trust is at the root of the economy. There are staggering statistics which illustrate that the economic welfare of a state is closely related to the corruption perception index. Every year, statistics about corruption are published by organizations such as Transparency International and other such non-governmental organizations. It is truly astonishing how closely the gradient between different countries on the corruption level aligns with the gradient in welfare, in household income, and things like this.

The paralyzing effect of this type of corruption upon the economy is something that is extremely interesting. Lots of economists are now turning their interest to that, which is new. In the 1970s, the Nobel Prize-winning economist Gunnar Myrdal said that corruption was practically taboo as a research topic among economists. This has changed considerably in the decades since. It has become a very interesting topic for law students, for students of economics, sociology, and historians, of course, because corruption has always been with us. This is now a booming field, and I would like to approach this with evolutionary game theory.

Evolutionary game theory has a long tradition, and I have witnessed its development practically from the beginning. Some of the most important pioneers were Robert Axelrod and John Maynard Smith. In particular Axelrod, who in the early 1980s wrote a truly seminal book, The Evolution of Cooperation, built around the iterated prisoner’s dilemma. He showed that there is a way out of the social dilemma, which is based on reciprocity. This surprised economists, particularly game theoreticians. He showed that by viewing social dilemmas in the context of a population where people learn from each other, where social learning imitates whatever type of behavior is currently the best, you can place it into a context where cooperative strategies based on reciprocation, like tit for tat, can evolve….(More)”.
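The reciprocity mechanism Sigmund describes can be made concrete with a minimal sketch of the iterated prisoner’s dilemma. The code below uses the standard payoff values from the literature (temptation 5, reward 3, punishment 1, sucker’s payoff 0); the function names and structure are illustrative, not Axelrod’s actual tournament code.

```python
# Toy iterated prisoner's dilemma: tit for tat vs. unconditional defection.
# Payoffs are the conventional values T=5, R=3, P=1, S=0.

PAYOFF = {  # (my_move, their_move) -> my payoff; "C" cooperate, "D" defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then repeat the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return the two strategies' total payoffs over repeated rounds."""
    hist_a, hist_b = [], []   # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Two reciprocators sustain mutual cooperation...
print(play(tit_for_tat, tit_for_tat))      # (30, 30) over 10 rounds
# ...while tit for tat loses only the first round to a defector,
# then punishes defection for the rest of the game.
print(play(tit_for_tat, always_defect))    # (9, 14)
```

In a population setting, the point Axelrod made is that when such games are repeated and players imitate successful neighbors, reciprocating strategies can invade and stabilize cooperation.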

Governing artificial intelligence: ethical, legal, and technical opportunities and challenges


Introduction to the Special Issue of the Philosophical Transactions of the Royal Society by Sandra Wachter, Brent Mittelstadt, Luciano Floridi and Corinne Cath: “Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like urban infrastructure, law enforcement, banking, healthcare and humanitarian aid, to the mundane like dating. AI, including embodied AI in robotics and techniques like machine learning, can improve economic and social welfare and the exercise of human rights. Owing to the proliferation of AI in high-risk areas, the pressure is mounting to design and govern AI to be accountable, fair and transparent. How can this be achieved and through which frameworks? This is one of the central questions addressed in this special issue, in which eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems. The issue also gives a brief overview of recent developments in AI governance and of how much of the agenda for defining AI regulation, ethical frameworks and technical approaches is set, as well as providing some concrete suggestions to further the debate on AI governance…(More)”.

How to Identify Almost Anyone in a Consumer Gene Database


Paul Raeburn at Scientific American: “Researchers are becoming so adept at mining information from genealogical, medical and police genetic databases that it is becoming difficult to protect anyone’s privacy—even those who have never submitted their DNA for analysis.

In one of two separate studies published October 11, researchers report that by testing the 1.28 million samples contained in a consumer gene database, they could match 60 percent of the DNA of the 140 million Americans of European descent to a third cousin or closer relative. That figure, they say in the study published in Science, will soon rise to nearly 100 percent as the number of samples rises in such consumer databases as AncestryDNA and 23andMe.

In the second study, in the journal Cell, a different research group shows that police databases—once thought to be made up of meaningless DNA useful only for matching suspects with crime scene samples—can be cross-linked with genetic databases to connect individuals to their genetic information. “Both of these papers show you how deeply you can reach into a family and a population,” says Erin Murphy, a professor of law at New York University School of Law. Consumers who decide to share DNA with a consumer database are providing information on their parents, children, third cousins they don’t know about—and even a trace that could point to children who don’t exist yet, she says….(More)”.

A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI


Paper by Sandra Wachter and Brent Mittelstadt: “Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Concerns about algorithmic accountability are often actually concerns about the way in which these technologies draw privacy invasive and non-verifiable inferences about us that we cannot predict, understand, or refute.

Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The broad concept of personal data in Europe could be interpreted to include inferences, predictions, and assumptions that refer to or impact on an individual. If seen as personal data, individuals are granted numerous rights under data protection law. However, the legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice.

As we show in this paper, individuals are granted little control and oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively ‘economy class’ personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Art 13-15), rectify (Art 16), delete (Art 17), object to (Art 21), or port (Art 20) personal data are significantly curtailed when it comes to inferences, often requiring a greater balance with controller’s interests (e.g. trade secrets, intellectual property) than would otherwise be the case. Similarly, the GDPR provides insufficient protection against sensitive inferences (Art 9) or remedies to challenge inferences or important decisions based on them (Art 22(3))….

In this paper we argue that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’, meaning inferences that are privacy invasive or reputation damaging and have low verifiability in the sense of being predictive or opinion-based. In cases where algorithms draw ‘high risk inferences’ about individuals, this right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data is a relevant basis to draw inferences; (2) why these inferences are relevant for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged. A right to reasonable inferences must, however, be reconciled with EU jurisprudence and counterbalanced with IP and trade secrets law as well as freedom of expression and Article 16 of the EU Charter of Fundamental Rights: the freedom to conduct a business….(More)”.

Human Rights in the Big Data World


Paper by Francis Kuriakose and Deepa Iyer: “An ethical approach to human rights conceives and evaluates law through its underlying value concerns. This paper examines human rights after the introduction of big data using an ethical approach to rights. First, the central value concerns such as equity, equality, sustainability and security are derived from the history of the digital technological revolution. Then, the properties and characteristics of big data are analyzed to understand emerging value concerns such as accountability, transparency, traceability, explainability and disprovability.

Using these value points, this paper argues that big data calls for two types of evaluations regarding human rights. The first is the reassessment of existing human rights in the digital sphere, predominantly the right to equality and the right to work. The second is the conceptualization of new digital rights such as the right to privacy and the right against propensity-based discrimination. The paper concludes that as we increasingly share the world with intelligent systems, these new values expand and modify the existing human rights paradigm….(More)”.

The law and ethics of big data analytics: A new role for international human rights in the search for global standards


David Nersessian at Business Horizons: “The Economist recently declared that digital information has overtaken oil as the world’s most valuable commodity. Big data technology is inherently global and borderless, yet little international consensus exists over what standards should govern its use. One source of global standards benefitting from considerable international consensus might be used to fill the gap: international human rights law.

This article considers the extent to which international human rights law operates as a legal or ethical constraint on global commercial use of big data technologies. By providing clear baseline standards that apply worldwide, human rights can help shape cultural norms—implemented as ethical practices and global policies and procedures—about what businesses should do with their information technologies. In this way, human rights could play a broad and important role in shaping business thinking about the proper handling of this increasingly valuable commodity in the modern global society…(More)”.

The latest tools for sexual assault victims: Smartphone apps and software


Peter Holley at the Washington Post: “…For much of the past decade, dozens of apps and websites have been created to help survivors of sexual assault electronically record and report such crimes. They are designed to assist an enormous pool of potential victims. The Rape, Abuse & Incest National Network reports that more than 11 percent of all college students — both graduate and undergraduate — experience rape or sexual assault through physical force, violence or incapacitation. Despite the prevalence of such incidents, less than 10 percent of victims on college campuses report their assaults, according to the National Sexual Violence Resource Center.

The apps range from electronic reporting tools such as JDoe to legal guides that provide victims with access to law enforcement and crisis counseling. Others help victims save and share relevant medical information in case of an assault. The app Uask includes a “panic button” that connects users with 911 or allows them to send emergency messages to people with their location.

Since its debut in 2015, Callisto’s software has been adopted by 12 college campuses — including Stanford, the University of Oregon and St. John’s University — and made available to more than 160,000 students, according to the company. Sexual assault survivors who visit Callisto are six times as likely to report, and 15 percent of those survivors have matched with another victim of the same assailant, the company claims.
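The matching figure above reflects Callisto’s “information escrow” design: each report is held privately and surfaced only when a second survivor names the same assailant. The toy sketch below illustrates that idea only; the hashing scheme, class, and method names are illustrative assumptions, not Callisto’s actual protocol, which involves encryption, identity resolution, and legal counsel.

```python
import hashlib
from collections import defaultdict

class MatchingEscrow:
    """Toy information escrow: a report stays sealed until a second report
    identifies the same perpetrator, at which point all matches are flagged.
    (Illustrative only, not Callisto's real system.)"""

    def __init__(self):
        self._reports = defaultdict(list)  # perp digest -> list of report ids

    @staticmethod
    def _digest(perpetrator_id: str) -> str:
        # Hash a normalized identifier so the escrow operator never
        # stores the raw name.
        normalized = perpetrator_id.strip().lower()
        return hashlib.sha256(normalized.encode()).hexdigest()

    def submit(self, report_id: str, perpetrator_id: str) -> list:
        """File a report; return all matching report ids once two or
        more reports name the same perpetrator, else an empty list."""
        key = self._digest(perpetrator_id)
        self._reports[key].append(report_id)
        matches = self._reports[key]
        return list(matches) if len(matches) >= 2 else []

escrow = MatchingEscrow()
print(escrow.submit("report-001", "J. Doe"))   # [] -> sealed, no match yet
print(escrow.submit("report-002", "j. doe "))  # ['report-001', 'report-002']
```

The design choice worth noting is that neither report is disclosed until the match threshold is met, which is meant to lower the cost of being the first to report.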

Peter Cappelli, a professor of management at the Wharton School and director of Wharton’s Center for Human Resources, told NPR that he sees potential problems with survivors “crowdsourcing” their decision to report assaults.

“I don’t think we want to have a standard where the decisions are crowdsourced,” he said. “I think what you want is to tell people [that] the criteria [for whether or not to report] are policy related, not personally related, and you should bring forward anything that fits the criteria, not [based on] whether you feel enough other people have made the complaint or not. We want to sometimes encourage people to do things they might feel uncomfortable about.”…(More)”.

Creative Placemaking and Community Safety: Synthesizing Cross-Cutting Themes


Mark Treskon, Sino Esthappan, Cameron Okeke and Carla Vasquez-Noriega at the Urban Institute: “This report synthesizes findings from four cases where stakeholders are using creative placemaking to improve community safety. It presents cross-cutting themes from these case studies to show how creative placemaking techniques can be used from the conception and design stage through construction and programming, and how they can build community safety by promoting empathy and understanding, influencing law and policy, providing career opportunities, supporting well-being, and advancing the quality of place. It also discusses implementation challenges, and presents evaluative techniques of particular relevance for stakeholders working to understand the effects of these programs….(More)”.

Digital Deceit II: A Policy Agenda to Fight Disinformation on the Internet


We have developed here a broad policy framework to address the digital threat to democracy, building upon basic principles to recommend a set of specific proposals.

Transparency: As citizens, we have the right to know who is trying to influence our political views and how they are doing it. We must have explicit disclosure about the operation of dominant digital media platforms — including:

  • Real-time and archived information about targeted political advertising;
  • Clear accountability for the social impact of automated decision-making;
  • Explicit indicators for the presence of non-human accounts in digital media.

Privacy: As individuals with the right to personal autonomy, we must be given more control over how our data is collected, used, and monetized — especially when it comes to sensitive information that shapes political decision-making. A baseline data privacy law must include:

  • Consumer control over data through stronger rights to access and removal;
  • Transparency for the user about the full extent of data usage, and meaningful consent;
  • Stronger enforcement with resources and authority for agency rule-making.

Competition: As consumers, we must have meaningful options to find, send and receive information over digital media. The rise of dominant digital platforms demonstrates how market structure influences social and political outcomes. A new competition policy agenda should include:

  • Stronger oversight of mergers and acquisitions;
  • Antitrust reform including new enforcement regimes, levies, and essential services regulation;
  • Robust data portability and interoperability between services.

There are no single-solution approaches to the problem of digital disinformation that are likely to change outcomes. … Awareness and education are the first steps toward organizing and action to build a new social contract for digital democracy….(More)”

The role of corporations in addressing AI’s ethical dilemmas


Darrell M. West at Brookings: “In this paper, I examine five AI ethical dilemmas: weapons and military-related applications, law and border enforcement, government surveillance, issues of racial bias, and social credit systems. I discuss how technology companies are handling these issues and the importance of having principles and processes for addressing these concerns. I close by noting ways to strengthen ethics in AI-related corporate decisions.

Briefly, I argue it is important for firms to undertake several steps in order to ensure that AI ethics are taken seriously:

  1. Hire ethicists who work with corporate decisionmakers and software developers
  2. Develop a code of AI ethics that lays out how various issues will be handled
  3. Have an AI review board that regularly addresses corporate ethical questions
  4. Develop AI audit trails that show how various coding decisions have been made
  5. Implement AI training programs so staff operationalizes ethical considerations in their daily work, and
  6. Provide a means for remediation when AI solutions inflict harm or damages on people or organizations….(More)”.