Cops, Docs, and Code: A Dialogue between Big Data in Health Care and Predictive Policing


Paper by I. Glenn Cohen and Harry Graver: “Big data” has become the ubiquitous watchword of this decade. Predictive analytics — the use of electronic algorithms to forecast future events in real time — is something we want to do with big data, and it is interfacing with the law in a myriad of settings: how votes are counted and voter rolls revised, the targeting of taxpayers for auditing, the selection of travelers for more intensive searching, pharmacovigilance, the creation of new drugs and diagnostics, etc.

In this paper, written for the symposium “Future Proofing the Law,” we want to engage in a bit of legal arbitrage; that is, we want to examine which insights from legal analysis of predictive analytics in better-trodden ground — predictive policing — can be useful for understanding relatively newer ground for legal scholars — the use of predictive analytics in health care. To the degree lessons can be learned from this dialogue, we think they go in both directions….(More)”.
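
In both domains the mechanics are similar: most predictive-analytics systems reduce to scoring a set of input features and thresholding the result. Below is a minimal sketch of that idea, purely illustrative, with hypothetical feature names and weights that appear nowhere in the paper:

```python
import math

# Hypothetical weights for a toy risk model; real systems would learn
# these from historical data (arrest records, hospital readmissions, etc.).
WEIGHTS = {"prior_events": 0.8, "age_under_25": 0.5, "neighborhood_rate": 1.2}
BIAS = -2.0

def risk_score(features: dict) -> float:
    """Logistic score in (0, 1): the probability-like output that
    predictive tools typically threshold to flag a person or case."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# A "prediction" is just a threshold on the score.
subject = {"prior_events": 2, "age_under_25": 1, "neighborhood_rate": 0.7}
score = risk_score(subject)
print(f"score={score:.2f}, flagged={score > 0.5}")
```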

New York City moves to create accountability for algorithms


Lauren Kirchner at ArsTechnica: “The algorithms that play increasingly central roles in our lives often emanate from Silicon Valley, but the effort to hold them accountable may have another epicenter: New York City. Last week, the New York City Council unanimously passed a bill to tackle algorithmic discrimination—the first measure of its kind in the country.

The algorithmic accountability bill, waiting to be signed into law by Mayor Bill de Blasio, establishes a task force that will study how city agencies use algorithms to make decisions that affect New Yorkers’ lives, and whether any of the systems appear to discriminate against people based on age, race, religion, gender, sexual orientation, or citizenship status. The task force’s report will also explore how to make these decision-making processes understandable to the public.

The bill’s sponsor, Council Member James Vacca, said he was inspired by ProPublica’s investigation into racially biased algorithms used to assess the criminal risk of defendants….

A previous, more sweeping version of the bill had mandated that city agencies publish the source code of all algorithms being used for “targeting services” or “imposing penalties upon persons or policing” and to make them available for “self-testing” by the public. At a hearing at City Hall in October, representatives from the mayor’s office expressed concerns that this mandate would threaten New Yorkers’ privacy and the government’s cybersecurity.
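
The bill does not prescribe how the task force should test for discrimination, but one standard first-pass check is a disparate-impact ratio comparing selection rates across groups. Below is a hedged sketch of that check; the 0.8 threshold comes from US employment-law practice, not from the New York bill:

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: (group, was_selected) pairs from an algorithmic
    system, e.g., who was flagged for extra scrutiny."""
    totals, selected = Counter(), Counter()
    for group, flag in decisions:
        totals[group] += 1
        selected[group] += int(flag)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Each group's selection rate relative to the reference group.
    A ratio below ~0.8 is a common, rough trigger for closer review."""
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Toy data: group A is flagged twice as often as group B.
data = ([("A", True)] * 40 + [("A", False)] * 60 +
        [("B", True)] * 20 + [("B", False)] * 80)
print(disparate_impact(data, reference_group="A"))  # {'A': 1.0, 'B': 0.5}
```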

The bill was one of two moves the City Council made last week concerning algorithms. On Thursday, the committees on health and public safety held a hearing on the city’s forensic methods, including controversial tools that the chief medical examiner’s office crime lab has used for difficult-to-analyze samples of DNA.

As a ProPublica/New York Times investigation detailed in September, an algorithm created by the lab for complex DNA samples has been called into question by scientific experts and former crime lab employees.

The software, called the Forensic Statistical Tool, or FST, has never been adopted by any other lab in the country….(More)”.
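
FST belongs to the family of probabilistic genotyping tools, which weigh DNA evidence as a likelihood ratio between competing hypotheses about who contributed to a mixture; the dispute is over how those probabilities are modeled, not the ratio itself. Here is a deliberately simplified single-locus sketch with made-up allele frequencies; nothing in it reflects FST's actual model, which also handles dropout, peak heights, and much more:

```python
from itertools import combinations

FREQ = {"a": 0.10, "b": 0.25, "c": 0.30}  # made-up allele frequencies

def p_within(alleles, n_people):
    """P(all 2*n_people alleles of n random people fall inside the
    given set), assuming independent draws (a gross simplification)."""
    return sum(FREQ[a] for a in alleles) ** (2 * n_people)

def p_show_exactly(target, n_people):
    """Inclusion-exclusion: P(n random people jointly show exactly
    the target allele set)."""
    total = 0.0
    for k in range(len(target) + 1):
        for subset in combinations(target, k):
            total += (-1) ** (len(target) - k) * p_within(subset, n_people)
    return total

# Mixture shows alleles {a, b, c}; the suspect's genotype is (a, b).
# Hp: suspect + 1 unknown. The unknown must supply c and stay within
# {a, b, c}: P(within {a,b,c}) minus P(within {a,b} only).
p_hp = p_within(("a", "b", "c"), 1) - p_within(("a", "b"), 1)
# Hd: 2 unknowns jointly show exactly {a, b, c}.
p_hd = p_show_exactly(("a", "b", "c"), 2)

print(f"LR = {p_hp / p_hd:.1f}")  # >1 favors Hp over Hd
```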

The whys of social exclusion: insights from behavioral economics


Paper by Karla Hoff and James Sonam Walsh: “All over the world, people are prevented from participating fully in society through mechanisms that go beyond the structural and institutional barriers identified by rational choice theory (poverty, exclusion by law or force, taste-based and statistical discrimination, and externalities from social networks).

This essay discusses four additional mechanisms that bounded rationality can explain: (i) implicit discrimination, (ii) self-stereotyping and self-censorship, (iii) “fast thinking” adapted to underclass neighborhoods, and (iv) “adaptive preferences” in which an oppressed group views its oppression as natural or even preferred.

Stable institutions have cognitive foundations — concepts, categories, social identities, and worldviews — that function like lenses through which individuals see themselves and the world. Abolishing or reforming a discriminatory institution may have little effect on these lenses. Groups previously discriminated against by law or policy may remain excluded through habits of the mind. Behavioral economics recognizes forces of social exclusion left out of rational choice theory, and identifies ways to overcome them. Some interventions have had very consequential impacts….(More)”.

Accountability of AI Under the Law: The Role of Explanation


Paper by Finale Doshi-Velez and Mason Kortz: “The ubiquity of systems using artificial intelligence or “AI” has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before—applications range from clinical decision support to autonomous driving and predictive policing. That said, our AIs continue to lag in common sense reasoning [McCarthy, 1960], and thus there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014]. How can we take advantage of what AI systems have to offer, while also holding them accountable?

In this work, we focus on one tool: explanation. The question of a legal right to explanation from AI systems was recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017a], so thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems.

Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find that there exists great variation around when explanation is demanded, but there also exist important consistencies: when demanding explanation from humans, what we typically want to know is whether and how certain input factors affected the final decision or outcome.

These consistencies allow us to list the technical considerations that must be addressed if we want AI systems that can provide the kinds of explanations currently required of humans under the law. Contrary to the popular image of AI systems as indecipherable black boxes, we find that this level of explanation should generally be technically feasible, though it may sometimes be practically onerous—certain aspects of explanation may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that, for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard….(More)”
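
The kind of explanation the authors describe, whether and how an input factor affected an outcome, maps naturally onto perturbation-style techniques that need no access to a model's internals. A minimal sketch of that idea follows; the toy model and feature names are invented for illustration, not drawn from the paper:

```python
def explain_by_perturbation(model, inputs, baseline=0.0):
    """For each input factor, report how the output changes when that
    factor is replaced by a neutral baseline value -- a crude answer to
    'did this factor affect the decision, and in which direction?'"""
    original = model(inputs)
    effects = {}
    for name in inputs:
        perturbed = dict(inputs, **{name: baseline})
        effects[name] = original - model(perturbed)
    return original, effects

# An invented black-box "credit" model; a real one would be learned.
def toy_model(x):
    return 0.6 * x["income"] - 0.9 * x["missed_payments"] + 0.2 * x["tenure"]

score, effects = explain_by_perturbation(
    toy_model, {"income": 3.0, "missed_payments": 2.0, "tenure": 5.0})
print(score)    # 1.0
print(effects)  # roughly {'income': 1.8, 'missed_payments': -1.8, 'tenure': 1.0}
```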

Democracy in the digital age: digital agora or dystopia


Paper by Peter Parycek, Bettina Rinnerbauer, and Judith Schossböck in the International Journal of Electronic Governance: “Information and communication technologies (ICTs) affect democracy and the rule of law. Digitalisation has been perceived as a stimulus towards a more participative society or as a support to decision making, but not without criticism. The authors draw on a legal review, case studies and quantitative survey data about citizens’ views on transparency and participation in the German-speaking region to summarise selected discourses on democratisation via ICTs and the dominant critiques. The paper concludes with an outlook on contemporary questions of digital democracy, caught in the dialectic between protecting citizens’ rights and citizen control. It is proposed that prospective e-participation projects will concentrate on processes of innovation and creativity as opposed to participation rates. Future investigations should evaluate the contexts in which a more data-driven, automated form of decision making could be supported, and collect indicators for where to draw the line between the protection and control of citizens, including research on specific tools…(More)”.

What Are Data? A Categorization of the Data Sensitivity Spectrum


Paper by John M.M. Rumbold and Barbara K. Pierscionek in Big Data Research: “The definition of data might at first glance seem prosaic, but formulating a definitive and useful definition is surprisingly difficult. This question is important because of the protection given to data in law and ethics. Healthcare data are universally considered sensitive (and confidential), so it might seem that the categorisation of less sensitive data is relatively unimportant for medical data research. This paper will explore the arguments that this is not necessarily the case, and the relevance of recognising this.

The categorization of data and information requires re-evaluation in the age of Big Data in order to ensure that the appropriate protections are given to different types of data. The aggregation of large amounts of data requires an assessment of the harms and benefits that pertain to large datasets linked together, rather than simply assessing each datum or dataset in isolation. Big Data produce new data via inferences, and this must be recognized in ethical assessments. We propose a schema for a granular assessment of data categories. The use of schemata such as this will assist decision-making by providing research ethics committees and information governance bodies with guidance about the relative sensitivities of data. This will ensure that appropriate and proportionate safeguards are provided for data research subjects and reduce inconsistency in decision making…(More)”.
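
The authors' schema is best read in the original, but its underlying idea, that sensitivity attaches to categories of data and rises when datasets are linked or inferences drawn, can be sketched as a simple data structure. Everything below (category names, levels, the escalation rule) is a hypothetical illustration, not the paper's actual schema:

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    LOW = 1
    MODERATE = 2
    HIGH = 3  # e.g., identifiable health data

# Hypothetical baseline sensitivities for data categories in isolation.
BASELINE = {
    "postcode": Sensitivity.LOW,
    "purchase_history": Sensitivity.MODERATE,
    "genomic": Sensitivity.HIGH,
}

def linked_sensitivity(categories):
    """Sensitivity of a linked dataset: at least the maximum of its
    parts, escalated one level when linkage enables new inferences --
    the Big Data effect the paper highlights."""
    base = max(BASELINE[c] for c in categories)
    if len(categories) > 1 and base < Sensitivity.HIGH:
        return Sensitivity(base + 1)
    return base

print(linked_sensitivity({"postcode"}).name)                      # LOW
print(linked_sensitivity({"postcode", "purchase_history"}).name)  # HIGH
```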

Mixed Messages? The Limits of Automated Social Media Content Analysis


CDT Paper by Natasha Duarte, Emma Llanso and Anna Loup: “Governments and companies are turning to automated tools to make sense of what people post on social media, for everything ranging from hate speech detection to law enforcement investigations. Policymakers routinely call for social media companies to identify and take down hate speech, terrorist propaganda, harassment, “fake news” or disinformation, and other forms of problematic speech. Other policy proposals have focused on mining social media to inform law enforcement and immigration decisions. But these proposals wrongly assume that automated technology can accomplish on a large scale the kind of nuanced analysis that humans can accomplish on a small scale.

This paper explains the capabilities and limitations of tools for analyzing the text of social media posts and other online content. It is intended to help policymakers understand and evaluate available tools and the potential consequences of using them to carry out government policies. This paper focuses specifically on the use of natural language processing (NLP) tools for analyzing the text of social media posts. We explain five limitations of these tools that caution against relying on them to decide who gets to speak, who gets admitted into the country, and other critical determinations. This paper concludes with recommendations for policymakers and developers, including a set of questions to guide policymakers’ evaluation of available tools….(More)”.
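
A concrete way to see the limitation: the simplest automated filters match surface features of text, so quotation, counter-speech, and other context-dependent uses get misread. The sketch below is deliberately naive and its word list and examples are invented; production systems use statistical classifiers rather than keyword lists, but they inherit analogous failure modes:

```python
# A deliberately naive filter: flag any post containing a blocked term.
BLOCKED_TERMS = {"vermin", "infestation"}  # hypothetical word list

def naive_flag(post: str) -> bool:
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    return bool(words & BLOCKED_TERMS)

posts = [
    "These people are vermin and should leave.",       # abusive: flagged
    "Calling refugees 'vermin' is dehumanizing.",      # counter-speech: also flagged
    "They should all be sent back where they belong.", # hostile, no keyword: missed
]
for post in posts:
    print(naive_flag(post), "|", post)
```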

Sharing is Daring: An Experiment on Consent, Chilling Effects and a Salient Privacy Nudge


Paper by Yoan Hermstrüwer and Stephan Dickert in the International Review of Law and Economics: “Privacy law rests on the assumption that government surveillance may increase the general level of conformity and thus generate a chilling effect. In a study that combines elements of a lab and a field experiment, we show that salient and incentivized consent options are sufficient to trigger this behavioral effect. Salient ex ante consent options may lure people into giving up their privacy and increase their compliance with social norms – even when the only immediate risk of sharing information is mere publicity on a Google website. A right to be forgotten (right to deletion), however, seems to reduce neither privacy valuations nor chilling effects. In spite of low deletion costs, people tend to stick with a retention default. The study suggests that consent architectures may play out on social conformity rather than on consent choices and privacy valuations. Salient notice and consent options may not merely empower users to make an informed consent decision. Instead, they can trigger the very effects that privacy law intends to curb….(More)”.

The world watches Reykjavik’s digital democracy experiment


Joshua Jacobs at the Financial Times: “When Iceland’s banks collapsed and mistrust of politicians soared during the 2008 financial crisis, two programmers thought software could help salvage the country’s democracy. They created Your Priorities, a platform that allows citizens to suggest laws, policies and budget measures, which can then be voted up or down by other users.

“We thought: if we manage somehow to connect regular citizens with government, then we create a dialogue that will ultimately result in better decisions,” says Robert Bjarnason, chief executive of Citizens Foundation, the company that created Your Priorities. Mr Bjarnason and his co-founder, Gunnar Grimsson, used the software to create a policy website called Better Reykjavik just before the city’s 2010 elections.

Jon Gnarr, Reykjavik’s then mayor, encouraged people to use the platform to give him policy suggestions and he committed to funding the top 10 ideas each month. Seven years on, Better Reykjavik has some 20,000 users and 769 of their ideas have been approved by the city council. These include increasing financial support for the city’s homeless, converting a former power station into a youth centre, introducing gender-neutral toilets and naming a street after Darth Vader, the character from Star Wars.
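
The platform's core loop, as the article describes it (citizens submit ideas, others vote them up or down, the top ideas each month get funded), amounts to a tiny data model. Here is a hedged sketch of how such a ranking might work; the field names, the net-vote rule, and the vote counts are assumptions for illustration, not Your Priorities' actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    title: str
    up: int = 0
    down: int = 0

    @property
    def net(self) -> int:
        return self.up - self.down

def monthly_top(ideas, n=10):
    """Rank the month's ideas by net votes and return the n winners --
    the slice the mayor committed to funding each month."""
    return sorted(ideas, key=lambda idea: idea.net, reverse=True)[:n]

# Titles are from the article; the vote counts are invented.
ideas = [
    Idea("Youth centre in the old power station", up=412, down=30),
    Idea("Gender-neutral toilets", up=388, down=41),
    Idea("Name a street after Darth Vader", up=520, down=95),
]
for idea in monthly_top(ideas, n=2):
    print(idea.net, idea.title)
```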

Your Priorities has also been tested in other countries, including Estonia, Australia, Scotland, Wales, Norway and Malta. In Estonia, seven proposals have become law, including one limiting donations from companies to political parties and another that requires the national parliament to debate any proposal with more than 1,000 votes.

The software is part of a global trend for people to seek more influence over their politicians. In Australia, for example, the MiVote app allows people to vote on issues being debated in parliament.

…At times, the portal can become a “crazy sounding board,” Mr Svansson concedes. The Reykjavik council has put in quality controls to filter out hare-brained proposals, although Mr Bjarnason says he has had to remove inappropriate content only a handful of times….During Iceland’s parliamentary elections last month, 10 out of 11 political parties published their election pitches on Your Priorities, allowing voters to comment on policies and propose new ones. This interactive manifesto website attracted 22,000 visitors.

Testing the efficacy of platforms such as Your Priorities is perhaps easier in Reykjavik — population 123,000 — than in larger cities. Even so, integrating the site into the council’s policymaking apparatus has been slower than Mr Bjarnason had foreseen. “Everything takes a long time and sometimes it is like you are swimming in syrup,” he says. “Still, it has been a really good experience working with the city.”…(More).

Use of the websites of parliaments to promote citizen deliberation in the process of public decision-making. Comparative study of ten countries


Santiago Giraldo Luquet in Communications and Society: “This study presents longitudinal research (2010-2015) on 10 countries – 5 European (France, United Kingdom, Sweden, Italy and Spain) and 5 American (Argentina, Ecuador, Chile, Colombia and the USA). The aim is to compare how parliaments use their official websites to promote political participation among the citizenry. The study focuses on the deliberation axis (Macintosh, 2004, Hagen, 2000, Vedel, 2003, 2007) and on the way representative institutions define a digital strategy to create an online public sphere.

Starting from the recognition of Web 2.0 as a sphere of debate and a site of reconfiguration of the traditional – and utopian – Greek agora, the study adopts the ‘deliberative’ axis of political action to evaluate, qualitatively and quantitatively – using a content analysis methodology – the use that the legislative bodies of the analysed countries make of Web 2.0 tools. The article shows how, and to what extent, parliaments use Web 2.0 tools – integrated into their web pages – as a scenario for deliberation in the different legislative processes of the political systems examined. Finally, the comparative results show the main differences and similarities between the countries, as well as a tendency among the representative institutions sampled to reduce the deliberation tools they offer…(More)”.
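
In content-analysis terms, the core of such a study is a coding sheet: for each parliament's website, record which deliberation tools are present, then aggregate across countries. Below is a hedged sketch of that tabulation; the tool list and sample coding are invented for illustration, not the study's data:

```python
# Hypothetical coding sheet: which deliberative Web 2.0 tools each
# parliament's website offers (1 = present, 0 = absent).
TOOLS = ["forum", "comments_on_bills", "e_petitions", "live_chat"]
CODING = {
    "United Kingdom": [1, 1, 1, 0],
    "Spain":          [1, 0, 1, 0],
    "Chile":          [0, 1, 1, 1],
}

def tool_frequency(coding):
    """Share of parliaments offering each tool: the cross-country
    comparison at the heart of the study's quantitative side."""
    n = len(coding)
    return {tool: sum(row[i] for row in coding.values()) / n
            for i, tool in enumerate(TOOLS)}

print(tool_frequency(CODING))
# Per-parliament count of deliberation tools on offer:
print({country: sum(row) for country, row in CODING.items()})
```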