Slave to the Algorithm? Why a ‘Right to Explanation’ is Probably Not the Remedy You are Looking for


Paper by Lilian Edwards and Michael Veale: “Algorithms, particularly of the machine learning (ML) variety, are increasingly consequential to individuals’ lives but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively presents as a means to “open the black box”, hence allowing individual challenge and redress, as well as possibilities to foster accountability of ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive.

However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as “meaningful information about the logic of processing” — is unlikely to be provided by the kind of ML “explanations” computer scientists have been developing. ML explanations are restricted both by the type of explanation sought, the multi-dimensionality of the domain and the type of user seeking an explanation. However (section 5) “subject-centric” explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations in dodging developers’ worries of IP or trade secrets disclosure.

As an interim conclusion then, while convinced that recent research in ML explanations shows promise, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy”. However, in our final section, we argue that other parts of the GDPR related (i) to other individual rights including the right to erasure (“right to be forgotten”) and the right to data portability and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds of building a better, more respectful and more user-friendly algorithmic society….(More)”
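The “subject-centric” explanations (SCEs) the abstract mentions can be made concrete with a small sketch. The paper does not prescribe an implementation, so the following is one plausible local-surrogate approach in the spirit of LIME: it treats the model as a black box (hence “pedagogical” rather than decompositional) and explains a single query by fitting an interpretable model to the black box’s behaviour in a region around that query. All function names and parameters below are illustrative assumptions, not the authors’ method.

```python
# A minimal "subject-centric" explanation sketch in the spirit of LIME:
# explain one query by fitting an interpretable surrogate to the black
# box's behaviour in a small region around that query. Pedagogical, not
# decompositional: only inputs and outputs are used, never model internals.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(black_box, x_query, n_samples=500, scale=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Sample perturbations in a neighbourhood of the query point.
    X = x_query + rng.normal(0.0, scale, size=(n_samples, x_query.size))
    y = black_box(X)  # black-box scores over the local region
    # 2. Weight samples by proximity so the surrogate stays local.
    w = np.exp(-np.linalg.norm(X - x_query, axis=1) ** 2 / (2 * scale ** 2))
    # 3. Fit an interpretable (linear) surrogate on the weighted samples.
    surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=w)
    # Coefficients indicate which features drive the score near this query.
    return surrogate.coef_

# Toy black box: near the query (1, 2), feature 2 dominates the output.
print(explain_locally(lambda X: X[:, 0] + 3 * X[:, 1] ** 2,
                      x_query=np.array([1.0, 2.0])))
```

Because the surrogate only describes the model’s input-output behaviour around one query, it can support interactive exploration without disclosing the underlying model, which is the IP/trade-secrets point the authors make.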

The law and big data


Article by Teppo Felin, Caryn Devins, Stuart Kauffman, and Roger Koppl: “In this article we critically examine the use of Big Data in the legal system. Big Data is driving a trend towards behavioral optimization and “personalized law,” in which legal decisions and rules are optimized for best outcomes and where law is tailored to individual consumers based on analysis of past data. Big Data, however, has serious limitations and dangers when applied in the legal context. Advocates of Big Data make theoretically problematic assumptions about the objectivity of data and scientific observation. Law is always theory-laden. Although Big Data strives to be objective, law and data have multiple possible meanings and uses and thus require theory and interpretation in order to be applied. Further, the meanings and uses of law and data are indefinite and continually evolving in ways that cannot be captured or predicted by Big Data.

Due to these limitations, the use of Big Data will likely generate unintended consequences in the legal system. Large-scale use of Big Data will create distortions that adversely influence legal decision-making, causing irrational herding behaviors in the law. The centralized nature of the collection and application of Big Data also poses serious threats to legal evolution and democratic accountability. Furthermore, its focus on behavioral optimization necessarily restricts and even eliminates the local variation and heterogeneity that makes the legal system adaptive. In all, though Big Data has legitimate uses, this article cautions against using Big Data to replace independent legal judgment….(More)”

We use big data to sentence criminals. But can the algorithms really tell us what we need to know?


Article at the Conversation: “In 2013, a man named Eric L. Loomis was sentenced for eluding police and driving a car without the owner’s consent.

When the judge weighed Loomis’ sentence, he considered an array of evidence, including the results of an automated risk assessment tool called COMPAS. Loomis’ COMPAS score indicated he was at a “high risk” of committing new crimes. Considering this prediction, the judge sentenced him to seven years.

Loomis challenged his sentence, arguing it was unfair to use the data-driven score against him. The U.S. Supreme Court now must consider whether to hear his case – and perhaps settle a nationwide debate over whether it’s appropriate for any court to use these tools when sentencing criminals.

Today, judges across the U.S. use risk assessment tools like COMPAS in sentencing decisions. In at least 10 states, these tools are a formal part of the sentencing process. Elsewhere, judges informally refer to them for guidance.

I have studied the legal and scientific bases for risk assessments. The more I investigate the tools, the more my caution about them grows.

The scientific reality is that these risk assessment tools cannot do what advocates claim. The algorithms cannot actually make predictions about future risk for the individual defendants being sentenced….

Algorithms such as COMPAS cannot make predictions about individual defendants, because data-driven risk tools are based on group statistics. This creates an issue that academics sometimes call the “group-to-individual” or G2i problem.

Scientists study groups. But the law sentences the individual. Consider the disconnect between science and the law here.

The algorithms in risk assessment tools commonly assign specific points to different factors. The points are totaled. The total is then often translated to a risk bin, such as low or high risk. Typically, more points means a higher risk of recidivism.

Say a score of 6 points out of 10 on a certain tool is considered “high risk.” In the historical groups studied, perhaps 50 percent of people with a score of 6 points did reoffend.

Thus, one might be inclined to think that a new offender who also scores 6 points is at a 50 percent risk of reoffending. But that would be incorrect.

It may be the case that half of those with a score of 6 in the historical groups studied would later reoffend. However, the tool is unable to select which of the offenders with 6 points will reoffend and which will go on to lead productive lives.
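To make this “group-to-individual” point concrete, here is a schematic sketch of how a points-based tool works. Real instruments like COMPAS are proprietary, so every factor, weight, and cut-off below is invented for illustration; this is not any actual tool’s formula.

```python
# Schematic sketch of a points-based risk tool. All factors, weights and
# cut-offs are invented for illustration; real tools such as COMPAS are
# proprietary, and this is not their actual formula.
def risk_points(prior_convictions: int, age_at_first_arrest: int,
                employed: bool) -> int:
    points = 0
    if prior_convictions >= 3:
        points += 3
    if age_at_first_arrest < 18:
        points += 2
    if not employed:
        points += 1
    return points

def risk_bin(points: int, cutoff: int = 6) -> str:
    # The bin reflects a *group* base rate from historical data: if 50%
    # of past offenders scoring >= 6 reoffended, every new offender
    # scoring 6 lands in "high". The tool cannot say which half of that
    # group an individual belongs to (the G2i problem).
    return "high" if points >= cutoff else "low"

print(risk_bin(risk_points(prior_convictions=4,
                           age_at_first_arrest=17,
                           employed=False)))  # "high", a group label only
```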

The studies of factors associated with reoffending are not causation studies. They can tell only which factors are correlated with new crimes. Individuals retain some measure of free will to decide to break the law again, or not.

These issues may explain why risk tools often have significant false positive rates. The predictions made by the most popular risk tools for violence and sex offending have been shown to get it wrong for some groups over 50 percent of the time.

A ProPublica investigation found that COMPAS, the tool used in Loomis’ case, is burdened by large error rates. For example, COMPAS failed to predict reoffending in one study at a 37 percent rate. The company that makes COMPAS has disputed the study’s methodology….
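Error rates like these come from straightforward confusion-matrix arithmetic. The counts below are invented, chosen only so the resulting rates resemble the figures in the text; they are not ProPublica’s data.

```python
# Illustrative confusion-matrix arithmetic (all counts invented): how
# error rates like those reported for risk tools are computed.
flagged_reoffended     = 300   # true positives: flagged high risk, reoffended
flagged_did_not        = 350   # false positives: flagged high risk, did not
not_flagged_reoffended = 185   # false negatives: rated low risk, reoffended
not_flagged_did_not    = 665   # true negatives

# Share of "high risk" calls that turned out wrong:
fdr = flagged_did_not / (flagged_reoffended + flagged_did_not)
# Share of actual reoffenders the tool missed, the kind of figure
# behind the 37 percent number cited above:
fnr = not_flagged_reoffended / (not_flagged_reoffended + flagged_reoffended)

print(f"wrong among those flagged: {fdr:.0%}")   # ~54%
print(f"reoffenders missed:       {fnr:.0%}")   # ~38%
```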

There are also a host of thorny issues with risk assessment tools incorporating, either directly or indirectly, sociodemographic variables, such as gender, race and social class. Law professor Anupam Chander has named it the problem of the “racist algorithm.”

Big data may have its allure. But, data-driven tools cannot make the individual predictions that sentencing decisions require. The Supreme Court might helpfully opine on these legal and scientific issues by deciding to hear the Loomis case…(More)”.

Mapping the invisible: Street View cars add air pollution sensors


Environment at Google: “There are 1.3 million miles of natural gas distribution pipelines in the U.S. These pipelines exist pretty much everywhere that people do, and when they leak, the escaping methane — the main ingredient in natural gas — is a potent greenhouse gas, with 84 times the short-term warming effect of carbon dioxide. These leaks can be time-consuming to identify and measure using existing technologies. Utilities are required by law to quickly fix any leaks that are deemed a safety threat, but thousands of others can — and often do — go on leaking for months or years.

To help gas utilities, regulators, and others understand the scale of the challenge and help prioritize the most cost-effective solutions, the Environmental Defense Fund (EDF) worked with Joe von Fischer, a scientist at Colorado State University, to develop technology to detect and measure methane concentrations from a moving vehicle. Initial tests were promising, and EDF decided to expand the effort to more locations.

That’s when the organization reached out to Google. The project needed to scale, and we had the infrastructure to make it happen: computing power, secure data storage, and, most important, a fleet of Street View cars. These vehicles, equipped with high-precision GPS, were already driving around pretty much everywhere, capturing 360-degree photos for Google Maps; maybe they could measure methane while they were at it. The hypothesis, says Karin Tuxen-Bettman of Google Earth Outreach, was that “we had the potential to turn our Street View fleet into an environmental sensing platform.”

Street View cars make at least 2 trips around a given area in order to capture good air quality data. An intake tube on the front bumper collects air samples, which are then processed by a methane analyzer in the trunk. Finally, the data is sent to the Google Cloud for analysis and integration into a map showing the size and location of methane leaks. Since the trial began in 2012, EDF has built methane maps for 11 cities and found more than 5,500 leaks. The results range from one leak for every mile driven (sorry, Bostonians) to one every 200 miles (congrats, Indianapolis, for replacing all those corrosive steel and iron pipes with plastic).
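Neither EDF nor Google describes the leak-detection analysis in this piece, but the core step is easy to imagine: flag GPS-tagged methane readings that spike well above the local background level. The sketch below is a toy version under that assumption; the readings and the threshold are invented.

```python
# Toy sketch of one plausible analysis step (the article does not describe
# the actual EDF/Google pipeline): flag GPS-tagged methane readings that
# spike above the local background concentration.
import statistics

# (lat, lon, methane ppm) samples from one drive; all values invented.
readings = [
    (42.3601, -71.0589, 1.9),
    (42.3605, -71.0593, 2.0),
    (42.3609, -71.0597, 6.4),   # a spike: candidate leak
    (42.3613, -71.0601, 2.1),
    (42.3617, -71.0605, 1.8),
]

background = statistics.median(ppm for _, _, ppm in readings)
candidates = [(lat, lon, ppm) for lat, lon, ppm in readings
              if ppm > 2.5 * background]      # threshold is illustrative

for lat, lon, ppm in candidates:
    print(f"possible leak near ({lat}, {lon}): {ppm} ppm "
          f"vs ~{background} ppm background")
```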

All of us can go on our smartphone and get the weather. But what if you could scroll down and see what the air quality is on the street where you’re walking?…

This promising start inspired the team to take the next step and explore using Street View cars to measure overall air quality. For years, Google has worked on measuring indoor environmental quality across company offices with Aclima, which builds environmental sensor networks. In 2014, we expanded the partnership to the outside world, equipping several more Street View cars with its ‘Environmental Intelligence’ (Ei) mobile platform, including scientific-grade analyzers and arrays of small-scale, low-cost sensors to measure pollutants, including particulate matter, NO2, CO2, black carbon, and more. The new project began with a pilot in Denver, and we’ll finish mapping cities in three regions of California by the end of 2016. And today the system is delivering reliable data that corresponds to the U.S. Environmental Protection Agency’s stationary measurement network….

The project began with a few cars, but Aclima’s mobile platform, which has already produced one of the world’s largest data sets on air quality, could also be expanded via deployment on vehicles like buses and mail trucks, on the way to creating a street-level pollution map. This hyper-local data could help people make more informed choices about things like when to let their kids play outside and which changes to advocate for to make their communities healthier….(More)”.

UK government watchdog examining political use of data analytics


“Given the big data revolution, it is understandable that political campaigns are exploring the potential of advanced data analysis tools to help win votes,” Elizabeth Denham, the information commissioner, writes on the ICO’s blog. However, “the public have the right to expect” that this takes place in accordance with existing data protection laws, she adds.

Political parties are able to use Facebook to target voters with different messages, tailoring the advert to recipients based on their demographic. In the 2015 UK general election, the Conservative party spent £1.2 million on Facebook campaigns and the Labour party £16,000. It is expected that Labour will vastly increase that spend for the general election on 8 June….

Political parties and third-party companies are allowed to collect data from sites like Facebook and Twitter, which lets them tailor these ads to broadly target different demographics. However, if those ads target identifiable individuals, they run afoul of the law….(More)”

How to increase public support for policy: understanding citizens’ perspectives


Peter van Wijck and Bert Niemeijer at LSE Blog: “To increase public support, it is essential to anticipate how citizens will react to policy. But how to do that? Our framework combines insights from scenario planning and frame analysis. Scenario planning starts from the premise that we cannot predict the future. We can, however, imagine different plausible scenarios, different plausible future developments. Scenarios can be used to ask a ‘what if’ question. If a certain scenario were to develop, what policy measures would be required? By the same token, scenarios may be used as test conditions for policy measures. Kees van der Heijden calls this ‘wind tunnelling’.

Frame-analysis is about how we interpret the world around us. Frames are mental structures that shape the way we see the world. Based on a frame, an individual perceives societal problems, attributes these problems to causes, and forms ideas on instruments to address the problems. Our central idea is that policy-makers may use citizens’ frames to reflect on their policy frame. Citizens’ frames may, in other words, be used as test conditions in a wind tunnel. The line of reasoning is summarized in the figure.

Policy frames versus citizens’ frames


The starting-points of the figure are the policy frame and the citizens’ frames. Arrows 1 and 2 indicate that citizens’ reactions depend on both frames. A citizen can be expected to respond positively in case of frame alignment. Negative responses can be expected if policy-makers do not address “the real problems”, do not attribute problems to “the real causes”, or do not select “adequate instruments”. If frames do not align, policy-makers are faced with the question of how to deal with it (arrow 3). First, they may reconsider the policy frame (arrow 4). That is, are there reasons to reconsider the definition of problems, the attribution to causes, and/or the selection of instruments? Such a “reframing” effectively amounts to the formulation of a new (or adjusted) policy frame. Second, policy-makers may try to influence citizens’ frames (arrow 5). This may lead to a change in what citizens define as problems, what they consider to be the causes of problems and what they consider to be adequate instruments to deal with the problems.
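The authors present this as a conceptual figure rather than a formal model, but the alignment check behind arrows 1 and 2 can be sketched as a data structure. Everything below, including the subset test used for “alignment” and the example frames, is an illustrative simplification, not the authors’ formalism.

```python
# Minimal data-structure sketch of the frame-alignment idea; names and
# the subset test are illustrative simplifications.
from dataclasses import dataclass

@dataclass(frozen=True)
class Frame:
    problems: frozenset      # what is seen as the problem
    causes: frozenset        # what the problem is attributed to
    instruments: frozenset   # what counts as an adequate remedy

def aligned(policy: Frame, citizen: Frame) -> bool:
    # Arrows 1 and 2: a positive reaction is expected only when the policy
    # frame addresses the problems, causes and instruments citizens see.
    return (citizen.problems <= policy.problems
            and citizen.causes <= policy.causes
            and citizen.instruments <= policy.instruments)

policy = Frame(frozenset({"street crime"}), frozenset({"poverty"}),
               frozenset({"community policing"}))
citizen = Frame(frozenset({"street crime"}), frozenset({"policing gaps"}),
                frozenset({"more patrols"}))
# False: reframe the policy (arrow 4) or try to shift citizens' frames (arrow 5).
print(aligned(policy, citizen))
```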

Two cases: support for victims and confidence in the judiciary

To apply our framework in practice, we developed a three-step method. Firstly, we reconstruct the policy frame. Here we investigate what policy-makers see as social problems, what they assume to be the causes of these problems, and what they consider to be appropriate instruments to address these problems. Secondly, we reconstruct contrasting citizens’ frames. Here we use focus groups, where contrasting groups are selected based on a segmentation model. Finally, we engage in a “wind tunnelling exercise”. We present the citizens’ frames to policy-makers. And we ask them to reflect on the question of how the different groups can be expected to react to the policy measures selected by the policy-makers. In fact, this step is what Schön and Rein called “frame reflection”….(More)”.

Dubai Data Releases Findings of ‘The Dubai Data Economic Impact Report’


Press Release: “the ‘Dubai Data Economic Impact Report’…provides the Dubai Government with insights into the potential economic impacts of opening and sharing data and includes a methodology for more rigorous measurement of the economic impacts of open and shared data, to allow regular assessment of the actual impacts in the future.

The study estimates that the opening and sharing of government and private sector data will potentially add a total of AED 10.4 billion in Gross Value Added (GVA) impact to Dubai’s economy annually by 2021. Opening government data alone will result in a GVA impact of AED 6.6 billion annually as of 2021. This is equivalent to approximately 0.8% to 1.2% of Dubai’s forecasted GDP for 2021. Transport, storage, and communications are set to be the highest contributor to this potential GVA of opening government data, accounting for 27.8% (or AED 1.85 bn) of the total amount, followed by public administration (23.6% or AED 1.57 bn); wholesale, retail, restaurants, and hotels (13.7% or AED 908 million); real estate (9.6% or AED 639 million); and professional services (8.9% or AED 588 million). Finance and insurance, meanwhile, is calculated to make up 6.5% (AED 433 million) of the GVA, while mining, manufacturing, and utilities (6% or AED 395 million); construction (3.5% or AED 230 million); and entertainment and arts (0.4% or AED 27 million) account for the remaining proportion.
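As a quick sanity check on the press release’s arithmetic, the quoted sector shares do sum to 100% and to roughly the AED 6.6 billion government-data figure. The sketch below uses only the numbers given above; the short sector labels are ours.

```python
# Cross-checking the quoted sector breakdown of the government-data GVA
# impact: shares should sum to ~100% and ~AED 6.6 bn. Figures are taken
# from the press release; the short labels are ours.
sectors = {
    "transport, storage, communications": (27.8, 1.85),
    "public administration":              (23.6, 1.57),
    "wholesale, retail, hotels":          (13.7, 0.908),
    "real estate":                        ( 9.6, 0.639),
    "professional services":              ( 8.9, 0.588),
    "finance and insurance":              ( 6.5, 0.433),
    "mining, manufacturing, utilities":   ( 6.0, 0.395),
    "construction":                       ( 3.5, 0.230),
    "entertainment and arts":             ( 0.4, 0.027),
}
total_pct = sum(pct for pct, _ in sectors.values())
total_aed = sum(aed for _, aed in sectors.values())
print(f"{total_pct:.1f}% of the total")  # 100.0%
print(f"AED {total_aed:.2f} bn")         # ~6.64 bn, i.e. the 6.6 bn figure
```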

This economic impact will be realized through the publication, exchange, use and reuse of Dubai data. The Dubai Data Law of 2015 mandates that data providers publish open data and exchange shared data. It defines open data as any Dubai data which is published and can be downloaded, used and re-used without restrictions by all types of users, while shared data is the data that has been classified as either confidential, sensitive, or secret, and can only be accessed by other government entities or by other authorised persons. The law pertains to local government entities, federal government entities which have any data relating to the emirate, individuals and companies who produce, own, disseminate, or exchange any data relating to the emirate. It aims to realise Dubai’s vision of transforming itself into a smart city, manage Dubai Data in accordance with a clear and specific methodology that is consistent with international best practices, integrate the services provided by federal and local government entities, and optimise the use of the data available to data providers, among other objectives….

The study identifies several stakeholders involved in the use and reuse of open and shared data. These stakeholders – some of whom qualify as “data creators” – play an important role in the process of generating the economic impacts. They include: data enrichers, who combine open data with their own sources and/or knowledge; data enablers, who do not profit directly from the data itself but from the platforms and technologies it is provided on; data developers, who design and build Application Programming Interfaces (APIs); and data aggregators, who collect and pool data, providing it to other stakeholders….(More)”

Updated N.Y.P.D. Anti-Crime System to Ask: ‘How We Doing?’


It was a policing invention with a futuristic-sounding name — CompStat — when the New York Police Department introduced it as a management system for fighting crime in an era of much higher violence in the 1990s. Police departments around the country, and the world, adapted its system of mapping muggings, robberies and other crimes; measuring police activity; and holding local commanders accountable.

Now, a quarter-century later, it is getting a broad reimagining and being brought into the mobile age. Moving away from simple stats and figures, CompStat is getting touchy-feely. It’s going to ask New Yorkers — via thousands of questions on their phones — “How are you feeling?” and “How are we, the police, doing?”

Whether this new approach will be mimicked elsewhere is still unknown, but as is the case with almost all new tactics in the N.Y.P.D. — the largest municipal police force in the United States by far — it will be closely watched. Nor is it clear if New Yorkers will embrace this approach, reject it as intrusive or simply be annoyed by it.

The system, using location technology, sends out short sets of questions to smartphones along three themes: Do you feel safe in your neighborhood? Do you trust the police? Are you confident in the New York Police Department?

The questions stream out every day, around the clock, on 50,000 different smartphone applications and present themselves on screens as eight-second surveys.
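The article gives no technical detail on how the responses are analyzed. A toy sketch of the obvious first step, averaging geo-tagged answers per area and survey theme, might look like this; all precinct names, themes, and responses below are invented.

```python
# Toy sketch (the article gives no technical detail): turn geo-tagged
# answers to the three survey themes into per-precinct averages.
# All precincts, themes and responses are invented.
from collections import defaultdict

# (precinct, theme, answer on a 1-5 scale)
responses = [
    ("precinct_19", "safety", 4), ("precinct_19", "trust", 2),
    ("precinct_19", "safety", 5), ("precinct_44", "trust", 3),
    ("precinct_44", "confidence", 4), ("precinct_44", "safety", 2),
]

totals = defaultdict(lambda: [0, 0])   # (sum, count) per (precinct, theme)
for precinct, theme, answer in responses:
    t = totals[(precinct, theme)]
    t[0] += answer
    t[1] += 1

for (precinct, theme), (s, n) in sorted(totals.items()):
    print(f"{precinct:12s} {theme:10s} avg={s / n:.1f} (n={n})")
```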

The department believes it will get a more diverse measure of community satisfaction, and allow it to further drive down crime. For now, Police Commissioner James P. O’Neill is calling the tool a “sentiment meter,” though he is open to suggestions for a better name….(More)”.

Going Digital: Restoring Trust In Government In Latin American Cities


Carlos Santiso at The Rockefeller Foundation Blog: “Driven by fast-paced technological innovations, an exponential growth of smartphones, and a daily stream of big data, the “digital revolution” is changing the way we live our lives. Nowhere are the changes more sweeping than in cities. In Latin America, almost 80 percent of the population lives in cities, where massive adoption of social media is enabling new forms of digital engagement. Technology is ubiquitous in cities. The expectations of Latin American “digital citizens” have grown exponentially as a result of a rising middle class and an increasingly connected youth.

This digital transformation is recasting the relation between states and citizens. Digital citizens are asking for better services, more transparency, and meaningful participation. Their rising expectations concern the quality of the services city governments ought to provide, but also the standards of integrity, responsiveness, and fairness of the bureaucracy in their daily dealings. A recent study shows that citizens’ satisfaction with public services is determined not only by the objective quality of the service, but also by their subjective expectations and how fairly they feel they are treated….

New technologies and data analytics are transforming the governance of cities. Digital-intensive and data-driven innovations are changing how city governments function and deliver services, and also enabling new forms of social participation and co-creation. New technologies help improve efficiency and further transparency through new modes of open innovation. Tech-enabled and citizen-driven innovations also facilitate participation through feedback loops from citizens to local authorities to identify and resolve failures in the delivery of public services.

Three structural trends are driving the digital revolution in governments.

  1. The digital transformation of the machinery of government. National and city governments in the region are developing digital strategies to increase connectivity, improve services, and enhance accountability. According to a recent report, 75 percent of the 23 countries surveyed have developed comprehensive digital strategies, such as Uruguay Digital, Colombia’s Vive Digital or Mexico’s Agenda Digital, that include legally recognized digital identification mechanisms. “Smart cities” are intensifying the use of modern technologies and improving the interoperability of government systems, the backbone of government, to ensure that public services are inter-connected and thus avoid having citizens provide the same information to different entities. An important driver of this transformation is citizens’ demands for greater transparency and accountability in the delivery of public services. Sixteen countries in the region have developed open government strategies, and cities such as Buenos Aires in Argentina, La Libertad in Peru, and São Paulo in Brazil have also committed to opening up government to public scrutiny and new forms of social participation. This second wave of active transparency reforms follows a first, more passive wave that focused on facilitating access to information.
  2. The digital transformation of the interface with citizens. Sixty percent of the countries surveyed by the aforementioned report have established integrated service portals through which citizens can access online public services. Online portals allow for a single point of access to public services. Cities, such as Bogotá and Rio de Janeiro, are developing their own online service platforms to access municipal services. These innovations improve access to public services and contribute to simplifying bureaucratic processes and cutting red tape, as a recent study shows. Governments are resorting to crowdsourcing solutions, open intelligence initiatives, and digital apps to encourage active citizen participation in the improvement of public services and the prevention of corruption. Colombia’s Transparency Secretariat has developed an app that allows citizens to report “white elephants” — incomplete or overbilled public works. By the end of 2015, it had identified 83 such white elephants, mainly in the capital Bogotá, for a total value of almost $500 million, which led to the initiation of criminal proceedings by law enforcement authorities. While many of these initiatives emerge from civic initiatives, local governments are increasingly encouraging them and adopting their own open innovation models to rethink public services.
  3. The gradual mainstreaming of social innovation in local government. Governments are increasingly resorting to public innovation labs to tackle difficult problems for citizens and businesses. Government innovation labs are helping address “wicked problems” by combining design thinking, crowdsourcing techniques, and data analytics tools. Chile, Colombia, Mexico, Brazil, and Uruguay have developed such social innovation labs within government structures. As a recent report notes, these mechanisms come in different forms and shapes. Large cities, such as Buenos Aires, Mexico City, Quito, Rio de Janeiro, and Montevideo, are at the forefront of testing such laboratory mechanisms and institutionalizing tech-driven and citizen-centered approaches through innovation labs. For example, in 2013, Mexico City created its Laboratorio para la Ciudad as a hub for civic innovation and urban creativity, relying on small-scale experiments and interventions to improve specific government services and make local government more transparent, responsive, and receptive. It spearheaded an open government law for the city that encourages residents to participate in the design of public policies and requires city agencies to consider those suggestions….(More)”.

Human Agency and Behavioral Economics: Nudging Fast and Slow


Book by Cass R. Sunstein: “This Palgrave Pivot offers comprehensive evidence about what people actually think of “nudge” policies designed to steer decision makers’ choices in positive directions. The data reveal that people in diverse nations generally favor nudges by strong majorities, with a preference for educative efforts – such as calorie labels – that equip individuals to make the best decisions for their own lives. On the other hand, there are significant arguments for noneducative nudges – such as automatic enrollment in savings plans – as they allow people to devote their scarce time and attention to their most pressing concerns. The decision to use either educative or noneducative nudges raises fundamental questions about human freedom in both theory and practice. Sunstein’s findings and analysis offer lessons for those involved in law and policy who are choosing which method to support as the most effective way to encourage lifestyle changes….(More)”.