Accountability of AI Under the Law: The Role of Explanation


Paper by Finale Doshi-Velez and Mason Kortz: “The ubiquity of systems using artificial intelligence or “AI” has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before—applications range from clinical decision support to autonomous driving and predictive policing. That said, our AIs continue to lag in common sense reasoning [McCarthy, 1960], and thus there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014]. How can we take advantage of what AI systems have to offer, while also holding them accountable?

In this work, we focus on one tool: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017a], and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems.

Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find that there exists great variation around when explanation is demanded, but there also exist important consistencies: when demanding explanation from humans, what we typically want to know is whether and how certain input factors affected the final decision or outcome.

These consistencies allow us to list the technical considerations that must be addressed if we desire AI systems that could provide the kinds of explanations currently required of humans under the law. Contrary to the popular wisdom that AI systems are indecipherable black boxes, we find that this level of explanation should generally be technically feasible but may sometimes be practically onerous—there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that, for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard….(More)”

From #Resistance to #Reimagining governance


Stefaan G. Verhulst in Open Democracy: “…There is no doubt that #Resistance (and its associated movements) holds genuine transformative potential. But for the change it brings to be meaningful (and positive), we need to ask the question: What kind of government do we really want?

Working to maintain the status quo, or simply returning to, for instance, a pre-Trump reality, cannot deliver the change we need to counter the decline in trust, the rise of populism and the complex social, economic and cultural problems we face. We need a clear articulation of alternatives. Without such an articulation, there is a danger of a certain hollowness and dispersion of energies. The call for #Resistance requires a more concrete – and ultimately more productive – program that is concerned not just with rejecting or tearing down, but with building up new institutions and governance processes. What’s needed, in short, is not simply #Resistance.

Below, I suggest six shifts that can help us reimagine governance for the twenty-first century. Several of these shifts are enabled by recent technological changes (e.g., the advent of big data, blockchain and collective intelligence) as well as other emerging methods such as design thinking, behavioral economics, and agile development.

Some of the shifts I suggest have been experimented with, but they have often been developed in an ad hoc manner without a full understanding of how they could make a more systemic impact. Part of the purpose of this paper is to begin the process of a more systematic enquiry; the following amounts to a preliminary outline or blueprint for reimagined governance for the twenty-first century.

  • Shift 1: from gatekeeper to platform…
  • Shift 2: from inward to user-and-problem orientation…
  • Shift 3: from closed to open…
  • Shift 4: from deliberation to collaboration and co-creation…
  • Shift 5: from ideology to evidence-based…
  • Shift 6: from centralized to distributed… (More)

Building the Smarter State: The Role of Data Labs


New set of case studies by The GovLab: “Government at all levels — federal, state and local — collects and processes troves of data in order to administer public programs, fulfill regulatory mandates or conduct research. This government-held data, which often contains personally identifiable information about the individuals government serves, is known as “administrative data”, and it can be analyzed to evaluate and improve how government and the social sector deliver services.

For example, the Social Security Administration (SSA) collects and manages data on social, welfare and disability benefit payments for nearly the entire US population, as well as data such as individual lifetime records of wages and self-employment earnings. The SSA uses this administrative data for, among other things, analysis of policy interventions and to develop models to project demographic and economic characteristics of the population. State governments collect computerized hospital discharge data for both government (Medicare and Medicaid) and commercial payers, while the Department of Justice (through the Bureau of Justice Statistics) collects prison admission and release data to monitor correctional populations and to address several policy questions, including those on recidivism and prisoner reentry.

Though they have long collected data, increasingly in digital form, government agencies have struggled to create the infrastructure and acquire the skills needed to make use of this administrative data to realize the promise of evidence-based policymaking.

The goal of this collection of eight case studies is to look at how governments are beginning to “get smarter” about using their own data. By comparing the ways in which they have chosen to collaborate with researchers and to make often sensitive data usable to government employees and researchers in ethical and responsible ways, we hope to increase our understanding of what is required to make better use of administrative data, including the governance structures, technology infrastructure and key personnel involved, and to enable other public institutions to do the same. What follows is a summary of the learnings from those case studies. We start with an articulation of the value proposition for greater use of administrative data, followed by the key learnings and the case studies themselves….(More)”

Read the case studies here.

Code and Clay, Data and Dirt: Five Thousand Years of Urban Media


Book by Shannon Mattern: “For years, pundits have trumpeted the earth-shattering changes that big data and smart networks will soon bring to our cities. But what if cities have long been built for intelligence, maybe for millennia? In Code and Clay, Data and Dirt Shannon Mattern advances the provocative argument that our urban spaces have been “smart” and mediated for thousands of years.

Offering powerful new ways of thinking about our cities, Code and Clay, Data and Dirt goes far beyond the standard historical concepts of origins, development, revolutions, and the accomplishments of an elite few. Mattern shows that in their architecture, laws, street layouts, and civic knowledge—and through technologies including the telephone, telegraph, radio, printing, writing, and even the human voice—cities have long negotiated a rich exchange between analog and digital, code and clay, data and dirt, ether and ore.

Mattern’s vivid prose takes readers through a historically and geographically broad range of stories, scenes, and locations, synthesizing a new narrative for our urban spaces. Taking media archaeology to the city’s streets, Code and Clay, Data and Dirt reveals new ways to write our urban, media, and cultural histories….(More)”.

Business Models For Sustainable Research Data Repositories


OECD Report: “In 2007, the OECD Principles and Guidelines for Access to Research Data from Public Funding were published and in the intervening period there has been an increasing emphasis on open science. At the same time, the quantity and breadth of research data has massively expanded. So-called “Big Data” is no longer limited to areas such as particle physics and astronomy, but is ubiquitous across almost all fields of research. This is generating exciting new opportunities, but also challenges.

The promise of open research data is that they will not only accelerate scientific discovery and improve reproducibility, but they will also speed up innovation and improve citizen engagement with research. In short, they will benefit society as a whole. However, for the benefits of open science and open research data to be realised, these data need to be carefully and sustainably managed so that they can be understood and used by both present and future generations of researchers.

Data repositories – based in local and national research institutions and international bodies – are where the long-term stewardship of research data takes place and hence they are the foundation of open science. Yet good data stewardship is costly and research budgets are limited. So, the development of sustainable business models for research data repositories needs to be a high priority in all countries. Surprisingly, perhaps, little systematic analysis has been done on income streams, costs, value propositions, and business models for data repositories, and that is the gap this report attempts to address, from a science policy perspective…..

This project was designed to take up the challenge and to contribute to a better understanding of how research data repositories are funded, and what developments are occurring in their funding. Central questions included:

  • How are data repositories currently funded, and what are the key revenue sources?
  • What innovative revenue sources are available to data repositories?
  • How do revenue sources fit together into sustainable business models?
  • What incentives for, and means of, optimising costs are available?
  • What revenue sources and business models are most acceptable to key stakeholders?…(More)”

Crowdsourcing Accurately and Robustly Predicts Supreme Court Decisions


Paper by Katz, Daniel Martin and Bommarito, Michael James and Blackman, Josh: “Scholars have increasingly investigated “crowdsourcing” as an alternative to expert-based judgment or purely data-driven approaches to predicting the future. Under certain conditions, scholars have found that crowdsourcing can outperform these other approaches. However, despite interest in the topic and a series of successful use cases, relatively few studies have applied empirical model thinking to evaluate the accuracy and robustness of crowdsourcing in real-world contexts.

In this paper, we offer three novel contributions. First, we explore a dataset of over 600,000 predictions from over 7,000 participants in a multi-year tournament to predict the decisions of the Supreme Court of the United States. Second, we develop a comprehensive crowd construction framework that allows for the formal description and application of crowdsourcing to real-world data. Third, we apply this framework to our data to construct more than 275,000 crowd models. We find that in out-of-sample historical simulations, crowdsourcing robustly outperforms the commonly-accepted null model, yielding the highest-known performance for this context at 80.8% case level accuracy. To our knowledge, this dataset and analysis represent one of the largest explorations of recurring human prediction to date, and our results provide additional empirical support for the use of crowdsourcing as a prediction method….(More)”.
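As a rough illustration of the comparison the authors describe, the sketch below pits the majority vote of a small sampled crowd against a base-rate null model. Every number, label and skill level is simulated for illustration; this is not the paper's crowd construction framework or its data.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical setup: each case has a true outcome, and each participant
# predicts every case with some individual accuracy. None of these values
# come from the paper; they are placeholders for illustration.
cases = [random.choice(["affirm", "reverse"]) for _ in range(200)]
skills = [random.uniform(0.55, 0.75) for _ in range(100)]  # per-participant accuracy

def predict(skill, truth):
    """Return the true outcome with probability `skill`, otherwise the other label."""
    if random.random() < skill:
        return truth
    return "affirm" if truth == "reverse" else "reverse"

predictions = [[predict(s, c) for c in cases] for s in skills]

def crowd_accuracy(member_ids):
    """Score the majority vote of the selected participants against the true outcomes."""
    correct = 0
    for j, truth in enumerate(cases):
        votes = Counter(predictions[i][j] for i in member_ids)
        if votes.most_common(1)[0][0] == truth:
            correct += 1
    return correct / len(cases)

# One simple "crowd model": a random committee of 15 participants.
committee = random.sample(range(len(skills)), 15)

# Null model: always predict the most frequent outcome in the data.
null_accuracy = Counter(cases).most_common(1)[0][1] / len(cases)

print(f"crowd accuracy:      {crowd_accuracy(committee):.3f}")
print(f"null model accuracy: {null_accuracy:.3f}")
```

The paper's contribution is far richer: it formalizes many ways of composing and weighting crowds (hence the more than 275,000 crowd models) and evaluates them in out-of-sample historical simulations on real tournament predictions rather than simulated ones.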

There’s more to evidence-based policies than data: why it matters for healthcare


Article at The Conversation: “…The big question is: how can countries strengthen their health systems to deliver accessible, affordable and equitable care when they are often under-financed and governed in complex ways?

One answer lies in governments developing policies and programmes that are informed by evidence of what works or doesn’t. This should include what we would call “traditional data”, but should also include a broader definition of evidence. This would mean including, for example, information from citizens and stakeholders as well as programme evaluations. In this way, policies can be made more relevant for the people they affect.

Globally there is an increasing appreciation for this sort of policymaking that relies on a broader definition of evidence. Countries such as South Africa, Ghana and Thailand provide good examples.

What is evidence?

Using evidence to inform the development of health care has grown out of the use of science to make the best decisions. It is based on data being collected in a methodical way. This approach is useful, but it can’t always be neatly applied to policymaking. There are several reasons for this.

The first is that there are many different types of evidence. Evidence is more than data, even though the terms are often used to mean the same thing. For example, there is statistical and administrative data, research evidence, citizen and stakeholder information as well as programme evaluations.

The challenge is that some of these are valued more than others. More often than not, statistical data is more valued in policymaking. But both researchers and policymakers must acknowledge that, for policies to be sound and comprehensive, different phases of the policymaking process require different types of evidence.

Secondly, data-as-evidence is only one input into policymaking. Policymakers face a long list of pressures they must respond to, including time, resources, political obligations and unplanned events.

Researchers may push technically excellent solutions designed in research environments. But policymakers may have other priorities in mind: are the solutions being put to them practical and affordable? Policymakers also face the limitations of having to balance various constituents while straddling the constraints of the bureaucracies they work in.

Researchers must recognise that policymakers themselves are a source of evidence of what works or doesn’t. They are able to draw on their own experiences, those of their constituents, history and their contextual knowledge of the terrain.

What this boils down to is that for policies that are based on evidence to be effective, fewer ‘push/pull’ models of evidence need to be used. Instead, models in which evidence is jointly fashioned should be employed.

This means that policymakers, researchers and other key actors (like health managers or communities) must come together as soon as a problem is identified. They must first understand each other’s ideas of evidence and come to a joint conclusion of what evidence would be appropriate for the solution.

In South Africa, for example, the Department of Environmental Affairs has developed a four-phase approach to policymaking. In the first phase, researchers and policymakers come together to set the agenda and agree on the needed solution. Their joint decision is then reviewed before research is undertaken and interpreted together….(More)”.

Big data in social and psychological science: theoretical and methodological issues


Paper by Lin Qiu, Sarah Hian May Chan and David Chan in the Journal of Computational Social Science: “Big data presents unprecedented opportunities to understand human behavior on a large scale. It has been increasingly used in social and psychological research to reveal individual differences and group dynamics. There are a few theoretical and methodological challenges in big data research that require attention. In this paper, we highlight four issues, namely data-driven versus theory-driven approaches, measurement validity, multi-level longitudinal analysis, and data integration. They represent common problems that social scientists often face in using big data. We present examples of these problems and propose possible solutions….(More)”.
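As one concrete reading of the "multi-level longitudinal analysis" issue (an illustrative sketch, not an example from the paper), repeated observations are nested within individuals, so within-person change over time should be modelled separately from stable between-person differences. A minimal random-intercept model on simulated data, with invented variable names, might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated panel: 200 hypothetical users observed over 4 survey waves.
n_users, n_waves = 200, 4
user = np.repeat(np.arange(n_users), n_waves)
wave = np.tile(np.arange(n_waves), n_users)
person_baseline = rng.normal(0.0, 1.0, n_users)[user]  # stable between-person differences
wellbeing = 3.0 + 0.2 * wave + person_baseline + rng.normal(0.0, 0.5, n_users * n_waves)
df = pd.DataFrame({"user": user, "wave": wave, "wellbeing": wellbeing})

# Random-intercept (multi-level) model: observations nested within users,
# separating the within-person time trend from between-person variation.
model = smf.mixedlm("wellbeing ~ wave", df, groups=df["user"])
result = model.fit()
print(result.summary())
```

The same nesting logic extends to, say, posts within users or users within platforms, which is where large-scale digital trace data makes the multi-level structure hard to ignore.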

Disrupting Democracy: Point. Click. Transform.


Book edited by Anthony T. Silberfeld: “In January 2017, the Bertelsmann Foundation embarked on a nine-month journey to explore how digital innovation impacts democracies and societies around the world. This voyage included more than 40,000 miles in the air, thousands of miles on the ground and hundreds of interviews.

From the rival capitals of Washington and Havana to the bustling streets of New Delhi, and from the dynamic tech startups in Tel Aviv to the efficient order of Berlin, this book focuses on key challenges that have emerged as a result of technological disruption and offers potential lessons to other nations situated at various points along the technological and democratic spectra.

Divided into six chapters, this book provides two perspectives on each of our five case studies (India, Cuba, the United States, Israel and Germany) followed by polling data collected on demographics, digital access and political engagement from four of these countries.

The global political environment is constantly evolving, and it is clear that technology is accelerating that process for better and, in some cases, for worse. Disrupting Democracy attempts to sort through these changes to give policymakers and citizens information that will help them navigate this increasingly volatile world….(More)”.

A New City O/S: The Power of Open, Collaborative, and Distributed Governance


Book by Stephen Goldsmith and Neil Kleiman: “At a time when trust is dropping precipitously and American government at the national level has fallen into a state of long-term, partisan-based gridlock, local government can still be effective—indeed more effective and even more responsive to the needs of its citizens. Based on decades of direct experience and years studying successful models around the world, the authors of this intriguing book propose a new operating system (O/S) for cities. Former mayor and Harvard professor Stephen Goldsmith and New York University professor Neil Kleiman suggest building on the giant leaps that have been made in technology, social engagement, and big data.

Calling their approach “distributed governance,” Goldsmith and Kleiman offer a model that allows public officials to mobilize new resources, surface ideas from unconventional sources, and arm employees with the information they need to become pre-emptive problem solvers. This book highlights lessons from the many innovations taking place in today’s cities to show how a new O/S can create systemic transformation.

For students of government, A New City O/S: The Power of Distributed Governance presents a groundbreaking strategy for rethinking the governance of cities, marking an important evolution of the current bureaucratic authority-based model dating from the 1920s. More important, the book is designed for practitioners, starting with public-sector executives, managers, and frontline workers. By weaving real-life examples into a coherent model, the authors have created a step-by-step guide for all those who would put the needs of citizens front and center. Nothing will do more to restore trust in government than solutions that work. A New City O/S: The Power of Distributed Governance puts those solutions within reach of those public officials responsible for their delivery….(More)”.