Watchdog to launch inquiry into misuse of data in politics


Alice Gibbs in The Guardian: “The UK’s privacy watchdog is launching an inquiry into how voters’ personal data is being captured and exploited in political campaigns, cited as a key factor in both the Brexit and Trump victories last year.

The intervention by the Information Commissioner’s Office (ICO) follows revelations in last week’s Observer that a technology company part-owned by a US billionaire played a key role in the campaign to persuade Britons to vote to leave the European Union.

It comes as privacy campaigners, lawyers, politicians and technology experts express fears that electoral laws are not keeping up with the pace of technological change.

“We are conducting a wide assessment of the data-protection risks arising from the use of data analytics, including for political purposes, and will be contacting a range of organisations,” an ICO spokeswoman confirmed. “We intend to publicise our findings later this year.”

The ICO spokeswoman confirmed that it had approached Cambridge Analytica over its apparent use of data following the story in the Observer. “We have concerns about Cambridge Analytica’s reported use of personal data and we are in contact with the organisation,” she said….

In the US, companies are free to use third-party data without seeking consent. But Gavin Millar QC, of Matrix Chambers, said this was not the case in Europe. “The position in law is exactly the same as when people would go canvassing from door to door,” Millar said. “They have to say who they are, and if you don’t want to talk to them you can shut the door in their face. That’s the same principle behind the Data Protection Act. It’s why if telephone canvassers ring you, they have to say that whole long speech. You have to identify yourself explicitly.”…

Dr Simon Moores, visiting lecturer in the applied sciences and computing department at Canterbury Christ Church University and a technology ambassador under the Blair government, said the ICO’s decision to shine a light on the use of big data in politics was timely.

“A rapid convergence in the data mining, algorithmic and granular analytics capabilities of companies like Cambridge Analytica and Facebook is creating powerful, unregulated and opaque ‘intelligence platforms’. In turn, these can have enormous influence to affect what we learn, how we feel, and how we vote. The algorithms they may produce are frequently hidden from scrutiny and we see only the results of any insights they might choose to publish.” …(More)”

Americans have lost faith in institutions. That’s not because of Trump or ‘fake news.’


Bill Bishop in the Washington Post: “…Trust in American institutions, however, has been in decline for some time. Trump is merely feeding on that sentiment.

The leaders of once-powerful institutions are desperate to resurrect the faith of the people they serve. They act like they have misplaced a credit card and must find the number so that a replacement can be ordered and then FedEx-ed, if possible overnight.

But that delivery truck is never coming. The decline in trust isn’t because of what the press (or politicians or scientists) did or didn’t do. Americans didn’t lose their trust because of some particular event or scandal. And trust can’t be regained with a new app or even an outbreak of competence. To believe so is to misunderstand what was lost.

In 1964, 3 out of 4 Americans trusted their government to do the right thing most of the time. By 1976, that number had dropped to 33 percent. It was a decline that political scientist Walter Dean Burnham described as “among the largest ever recorded in opinion surveys.”…

Everything about modern life works against community and trust. Globalization and urbanization put people in touch with the different and the novel. Our economy rewards initiative over conformity, so that the weight of convention and tradition doesn’t squelch the latest gizmo from coming to the attention of the next Bill Gates. Whereas parents in the 1920s said it was most important for their children to be obedient, that quality has declined in importance, replaced by a desire for independence and autonomy. Widespread education gives people the tools to make up their own minds. And technology offers everyone the chance to be one’s own reporter, broadcaster and commentator.

We have become, in Polish sociologist Zygmunt Bauman’s description, “artists of our own lives,” ignoring authorities and booting traditions while turning power over to the self. The shift in outlook has been all-encompassing. It has changed the purpose of marriage (once a practical arrangement, now a means of personal fulfillment). It has altered the relationship between citizens and the state (an all-volunteer fighting force replacing the military draft). It has transformed the understanding of art (craftsmanship and assessment are out; free-range creativity and self-promotion are in). It has even inverted the orders of humanity and divinity (instead of obeying a god, now we choose one).

People enjoy their freedoms. There’s no clamoring for a return to gray flannel suits and deferential housewives. Constant social retooling and choice come with costs, however. Without the authority and guidance of institutions to help order their lives, many people feel overwhelmed and adrift. “Depression is truly our modern illness,” writes French sociologist Alain Ehrenberg, with rates 20 to 30 times what they were just two generations ago.

Sustained collective action has also become more difficult. Institutions are turning to behavioral “nudges,” hoping to move an increasingly suspicious public to do what once could be accomplished by command or law. As groups based on tradition and consistent association dwindle, they are being replaced by “event communities,” temporary gatherings that come and go without long-term commitment (think Burning Man). The protests spawned by Trump’s election are more about passion than organization and focus. Today’s demonstrations are sometimes compared to civil-rights-era marches, but they have more in common with L.A.’s Sunset Strip riots of 1966, when more than 1,000 young people gathered to object to a 10 p.m. curfew. “There’s something happening here,” goes the Buffalo Springfield song “For What It’s Worth,” commemorating the riots. “What it is ain’t exactly clear.” In our new politics, expression is a purpose itself….(More)”.

Fighting Illegal Fishing With Big Data


Emily Matchar in Smithsonian: “In many ways, the ocean is the Wild West. The distances are vast, the law enforcement agents few and far between, and the legal jurisdiction often unclear. In this environment, illegal activity flourishes. Illegal fishing is so common that experts estimate as much as a third of fish sold in the U.S. was fished illegally. This illegal fishing decimates the ocean’s already dwindling fish populations and gives rise to modern slavery, where fishermen are tricked onto vessels and forced to work, sometimes for years.

A new use of data technology aims to help curb these abuses by shining a light on the high seas. The technology uses ships’ satellite signals to detect instances of transshipment, when two vessels meet at sea to exchange cargo. As transshipment is a major way illegally caught fish makes it into the legal supply chain, tracking it could potentially help stop the practice.

“[Transshipment] really allows people to do something out of sight,” says David Kroodsma, the research program director at Global Fishing Watch, an online data platform launched by Google in partnership with the nonprofits Oceana and SkyTruth. “It’s something that obscures supply chains. It’s basically being able to do things without any oversight. And that’s a problem when you’re using a shared resource like the oceans.”

Global Fishing Watch analyzed some 21 billion satellite signals broadcast between 2012 and 2016 by ships, which are required to carry transceivers for collision avoidance. It then used an artificial intelligence system it created to identify which ships were refrigerated cargo vessels (known in the industry as “reefers”). It verified this information with fishery registries and other sources, eventually identifying 794 reefers—90 percent of the world’s total number of such vessels. It tracked instances where a reefer and a fishing vessel were moving at similar speeds in close proximity, labeling these instances as “likely transshipments,” and also traced instances where reefers were traveling in a way that indicated a rendezvous with a fishing vessel, even if no fishing vessel was present—fishing vessels often turn off their satellite systems when they don’t want to be seen. All in all, there were more than 90,000 likely or potential transshipments recorded.
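The detection logic described above lends itself to a compact sketch: flag time windows in which a reefer and a fishing vessel report similarly low speeds in close proximity for several hours. The `likely_transshipments` helper and all thresholds below are illustrative assumptions, not Global Fishing Watch's actual pipeline.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def likely_transshipments(reefer_track, fishing_track,
                          max_km=0.5, max_speed_knots=2.0, min_hours=3):
    """Flag windows where a reefer and a fishing vessel are nearly
    stationary and close together for a sustained period.

    Each track is a list of (hour, lat, lon, speed_knots) fixes sampled
    on the same hourly grid. Thresholds are illustrative guesses, not
    the values any real monitoring system uses.
    """
    flagged, run = [], []
    for (t1, la1, lo1, s1), (t2, la2, lo2, s2) in zip(reefer_track, fishing_track):
        close = haversine_km(la1, lo1, la2, lo2) <= max_km
        slow = s1 <= max_speed_knots and s2 <= max_speed_knots
        if close and slow:
            run.append(t1)
        else:
            if len(run) >= min_hours:
                flagged.append((run[0], run[-1]))  # (start hour, end hour)
            run = []
    if len(run) >= min_hours:
        flagged.append((run[0], run[-1]))
    return flagged
```

In practice the thresholds would be tuned against verified rendezvous, and a second pass would look for reefers loitering this way with no fishing vessel broadcasting nearby, the "dark vessel" case the excerpt mentions.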

Even if these encounters were in fact transshipments, they would not all have been for nefarious purposes. They may have taken place to refuel or load up on supplies. But looking at the patterns of where the potential transshipments happen is revealing. Very few are seen close to the coasts of the U.S., Canada and much of Europe, all places with tight fishery regulations. There are hotspots off the coast of Peru and Argentina, all over Africa, and off the coast of Russia. Some 40 percent of encounters happen in international waters, far enough off the coast that no country has jurisdiction.

The tracked reefers were flying flags from some 40 different countries. But that doesn’t necessarily tell us much about where they really come from. Nearly half of the reefers tracked were flying “flags of convenience,” meaning they’re registered in countries other than where the ship’s owners are from to take advantage of those countries’ lax regulations….(More)”


Human Decisions and Machine Predictions


NBER Working Paper by Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan: “We examine how machine learning can be used to improve and understand human decision-making. In particular, we focus on a decision that has important policy consequences. Millions of times each year, judges must decide where defendants will await trial—at home or in jail. By law, this decision hinges on the judge’s prediction of what the defendant would do if released. This is a promising machine learning application because it is a concrete prediction task for which there is a large volume of data available. Yet comparing the algorithm to the judge proves complicated. First, the data are themselves generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those the judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the single variable that the algorithm focuses on; for instance, judges may care about racial inequities or about specific crimes (such as violent crimes) rather than just overall crime risk. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: a policy simulation shows crime can be reduced by up to 24.8% with no change in jailing rates, or jail populations can be reduced by 42.0% with no increase in crime rates. Moreover, we see reductions in all categories of crime, including violent ones. Importantly, such gains can be had while also significantly reducing the percentage of African-Americans and Hispanics in jail. We find similar results in a national dataset as well.
In addition, by focusing the algorithm on predicting judges’ decisions, rather than defendant behavior, we gain some insight into decision-making: a key problem appears to be that judges respond to ‘noise’ as if it were signal. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals….(More)”
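The policy simulations behind those welfare numbers can be caricatured in a few lines: rank defendants by predicted risk, detain the top fraction, and count crimes among the released. Everything below (data, names, thresholds) is hypothetical, and the sketch deliberately ignores the selective-labels problem, that outcomes are observed only for released defendants, which the paper addresses with quasi-random judge assignment.

```python
def simulate_release_rule(defendants, jail_rate):
    """Detain the top `jail_rate` fraction of defendants by predicted risk
    and count crimes among those released.

    `defendants` is a list of (risk_score, committed_crime_if_released)
    pairs. Treating the outcome flag as known for everyone is an
    optimistic simplification: in reality it is observed only for
    defendants who were actually released.
    """
    ranked = sorted(defendants, key=lambda d: d[0], reverse=True)
    n_jailed = int(len(ranked) * jail_rate)
    released = ranked[n_jailed:]
    crimes = sum(outcome for _, outcome in released)
    return {"jailed": n_jailed, "released": len(released), "crimes": crimes}
```

Comparing this rule's crime count against the status quo at the same jailing rate (or finding the lower jailing rate that matches the status-quo crime count) is the shape of the paper's two headline comparisons.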

Rules for a Flat World – Why Humans Invented Law and How to Reinvent It for a Complex Global Economy


Book by Gillian Hadfield: “… picks up where New York Times columnist Thomas Friedman left off in his influential 2005 book, The World is Flat. Friedman was focused on the infrastructure of communications and technology: the new web-based platform that allows business to follow the hunt for lower costs, higher value and greater efficiency around the planet seemingly oblivious to the boundaries of nation states. Hadfield peels back this technological platform to look at the ‘structure that lies beneath’—our legal infrastructure, the platform of rules about who can do what, when and how. Often taken for granted, economic growth throughout human history has depended at least as much on the evolution of new systems of rules to support ever-more complex modes of cooperation and trade as it has on technological innovation. When Google rolled out YouTube in over one hundred countries around the globe simultaneously, for example, it faced not only the challenges of technology but also the staggering problem of how to build success in the context of a bewildering and often conflicting patchwork of nation-state-based laws and legal systems affecting every aspect of the business: contract, copyright, encryption, censorship, advertising and more. Google is not alone. A study presented at the World Economic Forum in Davos in 2011 found that for global firms, the number one challenge of the modern economy is increasing complexity, and the number one source of complexity is law. Today, even our startups, the engines of economic growth, are global from Day One.

Put simply, the law and legal methods on which we currently rely have failed to evolve along with technology. They are increasingly unable to cope with the speed, complexity, and constant border-crossing of our new globally inter-connected environment. Our current legal systems are still rooted in the politics-based nation state platform on which the industrial revolution was built. Hadfield argues that even though these systems supported fantastic growth over the past two centuries, today they are too slow, costly, cumbersome and localized to support the exponential rise in economic complexity they fostered. …

The answer to our troubles with law, however, is not the one critics usually reach for—to have less of it. Recognizing that law provides critical infrastructure for the cooperation and collaboration on which economic growth is built is the first step, Hadfield argues, to building a legal environment that does more of what we need it to do and less of what we don’t. …(More)”

Troopers Use ‘Big Data’ to Predict Crash Sites


Jenni Bergal at Pew Charitable Trusts: “As Tennessee Highway Patrol Sgt. Anthony Griffin patrolled an area near Murfreesboro one morning in January 2014, he gave a young woman a ticket for driving her Geo Prizm without wearing a seat belt.

About four hours later, Griffin was dispatched to help out at the scene of a major accident a few miles away. A car had veered off the road, sailed over a bridge, struck a utility pole and landed in a frozen pond. When Griffin went to question the driver, who appeared uninjured, he was shocked to find it was the same woman he had ticketed earlier.

She told him she had been wearing her seat belt only because he had given her a ticket. She believed it had saved her life. And if it hadn’t been for new crash prediction software his agency was using, Griffin said he wouldn’t have been in that spot to issue her the ticket.

“I’m in my 21st year of law enforcement and I’ve never come across anything where I could see the fruit of my work in this fashion,” said Griffin, who is now a lieutenant. “It was amazing.”

As more and more states use “big data” for everything from catching fraudsters to reducing health care costs, some highway patrols are tapping it to predict where serious or fatal traffic accidents are likely to take place so they can try to prevent them….

Indiana State Police decided to take a different approach, and are making their predictive crash analytics program available to the public, as well as troopers.

A color-coded Daily Crash Prediction map, which went online in November, pulls together data that includes crash reports from every police agency in the state dating to 2004, daily traffic volume, historical weather information and the dates of major holidays, said First Sgt. Rob Simpson….(More)”
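A minimal sketch of the kind of scoring such a map might rest on: crashes per hour of exposure for each (road segment, condition) cell, fusing crash history with the weather and traffic layers described above. The data layout, segment names and `crash_risk_scores` helper are illustrative assumptions, not the actual Tennessee or Indiana system.

```python
from collections import defaultdict

def crash_risk_scores(crash_records, exposure_hours):
    """Score (segment, condition) cells by historical crashes per hour
    of observed exposure, a crude stand-in for the layered inputs named
    above (crash reports, traffic volume, weather, holidays).

    crash_records: iterable of (segment, condition) pairs, one per crash.
    exposure_hours: dict mapping (segment, condition) -> hours observed
    under that condition.
    """
    counts = defaultdict(int)
    for cell in crash_records:
        counts[cell] += 1
    # Normalizing by exposure keeps a rainy rural segment with few total
    # crashes from being drowned out by a busy urban one.
    return {cell: counts[cell] / hours
            for cell, hours in exposure_hours.items() if hours > 0}
```

Patrols could then be positioned at the highest-scoring cells for the forecast conditions on a given day, which is the essence of the color-coded map.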

Governance and the Law


World Development Report 2017: “Why are carefully designed, sensible policies too often not adopted or implemented? When they are, why do they often fail to generate development outcomes such as security, growth, and equity? And why do some bad policies endure? This World Development Report 2017: Governance and the Law addresses these fundamental questions, which are at the heart of development. Policy making and policy implementation do not occur in a vacuum. Rather, they take place in complex political and social settings, in which individuals and groups with unequal power interact within changing rules as they pursue conflicting interests. The process of these interactions is what this Report calls governance, and the space in which these interactions take place, the policy arena. The capacity of actors to commit and their willingness to cooperate and coordinate to achieve socially desirable goals are what matter for effectiveness. However, who bargains, who is excluded, and what barriers block entry to the policy arena determine the selection and implementation of policies and, consequently, their impact on development outcomes. Exclusion, capture, and clientelism are manifestations of power asymmetries that lead to failures to achieve security, growth, and equity. The distribution of power in society is partly determined by history. Yet, there is room for positive change. This Report reveals that governance can mitigate, even overcome, power asymmetries to bring about more effective policy interventions that achieve sustainable improvements in security, growth, and equity. This happens by shifting the incentives of those with power, reshaping their preferences in favor of good outcomes, and taking into account the interests of previously excluded participants. These changes can come about through bargains among elites and greater citizen engagement, as well as by international actors supporting rules that strengthen coalitions for reform….(More)”.

Selected Readings on Algorithmic Scrutiny


By Prianka Srinivasan, Andrew Young and Stefaan Verhulst

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of algorithmic scrutiny was originally published in 2017.

Introduction

From government policy to criminal justice, to our news feeds, to business and consumer practices, the processes that shape our lives both online and off are more and more driven by data and the complex algorithms used to form rulings or predictions. In most cases, these algorithms have created “black boxes” of decision making, where models remain inscrutable and inaccessible. It should therefore come as no surprise that several observers and policymakers are calling for more scrutiny of how algorithms are designed and work, particularly when their outcomes convey intrinsic biases or defy existing ethical standards.

While the concern about values in technology design is not new, recent developments in machine learning, artificial intelligence and the Internet of Things have increased the urgency to establish processes and develop tools to scrutinize algorithms.

In what follows, we have curated several readings covering the impact of algorithms on:

  • Information Intermediaries
  • Governance
  • Finance
  • Justice

In addition, we have selected a few readings that provide insight on possible processes and tools to establish algorithmic scrutiny.

Selected Reading List

Information Intermediaries

Governance

Consumer Finance

Justice

Tools & Processes Toward Algorithmic Scrutiny

Annotated Selected Reading List

Information Intermediaries

Diakopoulos, Nicholas. “Algorithmic accountability: Journalistic investigation of computational power structures.” Digital Journalism 3.3 (2015): 398-415. http://bit.ly/.

  • This paper attempts to substantiate the notion of accountability for algorithms, particularly how they relate to media and journalism. It puts forward the notion of “algorithmic power,” analyzing the framework of influence such systems exert, and also introduces some of the challenges in the practice of algorithmic accountability, particularly for computational journalists.
  • Offers a basis for analyzing algorithms in terms of the types of decisions they make in prioritizing, classifying, associating, and filtering information.

Diakopoulos, Nicholas, and Michael Koliska. “Algorithmic transparency in the news media.” Digital Journalism (2016): 1-20. http://bit.ly/2hMvXdE.

  • This paper analyzes the increased use of “computational journalism,” and argues that though transparency remains a key tenet of journalism, the use of algorithms in gathering, producing and disseminating news undermines this principle.
  • It first analyzes what the ethical principle of transparency means to journalists and the media. It then highlights the findings from a focus-group study, where 50 participants from the news media and academia were invited to discuss three different case studies related to the use of algorithms in journalism.
  • They find two key barriers to algorithmic transparency in the media: “(1) a lack of business incentives for disclosure, and (2) the concern of overwhelming end-users with too much information.”
  • The study also finds a variety of opportunities for transparency across the “data, model, inference, and interface” components of an algorithmic system.

Napoli, Philip M. “The algorithm as institution: Toward a theoretical framework for automated media production and consumption.” Fordham University Schools of Business Research Paper (2013). http://bit.ly/2hKBHqo.

  • This paper puts forward an analytical framework to discuss the algorithmic content creation of media and journalism in an attempt to “close the gap” on theory related to automated media production.
  • By borrowing concepts from institutional theory, the paper argues that algorithms are a distinct form of media institution, and explores the cultural and political implications of this interpretation.
  • It urges further study in the field of “media sociology” to further unpack the influence of algorithms, and their role in institutionalizing certain norms, cultures and ways of thinking.

Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the politics of search engines matters.” The Information Society 16.3 (2000): 169-185. http://bit.ly/2ijzsrg.

  • This paper, published 16 years ago, provides an in-depth account of some of the risks related to search engine optimizations, and the biases and harms these can introduce, particularly on the nature of politics.
  • Suggests search engines can be designed to account for these political dimensions, and better correlate with the ideal of the World Wide Web as being a place that is open, accessible and democratic.
  • According to the paper, policy (and not the free market) is the only way to spur change in this field, though the current technical solutions we have introduce further challenges.

Gillespie, Tarleton. “The Relevance of Algorithms.” Media technologies: Essays on communication, materiality, and society (2014): 167. http://bit.ly/2h6ASEu.

  • This paper suggests that algorithms, now extended to the point that they undercut many aspects of our lives (Gillespie calls these “public relevance algorithms”), are fundamentally “producing and certifying knowledge.” In this ability to create a particular “knowledge logic,” algorithms are a primary feature of our information ecosystem.
  • The paper goes on to map 6 dimensions of these public relevance algorithms:
    • Patterns of inclusion
    • Cycles of anticipation
    • The evaluation of relevance
    • The promise of algorithmic objectivity
    • Entanglement with practice
    • The production of calculated publics
  • The paper concludes by highlighting the need for a sociological inquiry into the function, implications and contexts of algorithms, and to “soberly recognize their flaws and fragilities,” despite the fact that much of their inner workings remain hidden.

Rainie, Lee and Janna Anderson. “Code-Dependent: Pros and Cons of the Algorithm Age.” Pew Research Center. February 8, 2017. http://bit.ly/2kwnvCo.

  • This Pew Research Center report examines the benefits and negative impacts of algorithms as they become more influential in different sectors and aspects of daily life.
  • Through a scan of the research and practice, with a particular focus on the research of experts in the field, Rainie and Anderson identify seven key themes of the burgeoning Algorithm Age:
    • Algorithms will continue to spread everywhere
    • Good things lie ahead
    • Humanity and human judgment are lost when data and predictive modeling become paramount
    • Biases exist in algorithmically-organized systems
    • Algorithmic categorizations deepen divides
    • Unemployment will rise; and
    • The need grows for algorithmic literacy, transparency and oversight

Tufekci, Zeynep. “Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency.” Journal on Telecommunications & High Technology Law 13 (2015): 203. http://bit.ly/1JdvCGo.

  • This paper establishes some of the risks and harms in regard to algorithmic computation, particularly in their filtering abilities as seen in Facebook and other social media algorithms.
  • Suggests that the editorial decisions performed by algorithms can have significant influence on our political and cultural realms, and categorizes the types of harms that algorithms may have on individuals and their society.
  • Takes two case studies–one from the social media coverage of the Ferguson protests, the other on how social media can influence election turnouts–to analyze the influence of algorithms. In doing so, this paper lays out the “tip of the iceberg” in terms of some of the challenges and ethical concerns introduced by algorithmic computing.

Mittelstadt, Brent, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society (2016): 3(2). http://bit.ly/2kWNwL6.

  • This paper provides significant background and analysis of the ethical context of algorithmic decision-making. It primarily seeks to map the ethical consequences of algorithms, which have adopted the role of a mediator between data and action within societies.
  • Develops a conceptual map of 6 ethical concerns:
    • Inconclusive Evidence
    • Inscrutable Evidence
    • Misguided Evidence
    • Unfair Outcomes
    • Transformative Effects
    • Traceability
  • The paper then reviews existing literature, which together with the map creates a structure to inform future debate.

Governance

Janssen, Marijn, and George Kuk. “The challenges and limits of big data algorithms in technocratic governance.” Government Information Quarterly 33.3 (2016): 371-377. http://bit.ly/2hMq4z6.

  • Considering the centrality of algorithms in enforcing policy and extending governance, this paper analyzes the “technocratic governance” that has emerged from the removal of humans from decision-making processes and the inclusion of algorithmic automation.
  • The paper argues that the belief in technocratic governance producing neutral and unbiased results, since their decision-making processes are uninfluenced by human thought processes, is at odds with studies that reveal the inherent discriminatory practices that exist within algorithms.
  • Suggests that algorithms are still bound by the biases of designers and policy-makers, and that accountability is needed to improve the functioning of an algorithm. In order to do so, we must acknowledge the “intersecting dynamics of algorithm as a sociotechnical materiality system involving technologies, data and people using code to shape opinion and make certain actions more likely than others.”

Just, Natascha, and Michael Latzer. “Governance by algorithms: reality construction by algorithmic selection on the Internet.” Media, Culture & Society (2016): 0163443716643157. http://bit.ly/2h6B1Yv.

  • This paper provides a conceptual framework on how to assess the governance potential of algorithms, asking how technology and software governs individuals and societies.
  • By understanding algorithms as institutions, the paper suggests that algorithmic governance puts in place more evidence-based and data-driven systems than traditional governance methods. The result is a form of governance that cares more about effects than causes.
  • The paper concludes by suggesting that algorithmic selection on the Internet tends to shape individuals’ realities and social orders by “increasing individualization, commercialization, inequalities, deterritorialization, and decreasing transparency, controllability, predictability.”

Consumer Finance

Hildebrandt, Mireille. “The dawn of a critical transparency right for the profiling era.” Digital Enlightenment Yearbook 2012 (2012): 41-56. http://bit.ly/2igJcGM.

  • Analyzes the use of consumer profiling by online businesses in order to target marketing and services to their needs. By establishing how this profiling relates to identification, the author also offers some of the threats to democracy and the right of autonomy posed by these profiling algorithms.
  • The paper concludes by suggesting that cross-disciplinary transparency is necessary to design more accountable profiling techniques that can match the extension of “smart environments” that capture ever more data and information from users.

Reddix-Smalls, Brenda. “Credit Scoring and Trade Secrecy: An Algorithmic Quagmire or How the Lack of Transparency in Complex Financial Models Scuttled the Finance Market.” UC Davis Business Law Journal 12 (2011): 87. http://bit.ly/2he52ch.

  • Analyzes the creation of predictive risk models in financial markets through algorithmic systems, particularly in regard to credit scoring. It suggests that these models were corrupted in order to maintain a competitive market advantage: “The lack of transparency and the legal environment led to the use of these risk models as predatory credit pricing instruments as opposed to accurate credit scoring predictive instruments.”
  • The paper suggests that without greater transparency of these financial risk models, and greater regulation over their abuse, another financial crisis like that of 2008 is highly likely.

Justice

Aas, Katja Franko. “Sentencing Transparency in the Information Age.” Journal of Scandinavian Studies in Criminology and Crime Prevention 5.1 (2004): 48-61. http://bit.ly/2igGssK.

  • This paper questions the use of predetermined sentencing in the US judicial system through the application of computer technology and sentencing information systems (SIS). By comparing the use of these systems in the English-speaking world and in Norway, the author suggests that such technological approaches to sentencing attempt to overcome accusations of mistrust, uncertainty and arbitrariness often leveled against the judicial system.
  • However, in their attempt to rebuild trust, such technological solutions can be seen as an attempt to remedy a flawed view of judges by the public. Therefore, the political and social climate must be taken into account when trying to reform these sentencing systems: “The use of the various sentencing technologies is not only, and not primarily, a matter of technological development. It is a matter of a political and cultural climate and the relations of trust in a society.”

Cui, Gregory. “Evidence-Based Sentencing and the Taint of Dangerousness.” Yale Law Journal Forum 125 (2016): 315. http://bit.ly/1XLAvhL.

  • This short essay, published on the Yale Law Journal Forum, calls for greater scrutiny of “evidence-based sentencing,” in which past data are computed and used to predict a defendant’s future criminal behavior. The author suggests that these risk models may undermine the Constitution’s prohibition of bills of attainder, and may be unlawful for inflicting punishment without a judicial trial.

Tools & Processes Toward Algorithmic Scrutiny

Ananny, Mike, and Kate Crawford. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society (2016). http://bit.ly/2hvKc5x.

  • This paper critically analyzes calls to improve the transparency of algorithms, asking how the historical limitations of the transparency ideal carry over to computing.
  • By establishing “transparency as an ideal” the paper tracks the philosophical and historical lineage of this principle, attempting to establish what laws and provisions were put in place across the world to keep up with and enforce this ideal.
  • The paper goes on to detail the limits of transparency as an ideal, arguing, amongst other things, that it does not necessarily build trust, that it privileges one function (seeing) over others (say, understanding), and that it has numerous technical limitations.
  • The paper concludes that transparency is an inadequate way to govern algorithmic systems, and that accountability should instead be sought by looking across systems rather than inside them.

Datta, Anupam, Shayak Sen, and Yair Zick. “Algorithmic Transparency via Quantitative Input Influence.” Proceedings of the 37th IEEE Symposium on Security and Privacy. 2016. http://bit.ly/2hgyLTp.

  • This paper develops a family of Quantitative Input Influence (QII) measures “that capture the degree of influence of inputs on outputs of systems.” The aim is to generate a transparency report to accompany algorithmic decisions, both to explain those decisions and to detect algorithmic discrimination.
  • QII works by breaking “correlations between inputs to allow causal reasoning, and computes the marginal influence of inputs in situations where inputs cannot affect outcomes alone.”
  • Finds that these QII measures are useful in scrutinizing algorithms when “black box” access is available.
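As a rough illustration of the idea (not the paper’s actual QII estimator), the sketch below estimates a single input’s marginal influence by resampling that input from its marginal distribution — breaking its correlation with the other inputs — and counting how often a toy model’s decision flips; the function names, toy model, and data are all illustrative assumptions:

```python
import random

def unary_influence(model, dataset, feature_idx, trials=1000, seed=0):
    """Estimate the marginal influence of one input feature on a
    classifier's decisions, in the spirit of a unary QII measure:
    resample the feature from its marginal distribution and count
    how often the model's decision changes under the intervention."""
    rng = random.Random(seed)
    # The feature's marginal distribution, taken from the dataset itself.
    marginal = [row[feature_idx] for row in dataset]
    changed = 0
    for _ in range(trials):
        row = rng.choice(dataset)
        intervened = list(row)
        intervened[feature_idx] = rng.choice(marginal)  # the intervention
        if model(row) != model(intervened):
            changed += 1
    return changed / trials

# Toy model: approve whenever feature 0 (say, income) exceeds 50;
# feature 1 is never consulted, so its influence should be zero.
model = lambda x: x[0] > 50
data = [[i, i % 7] for i in range(0, 100, 5)]
print(unary_influence(model, data, 0))  # substantial influence
print(unary_influence(model, data, 1))  # 0.0 — the model ignores it
```

Because the intervention severs the feature from the rest of the input, this style of measure can attribute influence even when inputs are correlated, which is the property the bullet above highlights for “black box” access.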

Goodman, Bryce, and Seth Flaxman. “European Union regulations on algorithmic decision-making and a ‘right to explanation.’” arXiv preprint arXiv:1606.08813 (2016). http://bit.ly/2h6xpWi.

  • This paper analyzes the implications of a new EU law, to be enacted in 2018, that calls to “restrict automated individual decision-making (that is, algorithms that make decisions based on user level predictors) which ‘significantly affect’ users.” The law will also provide a “right to explanation,” allowing users to ask for an explanation of automated decisions made about them.
  • The paper, while acknowledging the challenges in implementing such laws, suggests that such regulations can spur computer scientists to create algorithms and decision making systems that are more accountable, can provide explanations, and do not produce discriminatory results.
  • The paper concludes by stating algorithms and computer systems should not aim to be simply efficient, but also fair and accountable. It is optimistic about the ability to put in place interventions to account for and correct discrimination.
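Neither the regulation nor the paper prescribes what an explanation must look like, but one minimal form it could take — sketched here purely as an assumption — is a per-feature decomposition of a linear model’s score, ranking the inputs that drove the decision:

```python
def explain_linear_decision(weights, bias, features, feature_names):
    """Decompose a linear model's score into per-feature contributions,
    one simple shape an automated 'explanation' could take."""
    contributions = {name: w * x
                     for name, w, x in zip(feature_names, weights, features)}
    score = bias + sum(contributions.values())
    decision = "approved" if score >= 0 else "declined"
    # Rank features by the magnitude of their contribution.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, ranked

decision, ranked = explain_linear_decision(
    weights=[0.8, -1.2, 0.3],
    bias=-0.5,
    features=[2.0, 1.0, 1.0],
    feature_names=["income", "debt", "tenure"],
)
print(decision)      # approved (score = -0.5 + 1.6 - 1.2 + 0.3 = 0.2)
print(ranked[0][0])  # "income" contributed most to the outcome
```

For nonlinear models the decomposition is far less straightforward, which is one reason the paper treats the “right to explanation” as a research challenge rather than a solved problem.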

Kizilcec, René F. “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface.” Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016. http://bit.ly/2hMjFUR.

  • This paper studies how the transparency of algorithms affects trust through an online field experiment in which participants enrolled in a MOOC were given different explanations of the computer-generated grade they received in the class.
  • The study found that “Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust.”
  • In conclusion, the study found that a balance of transparency was needed to maintain trust among participants, suggesting that full transparency of algorithmic processes and results may not translate into greater user trust.

Kroll, Joshua A., et al. “Accountable Algorithms.” University of Pennsylvania Law Review 165 (2016). http://bit.ly/2i6ipcO.

  • This paper suggests that policy and legal standards need to be updated given the increased use of algorithms to perform tasks and make decisions in arenas where people once did. An “accountability mechanism” is lacking in many of these automated decision-making processes.
  • The paper argues that mere transparency through disclosure of source code is inadequate when confronting questions of accountability. Rather, technology itself provides a key to creating algorithms and decision-making apparatuses more in line with our existing political and legal frameworks.
  • The paper assesses computational techniques that may make it possible to create accountable software and to reform specific cases of automated decision-making. For example, diversity and anti-discrimination requirements can be built into technology to ensure fidelity to policy choices.
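Among the techniques the paper assesses are cryptographic commitments, which let an agency prove that the same decision rule was applied to everyone without revealing the rule in advance. A minimal sketch of that idea follows; the function names and policy format are illustrative assumptions, not the paper’s implementation:

```python
import hashlib
import json
import secrets

def commit(policy: dict):
    """Publish a hash commitment to a decision policy before use, so an
    auditor can later verify the identical policy was applied throughout."""
    nonce = secrets.token_bytes(16)  # blinds the commitment against guessing
    payload = nonce + json.dumps(policy, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    return digest, nonce  # digest is published now; nonce revealed at audit time

def verify(digest: str, nonce: bytes, policy: dict) -> bool:
    """Recompute the commitment at audit time and compare."""
    payload = nonce + json.dumps(policy, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == digest

policy = {"threshold": 600, "weights": {"income": 0.8, "debt": -1.2}}
digest, nonce = commit(policy)
assert verify(digest, nonce, policy)                 # unchanged policy checks out
assert not verify(digest, nonce, {"threshold": 550}) # any alteration is detected
```

The commitment establishes procedural regularity — the rule was fixed before the decisions — which is a weaker but more verifiable property than full transparency of the rule itself.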

Documenting Hate

Shan Wang at NiemanLab: “A family’s garage vandalized with an image of a swastika and a hateful message targeted at Arabs. Jewish community centers receiving bomb threats. These are just a slice of the incidents of hate across the country after the election of Donald Trump — but getting reliable data on the prevalence of hate and bias crimes to answer questions about whether these sorts of crimes are truly on the rise is nearly impossible.

ProPublica, which led an effort of more than a thousand reporters and students across the U.S. to cover voting problems on Election Day as part of its Electionland project, is now leaning on the collaborative and data-driven Electionland model to track and cover hate crimes.

Documenting Hate, launched last week, is a hate and bias crime-tracking project headed up by ProPublica and supported by a coalition of news and digital media organizations, universities, and civil rights groups like Southern Poverty Law Center (which has been tracking hate groups across the country). Like Electionland, the project is seeking local partners, and will share its data with and guide local reporters interested in writing relevant stories.

“Hate crimes are inadequately tracked,” Scott Klein, assistant managing editor at ProPublica, said. “Local police departments do not report up hate crimes in any consistent way, so the federal data is woefully inadequate, and there’s no good national data on hate crimes. The data is at best locked up by local police departments, and the best we can know is a local undercount.”

Documenting Hate offers a form for anyone to report a hate or bias crime (emphasizing that “we are not law enforcement and will not report this information to the police,” nor will it “share your name and contact information with anybody outside our coalition without your permission”). ProPublica is working with Meedan (whose verification platform Check was also used for Electionland) and crowdsourced crisis-mapping group Ushahidi, as well as several journalism schools, to verify reports coming in through social channels. Ken Schwencke, who helped build the infrastructure for Electionland, is now focused on things like building backend search databases for Documenting Hate, which can be shared with local reporters. The hope is that many stories, interactives, and a comprehensive national database will emerge and paint a fuller picture of the scope of hate crimes in the U.S.

ProPublica is actively seeking local partners, who will have access to the data as well as advice on how to report on sensitive information (no partners to announce just yet, though there’s been plenty of inbound interest, according to Klein). Some of the organizations working with ProPublica were already seeking reader stories of their own….(More)”.

Analytics, Policy, and Governance

“The first available textbook on the rapidly growing and increasingly important field of government analytics” edited by Benjamin Ginsberg, Kathy Wagner Hill and Jennifer Bachner:  “This first textbook on the increasingly important field of government analytics provides invaluable knowledge and training for students of government in the synthesis, interpretation, and communication of “big data,” which is now an integral part of governance and policy making. Integrating all the major components of this rapidly growing field, this invaluable text explores the intricate relationship of data analytics to governance while providing innovative strategies for the retrieval and management of information….(More)”