Why Big Data Is a Big Deal for Cities


John M. Kamensky in Governing: “We hear a lot about “big data” and its potential value to government. But is it really fulfilling the high expectations that advocates have assigned to it? Is it really producing better public-sector decisions? It may be years before we have definitive answers to those questions, but new research suggests that it’s worth paying a lot of attention to.

University of Kansas Prof. Alfred Ho recently surveyed 65 mid-size and large cities to learn what is going on, on the front line, with the use of big data in making decisions. He found that big data has made it possible to “change the time span of a decision-making cycle by allowing real-time analysis of data to instantly inform decision-making.” This decision-making occurs in areas as diverse as program management, strategic planning, budgeting, performance reporting and citizen engagement.

Cities are natural repositories of big data that can be integrated and analyzed for policy- and program-management purposes. These repositories include data from public safety, education, health and social services, environment and energy, culture and recreation, and community and business development. They include both structured data, such as financial and tax transactions, and unstructured data, such as recorded sounds from gunshots and videos of pedestrian movement patterns. And they include data supplied by the public, such as the Boston residents who use a phone app to measure road quality and report problems.

These data repositories, Ho writes, are “fundamental building blocks,” but the challenge is to shift the ownership of data from separate departments to an integrated platform where the data can be shared.

There’s plenty of evidence that cities are moving in that direction and that they already are systematically using big data to make operational decisions. Among the 65 cities that Ho examined, he found that 49 have “some form of data analytics initiatives or projects” and that 30 have established “a multi-departmental team structure to do strategic planning for these data initiatives.”….The effective use of big data can lead to dialogs that cut across school-district, city, county, business and nonprofit-sector boundaries. But more importantly, it provides city leaders with the capacity to respond to citizens’ concerns more quickly and effectively….(More)”

Organizational crowdsourcing


Jeremy Morgan at Lippincott: “One of the most consequential insights from the study of organizational culture happens to have an almost irresistible grounding in basic common sense. When attempting to solve the challenges of today’s businesses, inviting a broad slice of the employee population into the conversation yields more creative, actionable solutions than restricting it to a small strategy or leadership team.

This recognition, that in order to uncover new business ideas and innovations, organizations must foster listening cultures and a meritocracy of best thinking, is fueling interest in organizational crowdsourcing — a discipline focused on employee connection, collaboration and ideation. Leaders at companies such as Roche, Bank of the West, Merck, Facebook and IBM, along with countless Silicon Valley companies for whom the “hackathon” is a major cultural event, have embraced employee crowdsourcing as a way to unlock organizational knowledge and promote empathy through technology.

The benefits of internal crowdsourcing are clear. First, it ensures that a company’s understanding of key change drivers and potential strategic priorities is grounded in the organization’s everyday reality and not abstract hypotheses developed by a team of strategists. Second, employees inherently believe in and want to own the implementation of ideas that they generate through crowdsourcing. These are ideas borne of the culture for the culture, and are less likely to run aground on the rocks of employee indifference….

How can this be achieved through organizational crowdsourcing?

There is no out-of-the-box solution. Each campaign has to organically surface areas of focus for further inquiries, develop a framework and set of questions to guide participation and ignite conversations, and then analyze and communicate results in a way that helps bring solutions to life. But there are some key principles that will maximize the success of any crowdsourcing effort.

Obtaining insightful and actionable answers boils down to asking questions at just the right altitude. If the questions are too broad and open-ended — “How can we make our workplace better?” — the usefulness of the feedback will suffer; you will likely hear responses like “juice bars” and “massage therapists.” If the questions are too narrow — “What kind of lighting do we need in our conference rooms?” — you limit people’s opportunity to use their creativity. But the answers are likely to spark a conversation if people are asked, “How can we create spaces that allow us to generate ideas more effectively?” Conversation will flow to discussion of breaking down physical barriers in office design, building social “hubs” and investing in live events that allow employees from disparate geographies to meet in person and solve problems together.

On the technology side, crowdsourcing platforms such as Jive Software and UserVoice, among others, make it easy to bring large numbers of employees together to gather, build upon and prioritize new ideas and innovation efforts, from process simplification and product development to the transformation of customer experiences. Respondents can vote on other people’s suggestions and add comments.

By facilitating targeted conversations across time zones, geographies and corporate functions, crowdsourcing makes possible a new way of listening: of harnessing an organization’s collective wisdom to achieve action by a united and inspired employee population. It’s amazing to see the thoughtfulness, precision and energy unleashed by crowdsourcing efforts. People genuinely want to contribute to their company’s success if you open the doors and let them.

Taking a page from the Silicon Valley hackathon, organizational crowdsourcing campaigns are structured as events of limited duration focused on a specific challenge or business problem….(More)”

Corporate Social Responsibility for a Data Age


Stefaan G. Verhulst in the Stanford Social Innovation Review: “Proprietary data can help improve and save lives, but fully harnessing its potential will require a cultural transformation in the way companies, governments, and other organizations treat and act on data….

We live, as it is now common to point out, in an era of big data. The proliferation of apps, social media, and e-commerce platforms, as well as sensor-rich consumer devices like mobile phones, wearable devices, commercial cameras, and even cars, generates zettabytes of data about the environment and about us.

Yet much of the most valuable data resides with the private sector—for example, in the form of click histories, online purchases, sensor data, and call data records. This limits its potential to benefit the public and to turn data into a social asset. Consider how data held by business could help improve policy interventions (such as better urban planning) or resiliency at a time of climate change, or help design better public services to increase food security.

Data responsibility suggests steps that organizations can take to break down these private barriers and foster so-called data collaboratives, or ways to share their proprietary data for the public good. For the private sector, data responsibility represents a new type of corporate social responsibility for the 21st century.

While Nepal’s Ncell belongs to a relatively small group of corporations that have shared their data, there are a few encouraging signs that the practice is gaining momentum. In Jakarta, for example, Twitter exchanged some of its data with researchers who used it to gather and display real-time information about massive floods. The resulting website, PetaJakarta.org, enabled better flood assessment and management processes. And in Senegal, the Data for Development project has brought together leading cellular operators to share anonymous data to identify patterns that could help improve health, agriculture, urban planning, energy, and national statistics.

Examples like these suggest that proprietary data can help improve and save lives. But to fully harness the potential of data, data holders need to fulfill at least three conditions. I call these the “three pillars of data responsibility.”…

The difficulty of translating insights into results points to some of the larger social, political, and institutional shifts required to achieve the vision of data responsibility in the 21st century. The move from data shielding to data sharing will require that we make a cultural transformation in the way companies, governments, and other organizations treat and act on data. We must incorporate new levels of pro-activeness, and make often-unfamiliar commitments to transparency and accountability.

By way of conclusion, here are four immediate steps—essential but not exhaustive—we can take to move forward:

  1. Data holders should issue a public commitment to data responsibility so that it becomes the default—an expected, standard behavior within organizations.
  2. Organizations should hire data stewards to determine what and when to share, and how to protect and act on data.
  3. We must develop a data responsibility decision tree to assess the value and risk of corporate data along the data lifecycle.
  4. Above all, we need a data responsibility movement; it is time to demand data responsibility to ensure data improves and safeguards people’s lives…(More)”

Rules for a Flat World – Why Humans Invented Law and How to Reinvent It for a Complex Global Economy


Book by Gillian Hadfield: “… picks up where New York Times columnist Thomas Friedman left off in his influential 2005 book, The World is Flat. Friedman was focused on the infrastructure of communications and technology: the new web-based platform that allows business to follow the hunt for lower costs, higher value and greater efficiency around the planet, seemingly oblivious to the boundaries of nation states. Hadfield peels back this technological platform to look at the ‘structure that lies beneath’—our legal infrastructure, the platform of rules about who can do what, when and how. Often taken for granted, economic growth throughout human history has depended at least as much on the evolution of new systems of rules to support ever-more complex modes of cooperation and trade as it has on technological innovation. When Google rolled out YouTube in over one hundred countries around the globe simultaneously, for example, it faced not only the challenges of technology but also the staggering problem of how to build success in the context of a bewildering and often conflicting patchwork of nation-state-based laws and legal systems affecting every aspect of the business: contract, copyright, encryption, censorship, advertising and more. Google is not alone. A study presented at the World Economic Forum in Davos in 2011 found that for global firms, the number one challenge of the modern economy is increasing complexity, and the number one source of complexity is law. Today, even our startups, the engines of economic growth, are global from Day One.

Put simply, the law and legal methods on which we currently rely have failed to evolve along with technology. They are increasingly unable to cope with the speed, complexity, and constant border-crossing of our new globally inter-connected environment. Our current legal systems are still rooted in the politics-based nation state platform on which the industrial revolution was built. Hadfield argues that even though these systems supported fantastic growth over the past two centuries, today they are too slow, costly, cumbersome and localized to support the exponential rise in economic complexity they fostered. …

The answer to our troubles with law, however, is not the one critics usually reach for—to have less of it. Recognizing that law provides critical infrastructure for the cooperation and collaboration on which economic growth is built is the first step, Hadfield argues, to building a legal environment that does more of what we need it to do and less of what we don’t. …(More)”

State of Open Corporate Data: Wins and Challenges Ahead


Sunlight Foundation: “For many people working to open data and reduce corruption, the past year could be summed up in two words: “Panama Papers.” The transcontinental investigation by a team from the International Consortium of Investigative Journalists (ICIJ) blew open the murky world of offshore company registration. It put corporate transparency high on the agenda of countries all around the world and helped lead to some notable advances in access to official company register data….

While most companies are created and operated for legitimate economic activity, there is a small percentage that aren’t. Entities involved in corruption, money laundering, fraud and tax evasion frequently use such companies as vehicles for their criminal activity. “The Idiot’s Guide to Money Laundering,” from Global Witness, shows how easy it is to use layer after layer of shell companies to hide the identity of the person who controls and benefits from the activities of the network. The World Bank’s “Puppet Masters” report found that over 70% of grand corruption cases, in fact, involved the use of offshore vehicles.

For years, OpenCorporates has advocated for company information to be in the public domain as open data, so it is usable and comparable.  It was the public reaction to Panama Papers, however, that made it clear that due diligence requires global data sets and beneficial registries are key for integrity and progress.

The call for accountability and action was clear from the aftermath of the leak. ICIJ, the journalists involved and advocates have called for tougher action on prosecutions and more transparency measures: open corporate registers and beneficial ownership registers. A series of workshops organized by the B20 showed that business also needed public beneficial ownership registers….

Last year the UK became the first country in the world to collect and publish who controls and benefits from companies in a structured format, and as open data. Just a few days later, we were able to add the information to OpenCorporates. The UK data, therefore, is one of a kind, and has been highly anticipated by transparency skeptics and advocates alike. So far, things are looking good. Fifteen other countries have committed to having a public beneficial ownership register, including Nigeria, Afghanistan, Germany, Indonesia, New Zealand and Norway. Denmark has announced its first public beneficial ownership data will be published in June 2017. It’s likely to be open data.

This progress isn’t limited to beneficial ownership. It is also being seen in the opening up of corporate registers, which are what OpenCorporates calls “core company data.” In 2016, more countries started releasing company registers as open data, including Japan (with over 4.4 million companies), Israel, Virginia, Slovenia, Texas, Singapore and Bulgaria. We’ve also had a great start to 2017, with France publishing its central company database as open data on January 5th.

As more states have embraced open data, the USA jumped from an average score of 19/100 to 30/100. Singapore rose from 0 to 20, the Slovak Republic from 20 to 40, and Bulgaria went from 35 to 90. Japan rose from 0 to 70 — the biggest increase of the year….(More)”

Social Media and the Internet of Things towards Data-Driven Policymaking in the Arab World: Potential, Limits and Concerns


Paper by Fadi Salem: “The influence of social media has continued to grow globally over the past decade. During 2016 social media played a highly influential role in what has been described as a “post truth” era in policymaking, diplomacy and political communication. For example, social media “bots” arguably played a key role in influencing public opinion globally, whether on the political or public policy levels. Such practices rely heavily on big data analytics, artificial intelligence and machine learning algorithms, not just in gathering and crunching public views and sentiments, but more so in pro-actively influencing public opinions, decisions and behaviors. Some of these government practices undermined traditional information mediums, triggered foreign policy crises, impacted political communication and disrupted established policy formulation cycles.

On the other hand, the digital revolution has expanded the horizon of possibilities for development, governance and policymaking. A new disruptive transformation is characterized by a fusion of inter-connected technologies where the digital, physical and biological worlds converge. This inter-connectivity is generating — and consuming — an enormous amount of data that is changing the ways policies are conducted, decisions are taken and day-to-day operations are carried out. Within this context, ‘big data’ applications are increasingly becoming critical elements of policymaking. Coupled with the rise of a critical mass of social media users globally, this ubiquitous connectivity and data revolution is promising major transformations in modes of governance, policymaking and citizen-government interaction.

In the Arab region, observations from public-sector and decision-making organizations suggest that there is limited understanding of the real potential, the limitations, and the public concerns surrounding these big data sources. This report contextualizes the findings in light of the socio-technical transformations taking place in the region, by exploring the growth of social media and building on past editions in the series. The objective is to explore and assess multiple aspects of the ongoing digital transformation in the Arab world and highlight some of the policy implications on a regional level. More specifically, the report aims to better inform our understanding of the convergence of social media and IoT data as sources of big data and their potential impact on policymaking and governance in the region. Ultimately, in light of the availability of massive amounts of data from physical objects and people, the questions tackled in the research are: What is the potential for data-driven policymaking and governance in the region? What are the limitations? And most importantly, what are the public concerns that need to be addressed by policymakers as they embark on the next phase of the digital governance transformation in the region?

In the Arab region, there are already numerous experiments and applications where data from social media and the “Internet of Things” (IoT) are informing and influencing government practices as sources of big data, effectively changing how societies and governments interact. The report has two main parts. In the first part, we explore the questions discussed in the previous paragraphs through a regional survey spanning the 22 Arab countries. In the second part, it explores growth and usage trends of influential social media platforms across the region, including Facebook, Twitter, LinkedIn and, for the first time, Instagram. The findings highlight important changes — and some stagnation — in the ways social media is infiltrating demographic layers in Arab societies, be it gender, age or language. Together, the findings provide important insights for guiding policymakers, business leaders and development efforts. More specifically, these findings can contribute to shaping directions and informing decisions on the future of governance and development in the Arab region….(More)”

Selected Readings on Algorithmic Scrutiny


By Prianka Srinivasan, Andrew Young and Stefaan Verhulst

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of algorithmic scrutiny was originally published in 2017.

Introduction

From government policy to criminal justice to our news feeds to business and consumer practices, the processes that shape our lives both online and off are increasingly driven by data and the complex algorithms used to form rulings or predictions. In most cases, these algorithms have created “black boxes” of decision making, where models remain inscrutable and inaccessible. It should therefore come as no surprise that several observers and policymakers are calling for more scrutiny of how algorithms are designed and work, particularly when their outcomes convey intrinsic biases or defy existing ethical standards.

While the concern about values in technology design is not new, recent developments in machine learning, artificial intelligence and the Internet of Things have increased the urgency to establish processes and develop tools to scrutinize algorithms.

In what follows, we have curated several readings covering the impact of algorithms on:

  • Information Intermediaries;
  • Governance;
  • Consumer Finance;
  • Justice.

In addition, we have selected a few readings that provide insight into possible processes and tools to establish algorithmic scrutiny.

Selected Reading List

Information Intermediaries

Governance

Consumer Finance

Justice

Tools & Processes Toward Algorithmic Scrutiny

Annotated Selected Reading List

Information Intermediaries

Diakopoulos, Nicholas. “Algorithmic accountability: Journalistic investigation of computational power structures.” Digital Journalism 3.3 (2015): 398-415. http://bit.ly/.

  • This paper attempts to substantiate the notion of accountability for algorithms, particularly how they relate to media and journalism. It puts forward the notion of “algorithmic power,” analyzing the framework of influence such systems exert, and also introduces some of the challenges in the practice of algorithmic accountability, particularly for computational journalists.
  • Offers a basis for how algorithms can be analyzed, built in terms of the types of decisions algorithms make in prioritizing, classifying, associating, and filtering information.

Diakopoulos, Nicholas, and Michael Koliska. “Algorithmic transparency in the news media.” Digital Journalism (2016): 1-20. http://bit.ly/2hMvXdE.

  • This paper analyzes the increased use of “computational journalism,” and argues that though transparency remains a key tenet of journalism, the use of algorithms in gathering, producing and disseminating news undermines this principle.
  • It first analyzes what the ethical principle of transparency means to journalists and the media. It then highlights the findings from a focus-group study, where 50 participants from the news media and academia were invited to discuss three different case studies related to the use of algorithms in journalism.
  • They find two key barriers to algorithmic transparency in the media: “(1) a lack of business incentives for disclosure, and (2) the concern of overwhelming end-users with too much information.”
  • The study also finds a variety of opportunities for transparency across the “data, model, inference, and interface” components of an algorithmic system.

Napoli, Philip M. “The algorithm as institution: Toward a theoretical framework for automated media production and consumption.” Fordham University Schools of Business Research Paper (2013). http://bit.ly/2hKBHqo

  • This paper puts forward an analytical framework to discuss the algorithmic content creation of media and journalism in an attempt to “close the gap” on theory related to automated media production.
  • By borrowing concepts from institutional theory, the paper argues that algorithms are distinct forms of media institutions, and examines the cultural and political implications of this interpretation.
  • It urges further study in the field of “media sociology” to further unpack the influence of algorithms, and their role in institutionalizing certain norms, cultures and ways of thinking.

Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the politics of search engines matters.” The Information Society 16.3 (2000): 169-185. http://bit.ly/2ijzsrg.

  • This paper, published 16 years ago, provides an in-depth account of some of the risks related to search engine optimizations, and the biases and harms these can introduce, particularly on the nature of politics.
  • Suggests search engines can be designed to account for these political dimensions, and better correlate with the ideal of the World Wide Web as being a place that is open, accessible and democratic.
  • According to the paper, policy (and not the free market) is the only way to spur change in this field, though the technical solutions currently available introduce further challenges of their own.

Gillespie, Tarleton. “The Relevance of Algorithms.” Media technologies: Essays on communication, materiality, and society (2014): 167. http://bit.ly/2h6ASEu.

  • This paper suggests that algorithms, now so pervasive that they undercut many aspects of our lives (Gillespie calls these “public relevance algorithms”), are fundamentally “producing and certifying knowledge.” In this ability to create a particular “knowledge logic,” algorithms are a primary feature of our information ecosystem.
  • The paper goes on to map 6 dimensions of these public relevance algorithms:
    • Patterns of inclusion
    • Cycles of anticipation
    • The evaluation of relevance
    • The promise of algorithmic objectivity
    • Entanglement with practice
    • The production of calculated publics
  • The paper concludes by highlighting the need for a sociological inquiry into the function, implications and contexts of algorithms, and to “soberly recognize their flaws and fragilities,” despite the fact that much of their inner workings remain hidden.

Rainie, Lee and Janna Anderson. “Code-Dependent: Pros and Cons of the Algorithm Age.” Pew Research Center. February 8, 2017. http://bit.ly/2kwnvCo.

  • This Pew Research Center report examines the benefits and negative impacts of algorithms as they become more influential in different sectors and aspects of daily life.
  • Through a scan of the research and practice, with a particular focus on the research of experts in the field, Rainie and Anderson identify seven key themes of the burgeoning Algorithm Age:
    • Algorithms will continue to spread everywhere
    • Good things lie ahead
    • Humanity and human judgment are lost when data and predictive modeling become paramount
    • Biases exist in algorithmically-organized systems
    • Algorithmic categorizations deepen divides
    • Unemployment will rise; and
    • The need grows for algorithmic literacy, transparency and oversight

Tufekci, Zeynep. “Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency.” Journal on Telecommunications & High Technology Law 13 (2015): 203. http://bit.ly/1JdvCGo.

  • This paper establishes some of the risks and harms in regard to algorithmic computation, particularly in their filtering abilities as seen in Facebook and other social media algorithms.
  • Suggests that the editorial decisions performed by algorithms can have significant influence on our political and cultural realms, and categorizes the types of harms that algorithms may have on individuals and their society.
  • Takes two case studies–one from the social media coverage of the Ferguson protests, the other on how social media can influence election turnouts–to analyze the influence of algorithms. In doing so, this paper lays out the “tip of the iceberg” in terms of some of the challenges and ethical concerns introduced by algorithmic computing.

Mittelstadt, Brent, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society (2016): 3(2). http://bit.ly/2kWNwL6

  • This paper provides significant background and analysis of the ethical context of algorithmic decision-making. It primarily seeks to map the ethical consequences of algorithms, which have adopted the role of a mediator between data and action within societies.
  • Develops a conceptual map of 6 ethical concerns:
      • Inconclusive Evidence
      • Inscrutable Evidence
      • Misguided Evidence
      • Unfair Outcomes
      • Transformative Effects
      • Traceability
  • The paper then reviews existing literature, which together with the map creates a structure to inform future debate.

Governance

Janssen, Marijn, and George Kuk. “The challenges and limits of big data algorithms in technocratic governance.” Government Information Quarterly 33.3 (2016): 371-377. http://bit.ly/2hMq4z6.

  • Considering the centrality of algorithms in enforcing policy and extending governance, this paper analyzes the “technocratic governance” that has emerged from the removal of humans from decision-making processes and the inclusion of algorithmic automation.
  • The paper argues that the belief in technocratic governance producing neutral and unbiased results, since their decision-making processes are uninfluenced by human thought processes, is at odds with studies that reveal the inherent discriminatory practices that exist within algorithms.
  • Suggests that algorithms are still bound by the biases of designers and policy-makers, and that accountability is needed to improve the functioning of an algorithm. In order to do so, we must acknowledge the “intersecting dynamics of algorithm as a sociotechnical materiality system involving technologies, data and people using code to shape opinion and make certain actions more likely than others.”

Just, Natascha, and Michael Latzer. “Governance by algorithms: reality construction by algorithmic selection on the Internet.” Media, Culture & Society (2016): 0163443716643157. http://bit.ly/2h6B1Yv.

  • This paper provides a conceptual framework on how to assess the governance potential of algorithms, asking how technology and software governs individuals and societies.
  • By understanding algorithms as institutions, the paper suggests that algorithmic governance puts in place more evidence-based and data-driven systems than traditional governance methods. The result is a form of governance that cares more about effects than causes.
  • The paper concludes by suggesting that algorithmic selection on the Internet tends to shape individuals’ realities and social orders by “increasing individualization, commercialization, inequalities, deterritorialization, and decreasing transparency, controllability, predictability.”

Consumer Finance

Hildebrandt, Mireille. “The dawn of a critical transparency right for the profiling era.” Digital Enlightenment Yearbook 2012 (2012): 41-56. http://bit.ly/2igJcGM.

  • Analyzes the use of consumer profiling by online businesses in order to target marketing and services to their needs. By establishing how this profiling relates to identification, the author also outlines some of the threats to democracy and the right of autonomy posed by these profiling algorithms.
  • The paper concludes by suggesting that cross-disciplinary transparency is necessary to design more accountable profiling techniques that can match the extension of “smart environments” that capture ever more data and information from users.

Reddix-Smalls, Brenda. “Credit Scoring and Trade Secrecy: An Algorithmic Quagmire or How the Lack of Transparency in Complex Financial Models Scuttled the Finance Market.” UC Davis Business Law Journal 12 (2011): 87. http://bit.ly/2he52ch

  • Analyzes the creation of predictive risk models in financial markets through algorithmic systems, particularly in regard to credit scoring. It suggests that these models were corrupted in order to maintain a competitive market advantage: “The lack of transparency and the legal environment led to the use of these risk models as predatory credit pricing instruments as opposed to accurate credit scoring predictive instruments.”
  • The paper suggests that without greater transparency of these financial risk models, and greater regulation of their abuse, another financial crisis like that of 2008 is highly likely.

Justice

Aas, Katja Franko. “Sentencing Transparency in the Information Age.” Journal of Scandinavian Studies in Criminology and Crime Prevention 5.1 (2004): 48-61. http://bit.ly/2igGssK.

  • This paper questions the use of predetermined sentencing in the US judicial system through the application of computer technology and sentencing information systems (SIS). By assessing the use of these systems in the English-speaking world and in Norway, the author suggests that such technological approaches to sentencing attempt to overcome accusations of mistrust, uncertainty and arbitrariness often leveled against the judicial system.
  • However, in their attempt to rebuild trust, such technological solutions can be seen as an attempt to remedy a flawed view of judges by the public. Therefore, the political and social climate must be taken into account when trying to reform these sentencing systems: “The use of the various sentencing technologies is not only, and not primarily, a matter of technological development. It is a matter of a political and cultural climate and the relations of trust in a society.”

Cui, Gregory. “Evidence-Based Sentencing and the Taint of Dangerousness.” Yale Law Journal Forum 125 (2016): 315. http://bit.ly/1XLAvhL.

  • This short essay, published in the Yale Law Journal Forum, calls for greater scrutiny of “evidence-based sentencing,” in which past data are computed and used to predict a defendant’s future criminal behavior. The author suggests that these risk models may run afoul of the Constitution’s prohibition of bills of attainder, which bars inflicting punishment without a judicial trial.

Tools & Processes Toward Algorithmic Scrutiny

Ananny, Mike and Crawford, Kate. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society. SAGE Publications. 2016. http://bit.ly/2hvKc5x.

  • This paper critically analyzes calls to improve the transparency of algorithms, asking how we can confront the historical limitations of the transparency ideal in computing.
  • By establishing “transparency as an ideal,” the paper traces the philosophical and historical lineage of this principle, examining what laws and provisions have been put in place around the world to keep up with and enforce it.
  • The paper goes on to detail the limits of transparency as an ideal, arguing, amongst other things, that it does not necessarily build trust, it privileges a certain function (seeing) over others (say, understanding) and that it has numerous technical limitations.
  • The paper ends by concluding that transparency is an inadequate way to govern algorithmic systems, and that accountability must acknowledge the ability to govern across systems.

Datta, Anupam, Shayak Sen, and Yair Zick. “Algorithmic Transparency via Quantitative Input Influence.” Proceedings of the 37th IEEE Symposium on Security and Privacy. 2016. http://bit.ly/2hgyLTp.

  • This paper develops a family of Quantitative Input Influence (QII) measures “that capture the degree of influence of inputs on outputs of systems.” The aim is a transparency report that accompanies any algorithmic decision, in order to explain the decision and detect algorithmic discrimination.
  • QII works by breaking “correlations between inputs to allow causal reasoning, and computes the marginal influence of inputs in situations where inputs cannot affect outcomes alone.”
  • Finds that these QII measures are useful in scrutinizing algorithms when “black box” access is available.
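The core intervention behind QII can be illustrated with a small sketch (this is an illustrative toy, not the authors’ implementation; the `model`, `unary_influence`, and `dataset` names are hypothetical): to measure one input’s influence, replace it with draws from its marginal distribution, breaking its correlation with the other inputs, and count how often the model’s output changes.

```python
# Toy sketch of a unary Quantitative Input Influence (QII) measure:
# intervene on one feature by substituting values drawn from its
# marginal distribution and count how often the output flips.

def model(income, zip_code, age):
    # Hypothetical loan-approval rule: only income actually matters.
    return 1 if income >= 50 else 0

def unary_influence(model, dataset, feature_idx):
    """Fraction of (row, substitute) pairs for which intervening on
    one feature changes the model's output."""
    changes = trials = 0
    marginal = [row[feature_idx] for row in dataset]
    for row in dataset:
        for substitute in marginal:
            intervened = list(row)
            intervened[feature_idx] = substitute
            if model(*intervened) != model(*row):
                changes += 1
            trials += 1
    return changes / trials

dataset = [(30, "A", 25), (60, "B", 40), (45, "A", 33), (80, "C", 51)]
for idx, name in enumerate(["income", "zip_code", "age"]):
    print(name, unary_influence(model, dataset, idx))
```

On this toy data only income shows nonzero influence; a per-input report of exactly this kind is what QII’s proposed transparency reports would surface. The measures in the paper go further, handling sets of inputs and aggregating marginal influence, but the intervention-then-measure pattern is the same.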

Goodman, Bryce, and Seth Flaxman. “European Union regulations on algorithmic decision-making and a ‘right to explanation’.” arXiv preprint arXiv:1606.08813 (2016). http://bit.ly/2h6xpWi.

  • This paper analyzes the implications of a new EU law, to be enacted in 2018, that calls to “restrict automated individual decision-making (that is, algorithms that make decisions based on user level predictors) which ‘significantly affect’ users.” The law will also allow for a “right to explanation,” under which users can ask for an explanation of automated decisions made about them.
  • The paper, while acknowledging the challenges in implementing such laws, suggests that such regulations can spur computer scientists to create algorithms and decision making systems that are more accountable, can provide explanations, and do not produce discriminatory results.
  • The paper concludes by stating algorithms and computer systems should not aim to be simply efficient, but also fair and accountable. It is optimistic about the ability to put in place interventions to account for and correct discrimination.

Kizilcec, René F. “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface.” Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016. http://bit.ly/2hMjFUR.

  • This paper studies how the transparency of algorithms affects our impression of trust by conducting an online field experiment in which participants enrolled in a MOOC were given different explanations for the computer-generated grades in their class.
  • The study found that “Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust.”
  • In conclusion, the study found that a balance of transparency was needed to maintain trust amongst the participants, suggesting that pure transparency of algorithmic processes and results may not correlate with high feelings of trust amongst users.

Kroll, Joshua A., et al. “Accountable Algorithms.” University of Pennsylvania Law Review 165 (2016). http://bit.ly/2i6ipcO.

  • This paper suggests that policy and legal standards need to be updated given the increased use of algorithms to perform tasks and make decisions in arenas where people once did. An “accountability mechanism” is lacking in many of these automated decision-making processes.
  • The paper argues that mere transparency through the disclosure of source code is inadequate when confronting questions of accountability. Rather, technology itself provides a key to creating algorithms and decision-making apparatuses more in line with our existing political and legal frameworks.
  • The paper assesses some computational techniques that may make it possible to create accountable software and reform specific cases of automated decision-making. For example, diversity and anti-discrimination orders can be built into technology to ensure fidelity to policy choices.

Harnessing the Power of Feedback Loops


Thomas Kalil and David Wilkinson at the White House: “When it comes to strengthening the public sector, the Federal Government looks for new ways to achieve better results for the people we serve. One promising tool that has gained momentum across numerous sectors in the last few years is the adoption of feedback loops.  Systematically collecting data and learning from client and customer insights can benefit organizations across all sectors.

Collecting these valuable insights—and acting on them—remains an underutilized practice.  The people who receive services are the experts on their effectiveness and usefulness.  While the private sector has used customer feedback to improve products and services, the government and nonprofit sectors have often lagged behind.  User experience is a critically important factor in driving positive outcomes.  Getting honest feedback from service recipients can help nonprofit service providers and agencies at all levels of government ensure their work effectively addresses the needs of the people they serve. It’s equally important to close the loop by letting those who provided feedback know that their input was put to good use.

In September, the White House Office of Social Innovation and the White House Office of Science and Technology Policy (OSTP) hosted a workshop at the White House on data-driven feedback loops for the social and public sectors.  The event brought together leaders across the philanthropy, nonprofit, and business sectors who discussed ways to collect and utilize feedback.

The program featured organizations in the nonprofit sector that use feedback to learn what works, what might not be working as well, and how to fix it. One organization, which offers comprehensive employment services to men and women with recent criminal convictions, explained that it has sought feedback from clients on its training program and learned that many people were struggling to find their work site locations and get to the sessions on time. The organization acted on this feedback, shifting their start times and providing maps and clearer directions to their participants.  These two simple changes increased both participation in and satisfaction with their program.

Another organization collected feedback to learn whether factory workers attend and understand trainings on fire evacuation procedures. By collecting and acting on this feedback in Brazil, the organization was able to help a factory reduce fire-drill evacuation time from twelve minutes to two minutes—a life-saving result of seeking feedback.

With results such as these in mind, the White House has emphasized the importance of evidence and data-driven solutions across the Federal Government.  …

USAID works to end extreme poverty in over 100 countries around the world. The Agency has recently changed its operational policy to enable programs to adapt to feedback from the communities in which they work. They did this by removing bureaucratic obstacles and encouraging more flexibility in their program design. For example, if a USAID-funded project designed to increase agricultural productivity is unexpectedly impacted by drought, the original plan may no longer be relevant or effective; the community may want drought-resistant crops instead.  The new, more flexible policy is intended to ensure that such programs can pivot if a community provides feedback that its needs have changed or projects are not succeeding…(More)”

Protecting One’s Own Privacy in a Big Data Economy


Anita L. Allen in the Harvard Law Review Forum: “Big Data is the vast quantities of information amenable to large-scale collection, storage, and analysis. Using such data, companies and researchers can deploy complex algorithms and artificial intelligence technologies to reveal otherwise unascertained patterns, links, behaviors, trends, identities, and practical knowledge. The information that comprises Big Data arises from government and business practices, consumer transactions, and the digital applications sometimes referred to as the “Internet of Things.” Individuals invisibly contribute to Big Data whenever they live digital lifestyles or otherwise participate in the digital economy, such as when they shop with a credit card, get treated at a hospital, apply for a job online, research a topic on Google, or post on Facebook.

Privacy advocates and civil libertarians say Big Data amounts to digital surveillance that potentially results in unwanted personal disclosures, identity theft, and discrimination in contexts such as employment, housing, and financial services. These advocates and activists say typical consumers and internet users do not understand the extent to which their activities generate data that is being collected, analyzed, and put to use for varied governmental and business purposes.

I have argued elsewhere that individuals have a moral obligation to respect not only other people’s privacy but also their own. Here, I wish to comment first on whether the notion that individuals have a moral obligation to protect their own information privacy is rendered utterly implausible by current and likely future Big Data practices; and on whether a conception of an ethical duty to self-help in the Big Data context may be more pragmatically framed as a duty to be part of collective actions encouraging business and government to adopt more robust privacy protections and data security measures….(More)”

Empirical data on the privacy paradox


Benjamin Wittes and Emma Kohse at Brookings: “The contemporary debate about the effects of new technology on individual privacy centers on the idea that privacy is an eroding value. The erosion is ongoing and takes place because of the government and big corporations that collect data on us all: In the consumer space, technology and the companies that create it erode privacy, as consumers trade away their solitude either unknowingly or in exchange for convenience and efficiency.

On January 13, we released a Brookings paper that challenges this idea. Entitled “The Privacy Paradox II: Measuring the Privacy Benefits of Privacy Threats,” the paper tries to measure the extent to which this focus ignores the significant privacy benefits of the technologies that concern privacy advocates. We conclude that quantifiable effects in consumer behavior strongly support the reality of these benefits.

In 2015, one of us, writing with Jodie Liu, laid out the basic idea in a paper published by Brookings called “The Privacy Paradox: the Privacy Benefits of Privacy Threats.” (The title, incidentally, became the name of Lawfare’s privacy-oriented subsidiary page.) Individuals, we argued, might be more concerned with keeping private information from specific people—friends, neighbors, parents, or even store clerks—than from large, remote corporations, and they might actively prefer to give information to remote corporations as a way of shielding it from those immediately around them. By failing to associate this concern with the concept of privacy, academic and public debates tend to ignore the countervailing privacy benefits associated with privacy threats, and thereby keep score in a way biased toward the threats side of the ledger.

To cite a few examples, an individual may choose to use a Kindle e-reader to read Fifty Shades of Grey precisely because she values the privacy benefit of hiding her book choice from the eyes of people on the bus or the store clerk at the book store, rather than for reasons of mere convenience. This privacy benefit, for many consumers, can outweigh the privacy concern presented by Amazon’s data mining. At the very least, the privacy benefits of the Kindle should enter into the discussion.

In this paper, we tried to begin the task of measuring those effects, testing the reasoning that supported the thesis of “The Privacy Paradox” using Google Surveys, an online survey tool….(More)”.