A City Is Not a Computer


Shannon Mattern at Places Journal: “…Modernity is good at renewing metaphors, from the city as machine, to the city as organism or ecology, to the city as cyborgian merger of the technological and the organic. Our current paradigm, the city as computer, appeals because it frames the messiness of urban life as programmable and subject to rational order. Anthropologist Hannah Knox explains, “As technical solutions to social problems, information and communications technologies encapsulate the promise of order over disarray … as a path to an emancipatory politics of modernity.” And there are echoes of the pre-modern, too. The computational city draws power from an urban imaginary that goes back millennia, to the city as an apparatus for record-keeping and information management.

We’ve long conceived of our cities as knowledge repositories and data processors, and they’ve always functioned as such. Lewis Mumford observed that when the wandering rulers of the European Middle Ages settled in capital cities, they installed a “regiment of clerks and permanent officials” and established all manner of paperwork and policies (deeds, tax records, passports, fines, regulations), which necessitated a new urban apparatus, the office building, to house its bureaus and bureaucracy. The classic example is the Uffizi (Offices) in Florence, designed by Giorgio Vasari in the mid-16th century, which provided an architectural template copied in cities around the world. “The repetitions and regimentations of the bureaucratic system” — the work of data processing, formatting, and storage — left a “deep mark,” as Mumford put it, on the early modern city.

Yet the city’s informational role began even earlier than that. Writing and urbanization developed concurrently in the ancient world, and those early scripts — on clay tablets, mud-brick walls, and landforms of various types — were used to record transactions, mark territory, celebrate ritual, and embed contextual information in landscape. Mumford described the city as a fundamentally communicative space, rich in information:

Through its concentration of physical and cultural power, the city heightened the tempo of human intercourse and translated its products into forms that could be stored and reproduced. Through its monuments, written records, and orderly habits of association, the city enlarged the scope of all human activities, extending them backwards and forwards in time. By means of its storage facilities (buildings, vaults, archives, monuments, tablets, books), the city became capable of transmitting a complex culture from generation to generation, for it marshaled together not only the physical means but the human agents needed to pass on and enlarge this heritage. That remains the greatest of the city’s gifts. As compared with the complex human order of the city, our present ingenious electronic mechanisms for storing and transmitting information are crude and limited.

Mumford’s city is an assemblage of media forms (vaults, archives, monuments, physical and electronic records, oral histories, lived cultural heritage); agents (architectures, institutions, media technologies, people); and functions (storage, processing, transmission, reproduction, contextualization, operationalization). It is a large, complex, and varied epistemological and bureaucratic apparatus. It is an information processor, to be sure, but it is also more than that.

Were he alive today, Mumford would reject the creeping notion that the city is simply the internet writ large. He would remind us that the processes of city-making are more complicated than writing parameters for rapid spatial optimization. He would inject history and happenstance. The city is not a computer. This seems an obvious truth, but it is being challenged now (again) by technologists (and political actors) who speak as if they could reduce urban planning to algorithms.

Why should we care about debunking obviously false metaphors? It matters because the metaphors give rise to technical models, which inform design processes, which in turn shape knowledges and politics, not to mention material cities. The sites and systems where we locate the city’s informational functions — the places where we see information-processing, storage, and transmission “happening” in the urban landscape — shape larger understandings of urban intelligence….(More)”

Toward a User-Centered Social Sector


Tris Lumley at Stanford Social Innovation Review: “Over the last three years, a number of trends have crystallized that I believe herald the promise of a new phase—perhaps even a new paradigm—for the social sector. I want to explore three of the most exciting, and sketch out where I believe they might take us and why we’d all do well to get involved.

  • The rise of feedback
  • New forms of collaboration
  • Disruption through technology

Taken individually, these three themes are hugely significant in their potential impact on the work of nonprofits and those that invest in them. But viewed together, as interwoven threads, I believe they have the potential to transform both how we work and the underlying fundamental incentives and structure of the social sector.

The rise of feedback

The nonprofit sector is built on a deep and rich history of community engagement. Yet, in a funding market that incentivizes accountability to funders, this strong tradition of listening, engagement, and ownership by primary constituents—the people and communities nonprofits exist to serve—has sometimes faded. Opportunities for funding can drive strategies. Practitioner experience and research evidence can shape program designs. Engagement with service users can become tokenistic, or shallow….

In recognition of this growing momentum, Keystone Accountability and New Philanthropy Capital (NPC) published a paper in 2016 to explore the relationship between impact measurement and user voice. It is our shared belief that many of the recent criticisms of the impact movement—such as impact reporting being used primarily for fundraising rather than improving programs—would be addressed if impact evidence and user voice were seen as two sides of the same coin, and we more routinely sought to synthesize our understanding of nonprofits’ programs from both aspects at once…

New forms of collaboration

As recent critiques of collective impact have pointed out, the social sector has a long history of collaboration. Yet it has not always been the default operating model of nonprofits or their funders. The fragmented nature of the social sector today exposes an urgent imperative for greater focus on collaboration….

Yet the need for greater collaboration and new forms to incentivize and enable it is increasing. Deepening austerity policies, the shrinking of the state in many countries, and the sheer scale of the social issues we face have driven the “demand” side of collaboration. The collective impact movement has certainly been one driver of momentum on the “supply” side, and a number of other forms of collaboration are emerging.

The Young People’s Foundation model, developed in the UK by John Lyon’s Charity, is one response to deepening cuts in nonprofit funding. Young People’s Foundations are new organizations that serve three purposes for nonprofits working with young people in the local area — creating a network, leading on collaborative funding bids and contracting processes, and sharing assets across the network.

Elsewhere, philanthropic donors and foundations are increasingly exploring collaboration in practical terms, through pooled grant funds that provide individual donors unrivalled leverage, and that allow groups of funders to benefit from each other’s strengths through coordination and shared strategies. The Dasra Girl Alliance in India is an example of a pooled fund that brings together philanthropic donors and institutional development funders, and fosters collaboration between the nonprofits it supports….

Disruption through technology

Technology might appear an incongruous companion to feedback and collaboration, which are both very human in nature, yet it’s likely to transform our sector….(More)”

‘Collective intelligence’ is not necessarily present in virtual groups


Jordan B. Barlow and Alan R. Dennis at LSE: “Do groups of smart people perform better than groups of less intelligent people?

Research published in Science magazine in 2010 reported that groups, like individuals, have a certain level of “collective intelligence,” such that some groups perform consistently well across many different types of tasks, while other groups perform consistently poorly. Collective intelligence is similar to individual intelligence, but at the group level.

Interestingly, the Science study found that collective intelligence was not related to the individual intelligence of group members; groups of people with higher intelligence did not perform better than groups with lower intelligence. Instead, the study found that high performing teams had members with higher social sensitivity – the ability to read the emotions of others using visual facial cues.

Social sensitivity is important when we sit across a table from each other. But what about online, when we exchange emails or text messages? Does social sensitivity matter when I can’t see your face?

We examined collective intelligence in an online environment in which groups used text-based computer-mediated communication. We followed the same procedures as the original Science study, which used the approach typically applied to measure individual intelligence. In individual intelligence tests, a person completes several small “tasks” or problems. An analysis of task scores typically demonstrates that the scores are correlated, meaning that if a person does well on one problem, it is likely that they did well on other problems….

The results were not what we expected. The correlations between our groups’ performance scores were either not statistically significant or significantly negative, as shown in Table 1. The average correlation between any two tasks was -0.05, indicating that performance on one task was not correlated with performance on other tasks. In other words, groups who performed well on one of the tasks were unlikely to perform well on the other tasks…
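The correlation analysis the authors describe can be sketched in a few lines of Python — a toy illustration with made-up group scores, not the study’s data or code: compute the Pearson correlation for every pair of tasks across groups, then average the pairwise correlations. A positive average would suggest a general collective-intelligence factor; a value near zero or below, as the authors found, would not.

```python
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Rows = groups, columns = scores on each task (hypothetical data).
# A "collective intelligence" factor would show up as positive
# correlations between every pair of task columns.
scores = [
    [0.9, 0.8, 0.7],  # group 1
    [0.4, 0.5, 0.6],  # group 2
    [0.7, 0.6, 0.8],  # group 3
    [0.2, 0.3, 0.1],  # group 4
]
tasks = list(zip(*scores))  # one column of scores per task

pairwise = [pearson(a, b) for a, b in combinations(tasks, 2)]
avg_r = sum(pairwise) / len(pairwise)
print(f"average inter-task correlation: {avg_r:.2f}")
```

With the fabricated scores above the tasks correlate positively by construction; in the authors’ data the analogous average came out at -0.05.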

Our findings challenge the conclusion reported in Science that groups have a general collective intelligence analogous to individual intelligence. Our study shows that no collective intelligence factor emerged when groups used a popular commercial text-based online tool. That is, when using tools with limited visual cues, groups that performed well on one task were no more likely to perform well on a different task. Thus the “collective intelligence” factor related to social sensitivity that was reported in Science is not collective intelligence; it is instead a factor associated with the ability to work well using face-to-face communication, and does not transcend media….(More)”

Crowdsourcing to Be the Future for Medical Research


PCORI: “Crowdsourcing isn’t just a quick way to get things done on the Internet. When used right, it can accelerate medical research and improve global cardiovascular health, according to a new best-practices “playbook” released by the American Heart Association (AHA) and the Patient-Centered Outcomes Research Institute (PCORI).

“The benefits of crowdsourcing are substantial,” said Rose Marie Robertson, MD, Chief Science Officer of the AHA, who took part in writing the guide. “You can get information from new perspectives and highly innovative ideas that might well not have occurred to you.”

Crowdsourcing Medical Research Priorities: A Guide for Funding Agencies is the work of Precision Medicine Advances using Nationally Crowdsourced Comparative Effectiveness Research (PRANCCER), a joint initiative launched in 2015 by the AHA and PCORI.

“Acknowledging the power of open, multidisciplinary research to drive medical progress, AHA and PCORI turned to the rapidly evolving methodology of crowdsourcing to find out what patients, clinicians, and researchers consider the most urgent priorities in cardiovascular medicine and to shape the direction and design of research targeting those priorities,” according to the guide.

“Engaging patients and other healthcare decision makers in identifying research needs and guiding studies is a hallmark of our patient-centered approach to research, and crowdsourcing offers great potential to catalyze such engagement,” said PCORI Executive Director Joe V. Selby, MD. “We hope the input we’ve received will help us develop new research funding opportunities that will lead to improved care for people with cardiovascular conditions.”

The playbook offers more than a dozen recommendations on the ins and outs of medical crowdsourcing. It stresses the need to have crystal clear objectives and questions, whether you’re dealing with patients, researchers, or clinicians. … (More)”

Participatory budgeting in Indonesia: past, present and future


IDS Practice Paper by Francesca Feruglio and Ahmad Rifai: “In 2015, Yayasan Kota Kita (Our City Foundation), an Indonesian civil society organisation, applied to Making All Voices Count for a practitioner research and learning grant.

Kota Kita is an organisation of governance practitioners who focus on urban planning and citizen participation in the design and development of cities. Following several years of experience with participatory budgeting in Solo city, their research set out to examine participatory budgeting processes in six Indonesian cities, to inform their work – and the work of others – strengthening citizen participation in urban governance.

Their research looked at:

  • the current status of participatory budgeting in six Indonesian cities
  • the barriers and enablers to implementing participatory budgeting
  • how government and CSOs can help make participatory budgeting more transparent, inclusive and impactful.

This practice paper describes Kota Kita and its work in more detail, and reflects on the history and evolution of participatory budgeting in Indonesia. In doing so, it contextualises some of the findings of the research, and discusses their implications.

Key Themes in this Paper

  • What are the risks and opportunities of institutionalising participation?
  • How do access to information and the use of new technologies affect participation in budget planning processes?
  • What does it take for participatory budgeting to be an empowering process for citizens?
  • How can participatory budgeting include hard-to-reach citizens and accommodate different citizens’ needs? …(More)”.

It takes more than social media to make a social movement


Hayley Tsukayama in the Washington Post: “President Trump may have used the power of social media to make his way into the White House, but now social media networks are showing that muscle can work for his opposition, too. Last week, more than 1 million marchers went to Washington and cities around the country — sparked by a Facebook post from one woman with no history of activism. This weekend, the Internet exploded again in discussion about Trump’s travel suspension order, and many used social media to get together and protest the decision.

Twitter said that more than 25 million tweets were sent about the order — as compared with 12 million about Trump’s inauguration. Facebook said that its users generated 151 million “likes, posts, comments and shares” related to the ban, less than the 208 million interactions generated about the inauguration. The companies didn’t reveal how many of those were aimed at organizing, but the social media calls to get people to protest are a testament to the power of these platforms to move people.

The real question, however, is whether this burgeoning new movement can avoid the fate of so many others kick-started by the power of social networks — only to find that it’s much harder to make political change than to make a popular hashtag….

Zeynep Tufekci, an associate professor at the University of North Carolina at Chapel Hill who has written a forthcoming book on the power and fragility of movements born of social media, found in her research that the very ability of these movements to scale quickly is, in part, why they can also fall apart so quickly compared with traditional grass-roots campaigns….

Now, organizers can bypass the time it takes to build up the infrastructure for a massive march and all the publicity that comes with it. But that also means their high-profile movements skip some crucial organizing steps.

“Digitally networked movements look like the old movements. But by the time the civil rights movement had such a large march, they’d been working on [the issues] for 10 years — if not more,” Tufekci said. The months or even years spent discussing logistics, leafleting and building a coalition, she said, were crucial to the success of the civil rights movements. Other successful efforts, such as the Human Rights Campaign’s efforts to end the “don’t ask, don’t tell” policy against allowing gay people to serve openly in the military were also rooted in organization structures that had been developing and refining their demands for years to present a unified front. Movements organized over social networks often have more trouble jelling, she said, particularly if different factions air their differences on Facebook and Twitter, drawing attention to fractures in a movement….(More).”

Selected Readings on Algorithmic Scrutiny


By Prianka Srinivasan, Andrew Young and Stefaan Verhulst

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of algorithmic scrutiny was originally published in 2017.

Introduction

From government policy, to criminal justice, to our news feeds, to business and consumer practices, the processes that shape our lives both online and off are more and more driven by data and the complex algorithms used to form rulings or predictions. In most cases, these algorithms have created “black boxes” of decision making, where models remain inscrutable and inaccessible. It should therefore come as no surprise that several observers and policymakers are calling for more scrutiny of how algorithms are designed and work, particularly when their outcomes convey intrinsic biases or defy existing ethical standards.

While the concern about values in technology design is not new, recent developments in machine learning, artificial intelligence and the Internet of Things have increased the urgency to establish processes and develop tools to scrutinize algorithms.

In what follows, we have curated several readings covering the impact of algorithms on:

  • Information Intermediaries
  • Governance
  • Finance
  • Justice

In addition, we have selected a few readings that provide insight on possible processes and tools to establish algorithmic scrutiny.

Selected Reading List

Information Intermediaries

Governance

Consumer Finance

Justice

Tools & Process Toward Algorithmic Scrutiny

Annotated Selected Reading List

Information Intermediaries

Diakopoulos, Nicholas. “Algorithmic accountability: Journalistic investigation of computational power structures.” Digital Journalism 3.3 (2015): 398-415. http://bit.ly/.

  • This paper attempts to substantiate the notion of accountability for algorithms, particularly how they relate to media and journalism. It puts forward the notion of “algorithmic power,” analyzing the framework of influence such systems exert, and also introduces some of the challenges in the practice of algorithmic accountability, particularly for computational journalists.
  • Offers a basis for analyzing algorithms in terms of the types of decisions they make in prioritizing, classifying, associating, and filtering information.

Diakopoulos, Nicholas, and Michael Koliska. “Algorithmic transparency in the news media.” Digital Journalism (2016): 1-20. http://bit.ly/2hMvXdE.

  • This paper analyzes the increased use of “computational journalism,” and argues that though transparency remains a key tenet of journalism, the use of algorithms in gathering, producing and disseminating news undermines this principle.
  • It first analyzes what the ethical principle of transparency means to journalists and the media. It then highlights the findings from a focus-group study, where 50 participants from the news media and academia were invited to discuss three different case studies related to the use of algorithms in journalism.
  • They find two key barriers to algorithmic transparency in the media: “(1) a lack of business incentives for disclosure, and (2) the concern of overwhelming end-users with too much information.”
  • The study also finds a variety of opportunities for transparency across the “data, model, inference, and interface” components of an algorithmic system.

Napoli, Philip M. “The algorithm as institution: Toward a theoretical framework for automated media production and consumption.” Fordham University Schools of Business Research Paper (2013). http://bit.ly/2hKBHqo

  • This paper puts forward an analytical framework to discuss the algorithmic content creation of media and journalism in an attempt to “close the gap” on theory related to automated media production.
  • By borrowing concepts from institutional theory, the paper argues that algorithms are distinct forms of media institutions, and explores the cultural and political implications of this interpretation.
  • It urges further study in the field of “media sociology” to further unpack the influence of algorithms, and their role in institutionalizing certain norms, cultures and ways of thinking.

Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the politics of search engines matters.” The Information Society 16.3 (2000): 169-185. http://bit.ly/2ijzsrg.

  • This paper, published 16 years ago, provides an in-depth account of some of the risks related to search engine optimizations, and the biases and harms these can introduce, particularly on the nature of politics.
  • Suggests search engines can be designed to account for these political dimensions, and better correlate with the ideal of the World Wide Web as being a place that is open, accessible and democratic.
  • According to the paper, policy (and not the free market) is the only way to spur change in this field, though the current technical solutions we have introduce further challenges.

Gillespie, Tarleton. “The Relevance of Algorithms.” Media technologies: Essays on communication, materiality, and society (2014): 167. http://bit.ly/2h6ASEu.

  • This paper suggests that the algorithms now embedded in many aspects of our lives (Gillespie calls these “public relevance algorithms”) are fundamentally “producing and certifying knowledge.” In this ability to create a particular “knowledge logic,” algorithms are a primary feature of our information ecosystem.
  • The paper goes on to map 6 dimensions of these public relevance algorithms:
    • Patterns of inclusion
    • Cycles of anticipation
    • The evaluation of relevance
    • The promise of algorithmic objectivity
    • Entanglement with practice
    • The production of calculated publics
  • The paper concludes by highlighting the need for a sociological inquiry into the function, implications and contexts of algorithms, and to “soberly recognize their flaws and fragilities,” despite the fact that much of their inner workings remain hidden.

Rainie, Lee and Janna Anderson. “Code-Dependent: Pros and Cons of the Algorithm Age.” Pew Research Center. February 8, 2017. http://bit.ly/2kwnvCo.

  • This Pew Research Center report examines the benefits and negative impacts of algorithms as they become more influential in different sectors and aspects of daily life.
  • Through a scan of the research and practice, with a particular focus on the research of experts in the field, Rainie and Anderson identify seven key themes of the burgeoning Algorithm Age:
    • Algorithms will continue to spread everywhere
    • Good things lie ahead
    • Humanity and human judgment are lost when data and predictive modeling become paramount
    • Biases exist in algorithmically-organized systems
    • Algorithmic categorizations deepen divides
    • Unemployment will rise; and
    • The need grows for algorithmic literacy, transparency and oversight

Tufekci, Zeynep. “Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency.” Journal on Telecommunications & High Technology Law 13 (2015): 203. http://bit.ly/1JdvCGo.

  • This paper establishes some of the risks and harms in regard to algorithmic computation, particularly in their filtering abilities as seen in Facebook and other social media algorithms.
  • Suggests that the editorial decisions performed by algorithms can have significant influence on our political and cultural realms, and categorizes the types of harms that algorithms may have on individuals and their society.
  • Takes two case studies–one from the social media coverage of the Ferguson protests, the other on how social media can influence election turnouts–to analyze the influence of algorithms. In doing so, this paper lays out the “tip of the iceberg” in terms of some of the challenges and ethical concerns introduced by algorithmic computing.

Mittelstadt, Brent, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society (2016): 3(2). http://bit.ly/2kWNwL6

  • This paper provides significant background and analysis of the ethical context of algorithmic decision-making. It primarily seeks to map the ethical consequences of algorithms, which have adopted the role of a mediator between data and action within societies.
  • Develops a conceptual map of 6 ethical concerns:
    • Inconclusive Evidence
    • Inscrutable Evidence
    • Misguided Evidence
    • Unfair Outcomes
    • Transformative Effects
    • Traceability
  • The paper then reviews existing literature, which together with the map creates a structure to inform future debate.

Governance

Janssen, Marijn, and George Kuk. “The challenges and limits of big data algorithms in technocratic governance.” Government Information Quarterly 33.3 (2016): 371-377. http://bit.ly/2hMq4z6.

  • Considering the centrality of algorithms in enforcing policy and extending governance, this paper analyzes the “technocratic governance” that has emerged from the removal of humans from decision-making processes and the inclusion of algorithmic automation.
  • The paper argues that the belief in technocratic governance producing neutral and unbiased results, since their decision-making processes are uninfluenced by human thought processes, is at odds with studies that reveal the inherent discriminatory practices that exist within algorithms.
  • Suggests that algorithms are still bound by the biases of designers and policy-makers, and that accountability is needed to improve the functioning of an algorithm. In order to do so, we must acknowledge the “intersecting dynamics of algorithm as a sociotechnical materiality system involving technologies, data and people using code to shape opinion and make certain actions more likely than others.”

Just, Natascha, and Michael Latzer. “Governance by algorithms: reality construction by algorithmic selection on the Internet.” Media, Culture & Society (2016): 0163443716643157. http://bit.ly/2h6B1Yv.

  • This paper provides a conceptual framework on how to assess the governance potential of algorithms, asking how technology and software governs individuals and societies.
  • By understanding algorithms as institutions, the paper suggests that algorithmic governance puts in place more evidence-based and data-driven systems than traditional governance methods. The result is a form of governance that cares more about effects than causes.
  • The paper concludes by suggesting that algorithmic selection on the Internet tends to shape individuals’ realities and social orders by “increasing individualization, commercialization, inequalities, deterritorialization, and decreasing transparency, controllability, predictability.”

Consumer Finance

Hildebrandt, Mireille. “The dawn of a critical transparency right for the profiling era.” Digital Enlightenment Yearbook 2012 (2012): 41-56. http://bit.ly/2igJcGM.

  • Analyzes the use of consumer profiling by online businesses in order to target marketing and services to their needs. By establishing how this profiling relates to identification, the author also offers some of the threats to democracy and the right of autonomy posed by these profiling algorithms.
  • The paper concludes by suggesting that cross-disciplinary transparency is necessary to design more accountable profiling techniques that can match the extension of “smart environments” that capture ever more data and information from users.

Reddix-Smalls, Brenda. “Credit Scoring and Trade Secrecy: An Algorithmic Quagmire or How the Lack of Transparency in Complex Financial Models Scuttled the Finance Market.” UC Davis Business Law Journal 12 (2011): 87. http://bit.ly/2he52ch

  • Analyzes the creation of predictive risk models in financial markets through algorithmic systems, particularly in regard to credit scoring. It suggests that these models were corrupted in order to maintain a competitive market advantage: “The lack of transparency and the legal environment led to the use of these risk models as predatory credit pricing instruments as opposed to accurate credit scoring predictive instruments.”
  • The paper suggests that without greater transparency of these financial risk models, and greater regulation over their abuse, another financial crisis like that of 2008 is highly likely.

Justice

Aas, Katja Franko. “Sentencing Transparency in the Information Age.” Journal of Scandinavian Studies in Criminology and Crime Prevention 5.1 (2004): 48-61. http://bit.ly/2igGssK.

  • This paper questions the use of predetermined sentencing in the US judicial system through the application of computer technology and sentencing information systems (SIS). By comparing the use of these systems in the English-speaking world and Norway, the author suggests that such technological approaches to sentencing attempt to overcome accusations of mistrust, uncertainty and arbitrariness often leveled against the judicial system.
  • However, in their attempt to rebuild trust, such technological solutions can be seen as an attempt to remedy a flawed view of judges by the public. Therefore, the political and social climate must be taken into account when trying to reform these sentencing systems: “The use of the various sentencing technologies is not only, and not primarily, a matter of technological development. It is a matter of a political and cultural climate and the relations of trust in a society.”

Cui, Gregory. “Evidence-Based Sentencing and the Taint of Dangerousness.” Yale Law Journal Forum 125 (2016): 315-315. http://bit.ly/1XLAvhL.

  • This short essay, published in the Yale Law Journal Forum, calls for greater scrutiny of “evidence-based sentencing,” where past data is computed and used to predict the future criminal behavior of a defendant. The author suggests that these risk models may undermine the Constitution’s prohibition of bills of attainder by inflicting punishment without a judicial trial.

Tools & Processes Toward Algorithmic Scrutiny

Ananny, Mike, and Kate Crawford. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society. SAGE Publications. 2016. http://bit.ly/2hvKc5x.

  • This paper critically analyzes calls to improve the transparency of algorithms, asking how the historical limitations of the transparency ideal carry over into computing.
  • Treating “transparency as an ideal,” the paper traces the principle’s philosophical and historical lineage, surveying the laws and provisions put in place around the world to keep up with and enforce it.
  • The paper then details the limits of transparency as an ideal, arguing, among other things, that it does not necessarily build trust, that it privileges one function (seeing) over others (say, understanding), and that it has numerous technical limitations.
  • The paper concludes that transparency alone is an inadequate way to govern algorithmic systems, and that accountability must instead look across entire systems rather than inside any single component.

Datta, Anupam, Shayak Sen, and Yair Zick. “Algorithmic Transparency via Quantitative Input Influence.” Proceedings of the 37th IEEE Symposium on Security and Privacy. 2016. http://bit.ly/2hgyLTp.

  • This paper develops a family of Quantitative Input Influence (QII) measures “that capture the degree of influence of inputs on outputs of systems.” The aim is a transparency report that accompanies any algorithmic decision, in order to explain the decision and detect algorithmic discrimination.
  • QII works by breaking “correlations between inputs to allow causal reasoning, and computes the marginal influence of inputs in situations where inputs cannot affect outcomes alone.”
  • The authors find that these QII measures are useful in scrutinizing algorithms when “black box” access is available.
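As a rough illustration of the intervention idea behind QII (not the authors’ implementation, which uses more sophisticated set-based interventions and cooperative-game aggregation), a unary influence score can be sketched: replace one input’s value with draws from its marginal distribution and measure how often the black-box decision flips. The function and toy model below are hypothetical names invented for this sketch.

```python
import random

def qii_marginal_influence(model, dataset, feature_idx, n_samples=100, seed=0):
    """Estimate one feature's influence on a black-box classifier by
    intervening on that feature alone: substitute values drawn from the
    dataset's marginal distribution and count how often the decision changes."""
    rng = random.Random(seed)
    pool = [row[feature_idx] for row in dataset]  # marginal distribution of the feature
    changed, total = 0, 0
    for row in dataset:
        original = model(row)
        for _ in range(n_samples):
            intervened = list(row)
            intervened[feature_idx] = rng.choice(pool)  # break correlation with other inputs
            if model(intervened) != original:
                changed += 1
            total += 1
    return changed / total

# Toy black box: decides purely on the first feature, ignores the second.
model = lambda x: x[0] >= 50
data = [(30, 1), (60, 0), (45, 1), (80, 0)]
# Feature 0 has positive influence; feature 1, being ignored, scores 0.
```

Breaking the correlation between inputs is what gives the measure its causal reading: a feature that merely correlates with the decision, but is never consulted by the model, scores zero.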

Goodman, Bryce, and Seth Flaxman. “European Union regulations on algorithmic decision-making and a ‘right to explanation.’” arXiv preprint arXiv:1606.08813 (2016). http://bit.ly/2h6xpWi.

  • This paper analyzes the implications of a new EU law, to be enacted in 2018, that calls to “restrict automated individual decision-making (that is, algorithms that make decisions based on user level predictors) which ‘significantly affect’ users.” The law will also provide a “right to explanation,” whereby users can ask for an explanation of automated decisions made about them.
  • The paper, while acknowledging the challenges in implementing such laws, suggests that such regulations can spur computer scientists to create algorithms and decision making systems that are more accountable, can provide explanations, and do not produce discriminatory results.
  • The paper concludes by stating that algorithms and computer systems should aim to be not simply efficient, but also fair and accountable. It is optimistic about the ability to put in place interventions to account for and correct discrimination.

Kizilcec, René F. “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface.” Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016. http://bit.ly/2hMjFUR.

  • This paper studies how the transparency of algorithms affects our impression of trust by conducting an online field experiment, in which participants enrolled in a MOOC are given different explanations for the computer-generated grades in their class.
  • The study found that “Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust.”
  • In conclusion, the study found that a balance of transparency was needed to maintain trust among the participants, suggesting that full transparency of algorithmic processes and results may not correlate with high trust among users.

Kroll, Joshua A., et al. “Accountable Algorithms.” University of Pennsylvania Law Review 165 (2016). http://bit.ly/2i6ipcO.

  • This paper suggests that policy and legal standards need to be updated given the increased use of algorithms to perform tasks and make decisions in arenas once handled by people. An “accountability mechanism” is lacking in many of these automated decision-making processes.
  • The paper argues that mere transparency through the disclosure of source code is inadequate when confronting questions of accountability. Rather, technology itself provides a key to creating algorithms and decision-making apparatuses more in line with our existing political and legal frameworks.
  • The paper assesses computational techniques that may make it possible to create accountable software and reform specific cases of automated decision-making. For example, diversity and anti-discrimination requirements can be built into technology to ensure fidelity to policy choices.
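One well-known statistical constraint of the anti-discrimination kind is demographic parity: the gap in favorable-decision rates between groups. A minimal sketch (not taken from the paper; the function name and data are illustrative) of a check that an automated decision pipeline could run before releasing its outputs:

```python
def demographic_parity_gap(decisions, groups):
    """Largest difference in favorable-decision rates between any two groups.
    decisions: list of 0/1 outcomes (1 = favorable); groups: parallel list of labels."""
    rate = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    values = sorted(rate.values())
    return values[-1] - values[0]

decisions = [1, 0, 1, 1, 0, 0]          # 1 = favorable outcome
groups    = ["a", "a", "a", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)  # 2/3 - 1/3 = 1/3
```

A policy choice then fixes a threshold (say, gap < 0.1) that the system must satisfy, turning a legal requirement into a testable property of the software.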

Information for accountability: Transparency and citizen engagement for improved service delivery in education systems


Lindsay Read and Tamar Manuelyan Atinc at Brookings: “There is a wide consensus among policymakers and practitioners that while access to education has improved significantly for many children in low- and middle-income countries, learning has not kept pace. A large amount of research that has attempted to pinpoint the reasons behind this quality deficit in education has revealed that providing extra resources such as textbooks, learning materials, and infrastructure is largely ineffective in improving learning outcomes at the system level without accompanying changes to the underlying structures of education service delivery and associated systems of accountability.

Information is a key building block of a wide range of strategies that attempt to tackle weaknesses in service delivery and accountability at the school level, even where political systems disappoint at the national level. The dissemination of more and better-quality information is expected to empower parents and communities to make better decisions about their children’s schooling and to put pressure on school administrators and public officials to make changes that improve learning and learning environments. This theory of change underpins both social accountability and open data initiatives, which are designed to use information to enhance accountability and thereby influence education delivery.

This report seeks to extract insight into the nuanced relationship between information and accountability, drawing upon a vast literature on bottom-up efforts to improve service delivery, increase citizen engagement, and promote transparency, as well as case studies in Australia, Moldova, Pakistan, and the Philippines. In an effort to clarify processes and mechanisms behind information-based reforms in the education sector, this report also categorizes and evaluates recent impact evaluations according to the intensity of interventions and their target change agents—parents, teachers, school principals, and local officials. The idea here is not just to help clarify what works but why reforms work (or do not)….(More)”

How States Engage in Evidence-Based Policymaking


The Pew Charitable Trusts: “Evidence-based policymaking is the systematic use of findings from program evaluations and outcome analyses (“evidence”) to guide government policy and funding decisions. By focusing limited resources on public services and programs that have been shown to produce positive results, governments can expand their investments in more cost-effective options, consider reducing funding for ineffective programs, and improve the outcomes of services funded by taxpayer dollars.

While the term “evidence-based policymaking” is growing in popularity in state capitols, there is limited information about the extent to which states employ the approach. This report seeks to address this gap by: 1) identifying six distinct actions that states can use to incorporate research findings into their decisions, 2) assessing the prevalence and level of these actions within four human service policy areas across 50 states and the District of Columbia, and 3) categorizing each state based on the final results….

Although many states are embracing evidence-based policymaking, leaders often face challenges in embedding this approach into the decision-making process of state and local governments. This report identifies how staff and stakeholder education, strong data infrastructure, and analytical and technical capacity can help leaders build and sustain support for this work and achieve better outcomes for their communities.

State policymaking

Mass Observation: The amazing 80-year experiment to record our daily lives


William Cook at BBC Arts: “Eighty years ago, on 30th January 1937, the New Statesman published a letter which launched the largest (and strangest) writers’ group in British literary history.

An anthropologist called Tom Harrisson, a journalist called Charles Madge and a filmmaker called Humphrey Jennings wrote to the magazine asking for volunteers to take part in a new project called Mass Observation. Over a thousand readers responded, offering their services. Remarkably, this ‘scientific study of human social behaviour’ is still going strong today.

Mass Observation was the product of a growing interest in the social sciences, and a growing belief that the mass media wasn’t accurately reflecting the lives of so-called ordinary people. Instead of entrusting news gathering to jobbing journalists, who were under pressure to provide the stories their editors and proprietors wanted, Mass Observation recruited a secret army of amateur reporters, to track the habits and opinions of ‘the man in the street.’

Ironically, the three founders of this egalitarian movement were all extremely well-to-do. They’d all been to public schools and Oxbridge, but this was the ‘Age of Anxiety’, when capitalism was in chaos and dangerous demagogues were on the rise (plus ça change…).

For these idealistic public schoolboys, socialism was the answer, and Mass Observation was the future. By finding out what ‘ordinary’ folk were really doing, and really thinking, they would forge a new society, more attuned to the needs of the common man.

Mass Observation selected 500 citizen journalists, and gave them regular ‘directives’ to report back on virtually every aspect of their daily lives. They were guaranteed anonymity, which gave them enormous freedom. People opened up about themselves (and their peers) to an unprecedented degree.

Even though they were all unpaid, correspondents devoted a great deal of time to this endeavour – writing at great length, in great detail, over many years. As well as its academic value, Mass Observation proved that autobiography is not the sole preserve of the professional writer. For all of us, the urge to record and reflect upon our lives is a basic human need.

The Second World War was the perfect forum for this vast collective enterprise. Mass Observation became a national diary of life on the home front. For historians, the value of such uncensored revelations is enormous. These intimate accounts of air raids and rationing are far more revealing and evocative than the jolly state-sanctioned reportage of the war years.

After the war, Mass Observation became more commercial, supplying data for market research, and during the 1960s this extraordinary experiment gradually wound down. It was rescued from extinction by the historian Asa Briggs….

The founders of Mass Observation were horrified by what they called “the revival of racial superstition.” Hitler, Franco and Mussolini were in the forefront of their minds. “We are all in danger of extinction from such outbursts of atavism,” they wrote, in 1937. “We look to science to help us, only to find that science is too busy forging new weapons of mass destruction.”

For its founders, Mass Observation was a new science which would build a better future. For its countless correspondents, however, it became something more than that – not merely a social science, but a communal work of art….(More)”.