Book edited by C. Certomà, M. Dyer, L. Pocatilu and F. Rizzi: “… analyzes the ongoing transformation in the “smart city” paradigm and explores the possibilities that technological innovations offer for the effective involvement of ordinary citizens in collective knowledge production and decision-making processes within the context of urban planning and management. To do so, it pursues an interdisciplinary approach, with contributions from a range of experts including city managers, public policy makers, Information and Communication Technology (ICT) specialists, and researchers. The first two parts of the book focus on the generation and use of data by citizens, with or without institutional support, and the professional management of data in city governance, highlighting the social connectivity and livability aspects essential to vibrant and healthy urban environments. In turn, the third part presents inspiring case studies that illustrate how data-driven solutions can empower people and improve urban environments, including enhanced sustainability. The book will appeal to all those who are interested in the required transformation in the planning, management, and operations of data-rich cities and the ways in which such cities can employ the latest technologies to use data efficiently, promoting data access, data sharing, and interoperability….(More)”.
Toward a User-Centered Social Sector
Tris Lumley at Stanford Social Innovation Review: “Over the last three years, a number of trends have crystallized that I believe herald the promise of a new phase—perhaps even a new paradigm—for the social sector. I want to explore three of the most exciting, and sketch out where I believe they might take us and why we’d all do well to get involved.
- The rise of feedback
- New forms of collaboration
- Disruption through technology
Taken individually, these three themes are hugely significant in their potential impact on the work of nonprofits and those that invest in them. But viewed together, as interwoven threads, I believe they have the potential to transform both how we work and the underlying fundamental incentives and structure of the social sector.
The rise of feedback
The nonprofit sector is built on a deep and rich history of community engagement. Yet, in a funding market that incentivizes accountability to funders, this strong tradition of listening, engagement, and ownership by primary constituents—the people and communities nonprofits exist to serve—has sometimes faded. Opportunities for funding can drive strategies. Practitioner experience and research evidence can shape program designs. Engagement with service users can become tokenistic, or shallow….
In recognition of this growing momentum, Keystone Accountability and New Philanthropy Capital (NPC) published a paper in 2016 to explore the relationship between impact measurement and user voice. It is our shared belief that many of the recent criticisms of the impact movement—such as impact reporting being used primarily for fundraising rather than improving programs—would be addressed if impact evidence and user voice were seen as two sides of the same coin, and we more routinely sought to synthesize our understanding of nonprofits’ programs from both aspects at once…
New forms of collaboration
As recent critiques of collective impact have pointed out, the social sector has a long history of collaboration. Yet it has not always been the default operating model of nonprofits or their funders. The fragmented nature of the social sector today exposes an urgent imperative for greater focus on collaboration….
Yet the need for greater collaboration and new forms to incentivize and enable it is increasing. Deepening austerity policies, the shrinking of the state in many countries, and the sheer scale of the social issues we face have driven the “demand” side of collaboration. The collective impact movement has certainly been one driver of momentum on the “supply” side, and a number of other forms of collaboration are emerging.
The Young People’s Foundation model, developed in the UK by the John Lyons Charity, is one response to deepening cuts in nonprofit funding. Young People’s Foundations are new organizations that serve three purposes for nonprofits working with young people in the local area—creating a network, leading on collaborative funding bids and contracting processes, and sharing assets across the network.
Elsewhere, philanthropic donors and foundations are increasingly exploring collaboration in practical terms, through pooled grant funds that provide individual donors unrivalled leverage, and that allow groups of funders to benefit from each other’s strengths through coordination and shared strategies. The Dasra Girl Alliance in India is an example of a pooled fund that brings together philanthropic donors and institutional development funders, and fosters collaboration between the nonprofits it supports….
Disruption through technology
Technology might appear an incongruous companion to feedback and collaboration, which are both very human in nature, yet it’s likely to transform our sector….(More)”
Data ideologies of an interested public: A study of grassroots open government data intermediaries
Andrew Schrock and Gwen Shaffer in Big Data & Society: “Government officials claim open data can improve internal and external communication and collaboration. These promises hinge on “data intermediaries”: extra-institutional actors that obtain, use, and translate data for the public. However, we know little about why these individuals might regard open data as a site of civic participation. In response, we draw on Ilana Gershon to conceptualize culturally situated and socially constructed perspectives on data, or “data ideologies.” This study employs mixed methodologies to examine why members of the public hold particular data ideologies and how they vary. In late 2015 the authors engaged the public through a commission in a diverse city of approximately 500,000. Qualitative data was collected from three public focus groups with residents. Simultaneously, we obtained quantitative data from surveys. Participants’ data ideologies varied based on how they perceived data to be useful for collaboration, tasks, and translations. Bucking the “geek” stereotype, only a minority of those surveyed (20%) were professional software developers or engineers. Although only a nascent movement, we argue open data intermediaries have important roles to play in a new political landscape….(More)”
Making the Case for Open Contracting in Healthcare Procurement
Transparency International: “…new report “Making the Case for Open Contracting in Healthcare Procurement” examines the utility of open contracting in healthcare procurement. The process relies on governments disclosing procurement information to businesses and civil society, which improves stakeholders’ understanding of procurement processes and increases the integrity, fairness and efficiency of public contracting.
In several countries, including Honduras, Ukraine and Nigeria, corruption was significantly reduced throughout the healthcare procurement process following the implementation of open contracting, according to the report. Click here to download the report”
Participatory budgeting in Indonesia: past, present and future
IDS Practice Paper by Francesca Feruglio and Ahmad Rifai: “In 2015, Yayasan Kota Kita (Our City Foundation), an Indonesian civil society organisation, applied to Making All Voices Count for a practitioner research and learning grant.
Kota Kita is an organisation of governance practitioners who focus on urban planning and citizen participation in the design and development of cities. Following several years of experience with participatory budgeting in Solo city, their research set out to examine participatory budgeting processes in six Indonesian cities, to inform their work – and the work of others – strengthening citizen participation in urban governance.
Their research looked at:
- the current status of participatory budgeting in six Indonesian cities
- the barriers and enablers to implementing participatory budgeting
- how government and CSOs can help make participatory budgeting more transparent, inclusive and impactful.
This practice paper describes Kota Kita and its work in more detail, and reflects on the history and evolution of participatory budgeting in Indonesia. In doing so, it contextualises some of the findings of the research, and discusses their implications.
Key Themes in this Paper
- What are the risks and opportunities of institutionalising participation?
- How do access to information and use of new technologies have an impact on participation in budget planning processes?
- What does it take for participatory budgeting to be an empowering process for citizens?
- How can participatory budgeting include hard-to-reach citizens and accommodate different citizens’ needs? …(More)”.
Selected Readings on Algorithmic Scrutiny
By Prianka Srinivasan, Andrew Young and Stefaan Verhulst
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of algorithmic scrutiny was originally published in 2017.
Introduction
From government policy, to criminal justice, to our news feeds, to business and consumer practices, the processes that shape our lives both online and off are increasingly driven by data and the complex algorithms used to form rulings or predictions. In most cases, these algorithms have created “black boxes” of decision making, where models remain inscrutable and inaccessible. It should therefore come as no surprise that several observers and policymakers are calling for more scrutiny of how algorithms are designed and work, particularly when their outcomes convey intrinsic biases or defy existing ethical standards.
While the concern about values in technology design is not new, recent developments in machine learning, artificial intelligence and the Internet of Things have increased the urgency to establish processes and develop tools to scrutinize algorithms.
In what follows, we have curated several readings covering the impact of algorithms on:
- Information Intermediaries
- Governance
- Consumer Finance
- Justice
In addition, we have selected a few readings that provide insight on possible processes and tools to establish algorithmic scrutiny.
Selected Reading List
Information Intermediaries
- Nicholas Diakopoulos – Algorithmic Accountability – Examines how algorithms exert power and influence on individuals’ lives, and what framework for “algorithmic accountability,” particularly in journalism, can be introduced to better investigate their effects.
- Nicholas Diakopoulos and Michael Koliska – Algorithmic Transparency in the News Media – Analyzes how the increased use of algorithms in journalism—through news bots and automated writing—may compromise the transparency and accountability of the industry.
- Philip M. Napoli – The Algorithm as Institution: Toward a Theoretical Framework for Automated Media Production and Consumption – Uses institutional theory to analyze the use of algorithms and automated news production and consumption tools in journalism.
- Lucas Introna and Helen Nissenbaum – Shaping the Web: Why the politics of search engines matters – An early paper that analyzes the risks of search engines in influencing politics and introducing bias in our knowledge gathering.
- Tarleton Gillespie – The Relevance of Algorithms – Provides a “conceptual map” to interrogate algorithms, their role in the information ecosystem, and the political implications of their use.
- Lee Rainie and Janna Anderson – Code-Dependent: Pros and Cons of the Algorithm Age – A report from the Pew Research Center that seeks to determine whether the net effect of the growing use of algorithms will be positive or negative.
- Zeynep Tufekci – Algorithmic Harms beyond Facebook and Google: Emergent Challenges of Computational Agency – Establishes some of the risks and harms in regard to algorithmic computation, particularly in their filtering abilities as seen in Facebook and other social media algorithms.
- Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi – The Ethics of Algorithms: Mapping the Debate – Suggests that algorithms are increasingly becoming the mediator between data and action in our societies, which obscures their ethical implications. Develops a framework of assessing the ethics of algorithms.
Governance
- Marijn Janssen – The challenges and limits of big data algorithms in technocratic governance – Investigates the lack of accountability and transparency of algorithms in creating policies and influencing governance. Argues that pure transparency is not adequate, but a greater understanding of how algorithms work is needed to improve their accountability.
- Natascha Just and Michael Latzer – Governance by Algorithms: Reality Construction by Algorithmic Selection on the Internet – Argues that algorithmic selection can influence our lives in the same way that institutions do, by constructing realities, affecting perceptions of the world, and influencing our behaviors.
Consumer Finance
- Mireille Hildebrandt – The Dawn of a Critical Transparency Right for the Profiling Era – Analyzes and attempts to predict changes in profiling capabilities for consumers, and what laws may develop to encourage their transparency.
- Brenda Reddix-Smalls – Credit Scoring and Trade Secrecy: An Algorithmic Quagmire or How the Lack of Transparency in Complex Financial Models Scuttled the Finance Market – Argues that the lack of transparency and regulation surrounding algorithms in financial markets—particularly their use in creating finance risk models—increases the likelihood of another financial crisis like the one of 2007–2008.
Justice
- Katja Franko Aas – Sentencing Transparency in the Information Age – Looks at the use of computer assisted sentencing tools, and how this can affect trust in society, particularly in a Scandinavian context.
- Gregory Cui – Evidence-Based Sentencing and the Taint of Dangerousness – Calls for greater scrutiny of “evidence-based sentencing,” where automated risk-assessment tools are used to profile, measure and predict a defendant’s risk of recidivism.
Tools & Processes Toward Algorithmic Scrutiny
- Mike Ananny and Kate Crawford – Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability – Looks at the “inadequacy of transparency” for algorithmic systems, and the limitations of traditional ideals of accountability and transparency when it comes to algorithms.
- Anupam Datta, Shayak Sen and Yair Zick – Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems – Develops a formal model for algorithmic transparency, called Quantitative Input Influence (QII) which can provide better explanations over the decisions made by algorithmic systems.
- Bryce Goodman and Seth Flaxman – European Union regulations on algorithmic decision-making and a “right to explanation.” – Analyzes the content and implications of a new EU law that creates a “right to explanation,” whereby users can ask for an explanation of an algorithmic decision made about them.
- Rene F. Kizilcec – How Much Information? Effects of Transparency on Trust in an Algorithmic Interface – Studies how transparent designs of algorithmic interfaces can foster trust in users.
- Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu – Accountable Algorithms – Questions whether transparency itself will solve the accountability challenges of algorithms, and instead suggests that technological tools can help create automated decisions systems more in line with our legal and policy objectives.
Annotated Selected Reading List
Information Intermediaries
Diakopoulos, Nicholas. “Algorithmic accountability: Journalistic investigation of computational power structures.” Digital Journalism 3.3 (2015): 398-415. http://bit.ly/.
- This paper attempts to substantiate the notion of accountability for algorithms, particularly how they relate to media and journalism. It puts forward the notion of “algorithmic power,” analyzing the framework of influence such systems exert, and also introduces some of the challenges in the practice of algorithmic accountability, particularly for computational journalists.
- Offers a basis for how algorithms can be analyzed, built in terms of the types of decisions algorithms make in prioritizing, classifying, associating, and filtering information.
Diakopoulos, Nicholas, and Michael Koliska. “Algorithmic transparency in the news media.” Digital Journalism (2016): 1-20. http://bit.ly/2hMvXdE.
- This paper analyzes the increased use of “computational journalism,” and argues that though transparency remains a key tenet of journalism, the use of algorithms in gathering, producing and disseminating news undermines this principle.
- It first analyzes what the ethical principle of transparency means to journalists and the media. It then highlights the findings from a focus-group study, where 50 participants from the news media and academia were invited to discuss three different case studies related to the use of algorithms in journalism.
- They find two key barriers to algorithmic transparency in the media: “(1) a lack of business incentives for disclosure, and (2) the concern of overwhelming end-users with too much information.”
- The study also finds a variety of opportunities for transparency across the “data, model, inference, and interface” components of an algorithmic system.
Napoli, Philip M. “The algorithm as institution: Toward a theoretical framework for automated media production and consumption.” Fordham University Schools of Business Research Paper (2013). http://bit.ly/2hKBHqo
- This paper puts forward an analytical framework to discuss the algorithmic content creation of media and journalism in an attempt to “close the gap” on theory related to automated media production.
- By borrowing concepts from institutional theory, the paper argues that algorithms are distinct forms of media institutions, and explores the cultural and political implications of this interpretation.
- It urges further study in the field of “media sociology” to further unpack the influence of algorithms, and their role in institutionalizing certain norms, cultures and ways of thinking.
Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the politics of search engines matters.” The Information Society 16.3 (2000): 169-185. http://bit.ly/2ijzsrg.
- This paper, published 16 years ago, provides an in-depth account of some of the risks related to search engine optimizations, and the biases and harms these can introduce, particularly on the nature of politics.
- Suggests search engines can be designed to account for these political dimensions, and better correlate with the ideal of the World Wide Web as being a place that is open, accessible and democratic.
- According to the paper, policy (and not the free market) is the only way to spur change in this field, though the current technical solutions we have introduce further challenges.
Gillespie, Tarleton. “The Relevance of Algorithms.” Media technologies: Essays on communication, materiality, and society (2014): 167. http://bit.ly/2h6ASEu.
- This paper suggests that widely used algorithms, to the extent that they shape many aspects of our lives (Gillespie calls these “public relevance algorithms”), are fundamentally “producing and certifying knowledge.” In this ability to create a particular “knowledge logic,” algorithms are a primary feature of our information ecosystem.
- The paper goes on to map six dimensions of these public relevance algorithms:
- Patterns of inclusion
- Cycles of anticipation
- The evaluation of relevance
- The promise of algorithmic objectivity
- Entanglement with practice
- The production of calculated publics
- The paper concludes by highlighting the need for a sociological inquiry into the function, implications and contexts of algorithms, and to “soberly recognize their flaws and fragilities,” despite the fact that much of their inner workings remain hidden.
Rainie, Lee and Janna Anderson. “Code-Dependent: Pros and Cons of the Algorithm Age.” Pew Research Center. February 8, 2017. http://bit.ly/2kwnvCo.
- This Pew Research Center report examines the benefits and negative impacts of algorithms as they become more influential in different sectors and aspects of daily life.
- Through a scan of the research and practice, with a particular focus on the research of experts in the field, Rainie and Anderson identify seven key themes of the burgeoning Algorithm Age:
- Algorithms will continue to spread everywhere
- Good things lie ahead
- Humanity and human judgment are lost when data and predictive modeling become paramount
- Biases exist in algorithmically-organized systems
- Algorithmic categorizations deepen divides
- Unemployment will rise; and
- The need grows for algorithmic literacy, transparency and oversight
Tufekci, Zeynep. “Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency.” Journal on Telecommunications & High Technology Law 13 (2015): 203. http://bit.ly/1JdvCGo.
- This paper establishes some of the risks and harms in regard to algorithmic computation, particularly in their filtering abilities as seen in Facebook and other social media algorithms.
- Suggests that the editorial decisions performed by algorithms can have significant influence on our political and cultural realms, and categorizes the types of harms that algorithms may have on individuals and their society.
- Takes two case studies, one on the social media coverage of the Ferguson protests and the other on how social media can influence election turnout, to analyze the influence of algorithms. In doing so, this paper lays out the “tip of the iceberg” in terms of some of the challenges and ethical concerns introduced by algorithmic computing.
Mittelstadt, Brent, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society (2016): 3(2). http://bit.ly/2kWNwL6
- This paper provides significant background and analysis of the ethical context of algorithmic decision-making. It primarily seeks to map the ethical consequences of algorithms, which have adopted the role of a mediator between data and action within societies.
- Develops a conceptual map of six ethical concerns:
- Inconclusive Evidence
- Inscrutable Evidence
- Misguided Evidence
- Unfair Outcomes
- Transformative Effects
- Traceability
- The paper then reviews existing literature, which together with the map creates a structure to inform future debate.
Governance
Janssen, Marijn, and George Kuk. “The challenges and limits of big data algorithms in technocratic governance.” Government Information Quarterly 33.3 (2016): 371-377. http://bit.ly/2hMq4z6.
- Noting the centrality of algorithms in enforcing policy and extending governance, this paper analyzes the “technocratic governance” that has emerged from the removal of humans from decision-making processes and the inclusion of algorithmic automation.
- The paper argues that the belief in technocratic governance producing neutral and unbiased results, since their decision-making processes are uninfluenced by human thought processes, is at odds with studies that reveal the inherent discriminatory practices that exist within algorithms.
- Suggests that algorithms are still bound by the biases of designers and policy-makers, and that accountability is needed to improve the functioning of an algorithm. In order to do so, we must acknowledge the “intersecting dynamics of algorithm as a sociotechnical materiality system involving technologies, data and people using code to shape opinion and make certain actions more likely than others.”
Just, Natascha, and Michael Latzer. “Governance by algorithms: reality construction by algorithmic selection on the Internet.” Media, Culture & Society (2016): 0163443716643157. http://bit.ly/2h6B1Yv.
- This paper provides a conceptual framework on how to assess the governance potential of algorithms, asking how technology and software governs individuals and societies.
- By understanding algorithms as institutions, the paper suggests that algorithmic governance puts in place more evidence-based and data-driven systems than traditional governance methods. The result is a form of governance that cares more about effects than causes.
- The paper concludes by suggesting that algorithmic selection on the Internet tends to shape individuals’ realities and social orders by “increasing individualization, commercialization, inequalities, deterritorialization, and decreasing transparency, controllability, predictability.”
Consumer Finance
Hildebrandt, Mireille. “The dawn of a critical transparency right for the profiling era.” Digital Enlightenment Yearbook 2012 (2012): 41-56. http://bit.ly/2igJcGM.
- Analyzes the use of consumer profiling by online businesses in order to target marketing and services to their needs. By establishing how this profiling relates to identification, the author also offers some of the threats to democracy and the right of autonomy posed by these profiling algorithms.
- The paper concludes by suggesting that cross-disciplinary transparency is necessary to design more accountable profiling techniques that can match the extension of “smart environments” that capture ever more data and information from users.
Reddix-Smalls, Brenda. “Credit Scoring and Trade Secrecy: An Algorithmic Quagmire or How the Lack of Transparency in Complex Financial Models Scuttled the Finance Market.” UC Davis Business Law Journal 12 (2011): 87. http://bit.ly/2he52ch
- Analyzes the creation of predictive risk models in financial markets through algorithmic systems, particularly in regard to credit scoring. It suggests that these models were corrupted in order to maintain a competitive market advantage: “The lack of transparency and the legal environment led to the use of these risk models as predatory credit pricing instruments as opposed to accurate credit scoring predictive instruments.”
- The paper suggests that without greater transparency of these financial risk models, and greater regulation over their abuse, another financial crisis like that of 2008 is highly likely.
Justice
Aas, Katja Franko. “Sentencing Transparency in the Information Age.” Journal of Scandinavian Studies in Criminology and Crime Prevention 5.1 (2004): 48-61. http://bit.ly/2igGssK.
- This paper questions the use of predetermined sentencing in the US judicial system through the application of computer technology and sentencing information systems (SIS). By assessing the use of these systems in the English-speaking world and in Norway, the author suggests that such technological approaches to sentencing attempt to overcome accusations of mistrust, uncertainty and arbitrariness often leveled against the judicial system.
- However, in their attempt to rebuild trust, such technological solutions can be seen as an attempt to remedy a flawed view of judges by the public. Therefore, the political and social climate must be taken into account when trying to reform these sentencing systems: “The use of the various sentencing technologies is not only, and not primarily, a matter of technological development. It is a matter of a political and cultural climate and the relations of trust in a society.”
Cui, Gregory. “Evidence-Based Sentencing and the Taint of Dangerousness.” Yale Law Journal Forum 125 (2016): 315-315. http://bit.ly/1XLAvhL.
- This short essay, published in the Yale Law Journal Forum, calls for greater scrutiny of “evidence-based sentencing,” where past data is computed and used to predict the future criminal behavior of a defendant. The author suggests that these risk models may undermine the Constitution’s prohibition of bills of attainder, and may unlawfully inflict punishment without a judicial trial.
Tools & Processes Toward Algorithmic Scrutiny
Ananny, Mike and Crawford, Kate. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society. SAGE Publications. 2016. http://bit.ly/2hvKc5x.
- This paper critically analyzes calls to improve the transparency of algorithms, asking how we can confront the historical limitations of the transparency ideal in computing.
- By establishing “transparency as an ideal” the paper tracks the philosophical and historical lineage of this principle, attempting to establish what laws and provisions were put in place across the world to keep up with and enforce this ideal.
- The paper goes on to detail the limits of transparency as an ideal, arguing, amongst other things, that it does not necessarily build trust, it privileges a certain function (seeing) over others (say, understanding) and that it has numerous technical limitations.
- The paper ends by concluding that transparency is an inadequate way to govern algorithmic systems, and that accountability must acknowledge the ability to govern across systems.
Datta, Anupam, Shayak Sen, and Yair Zick. “Algorithmic Transparency via Quantitative Input Influence.” Proceedings of 37th IEEE Symposium on Security and Privacy. 2016. http://bit.ly/2hgyLTp.
- This paper develops a family of Quantitative Input Influence (QII) measures “that capture the degree of influence of inputs on outputs of systems.” The aim is to produce a transparency report to accompany algorithmic decisions, in order to explain those decisions and detect algorithmic discrimination.
- QII works by breaking “correlations between inputs to allow causal reasoning, and computes the marginal influence of inputs in situations where inputs cannot affect outcomes alone.”
- Finds that these QII measures are useful in scrutinizing algorithms when “black box” access is available.
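The intervention idea behind QII — resampling an input independently of the others to isolate its causal effect on the output — can be sketched roughly as follows. This is a simplified, hypothetical illustration, not the authors’ actual QII measure: the `marginal_influence` function, the toy credit-approval model, and the applicant data are all invented for this sketch.

```python
import random

def marginal_influence(model, dataset, point, feature, samples=200, seed=0):
    """Estimate how strongly `feature` influences the model's decision at
    `point`: resample that feature from its marginal distribution (breaking
    its correlation with the other inputs) and count decision flips."""
    rng = random.Random(seed)
    baseline = model(point)
    flips = 0
    for _ in range(samples):
        intervened = dict(point)
        # Independent resample of one input, all others held fixed
        intervened[feature] = rng.choice(dataset)[feature]
        if model(intervened) != baseline:
            flips += 1
    return flips / samples

# Toy credit-approval model: approves solely on income; age is ignored
model = lambda x: x["income"] > 50
data = [{"income": i, "age": 20 + i % 40} for i in range(0, 100, 5)]
applicant = {"income": 60, "age": 30}

print(marginal_influence(model, data, applicant, "income"))  # substantial: flips occur
print(marginal_influence(model, data, applicant, "age"))     # 0.0: no causal influence
```

Even though age and income are correlated in the toy data, the intervention correctly attributes zero influence to age, because changing age alone never changes the decision — the kind of causal reasoning the paper’s measures formalize.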
Goodman, Bryce, and Seth Flaxman. “European Union regulations on algorithmic decision-making and a right to explanation” arXiv preprint arXiv:1606.08813 (2016). http://bit.ly/2h6xpWi.
- This paper analyzes the implications of a new EU law, to be enacted in 2018, that aims to “restrict automated individual decision-making (that is, algorithms that make decisions based on user level predictors) which ‘significantly affect’ users.” The law will also provide a “right to explanation,” whereby users can ask for an explanation of an automated decision made about them.
- The paper, while acknowledging the challenges in implementing such laws, suggests that such regulations can spur computer scientists to create algorithms and decision making systems that are more accountable, can provide explanations, and do not produce discriminatory results.
- The paper concludes by stating algorithms and computer systems should not aim to be simply efficient, but also fair and accountable. It is optimistic about the ability to put in place interventions to account for and correct discrimination.
Kizilcec, René F. “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface.” Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016. http://bit.ly/2hMjFUR.
- This paper studies how the transparency of algorithms affects our impression of trust by conducting an online field experiment, in which participants enrolled in a MOOC were given different explanations for the computer-generated grade they received in the class.
- The study found that “Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust.”
- In conclusion, the study found that a balance of transparency was needed to maintain trust among participants, suggesting that full transparency of algorithmic processes and results may not produce correspondingly high levels of user trust.
Kroll, Joshua A., et al. “Accountable Algorithms.” University of Pennsylvania Law Review 165 (2016). http://bit.ly/2i6ipcO.
- This paper suggests that policy and legal standards need to be updated given the increased use of algorithms to perform tasks and make decisions in arenas once reserved for people. An “accountability mechanism” is lacking in many of these automated decision-making processes.
- The paper argues that mere transparency through the disclosure of source code is inadequate when confronting questions of accountability. Rather, technology itself provides a key to making algorithms and decision-making systems more consistent with our existing political and legal frameworks.
- The paper assesses computational techniques that may make it possible to create accountable software and reform specific cases of automated decision-making. For example, diversity and anti-discrimination orders can be built into technology to ensure fidelity to policy choices.
Harnessing the Power of Feedback Loops
Thomas Kalil and David Wilkinson at the White House: “When it comes to strengthening the public sector, the Federal Government looks for new ways to achieve better results for the people we serve. One promising tool that has gained momentum across numerous sectors in the last few years is the adoption of feedback loops. Systematically collecting data and learning from client and customer insights can benefit organizations across all sectors.
Collecting these valuable insights—and acting on them—remains an underutilized practice. The people who receive services are the experts on their effectiveness and usefulness. While the private sector has long used customer feedback to improve products and services, the government and nonprofit sectors have often lagged behind. User experience is a critically important factor in driving positive outcomes. Honest feedback from service recipients can help nonprofit service providers and agencies at all levels of government ensure their work effectively addresses the needs of the people they serve. It is equally important to close the loop by letting those who provided feedback know that their input was put to good use.
In September, the White House Office of Social Innovation and the White House Office of Science and Technology Policy (OSTP) hosted a workshop at the White House on data-driven feedback loops for the social and public sectors. The event brought together leaders across the philanthropy, nonprofit, and business sectors who discussed ways to collect and utilize feedback.
The program featured organizations in the nonprofit sector that use feedback to learn what works, what might not be working as well, and how to fix it. One organization, which offers comprehensive employment services to men and women with recent criminal convictions, explained that it had sought feedback from clients on its training program and learned that many people were struggling to find their work site locations and get to the sessions on time. The organization acted on this feedback, shifting its start times and providing maps and clearer directions to participants. These two simple changes increased both participation in and satisfaction with the program.
Another organization collected feedback to learn whether factory workers attend and understand trainings on fire evacuation procedures. By collecting and acting on this feedback in Brazil, the organization was able to help a factory reduce fire-drill evacuation time from twelve minutes to two minutes—a life-saving result of seeking feedback.
With results such as these in mind, the White House has emphasized the importance of evidence and data-driven solutions across the Federal Government. …
USAID works to end extreme poverty in over 100 countries around the world. The Agency recently changed its operational policy to enable programs to adapt to feedback from the communities in which they work, removing bureaucratic obstacles and encouraging more flexibility in program design. For example, if a USAID-funded project designed to increase agricultural productivity is unexpectedly impacted by drought, the original plan may no longer be relevant or effective; the community may want drought-resistant crops instead. The new, more flexible policy is intended to ensure that such programs can pivot if a community provides feedback that its needs have changed or projects are not succeeding…(More)”
The Emergence of a Post-Fact World
Francis Fukuyama in Project Syndicate: “One of the more striking developments of 2016 and its highly unusual politics was the emergence of a “post-fact” world, in which virtually all authoritative information sources were called into question and challenged by contrary facts of dubious quality and provenance.
The emergence of the Internet and the World Wide Web in the 1990s was greeted as a moment of liberation and a boon for democracy worldwide. Information constitutes a form of power, and to the extent that information was becoming cheaper and more accessible, democratic publics would be able to participate in domains from which they had been hitherto excluded.
The development of social media in the early 2000s appeared to accelerate this trend, permitting the mass mobilization that fueled various democratic “color revolutions” around the world, from Ukraine to Burma (Myanmar) to Egypt. In a world of peer-to-peer communication, the old gatekeepers of information, largely seen to be oppressive authoritarian states, could now be bypassed.
While there was some truth to this positive narrative, another, darker one was also taking shape. Those old authoritarian forces were responding in dialectical fashion, learning to control the Internet, as in China, with its tens of thousands of censors, or, as in Russia, by recruiting legions of trolls and unleashing bots to flood social media with bad information. These trends all came together in a hugely visible way during 2016, in ways that bridged foreign and domestic politics….
The traditional remedy for bad information, according to freedom-of-information advocates, is simply to put out good information, which in a marketplace of ideas will rise to the top. This solution, unfortunately, works much less well in a social-media world of trolls and bots. There are estimates that as many as a quarter to a third of Twitter users fall into this category. The Internet was supposed to liberate us from gatekeepers; and, indeed, information now comes at us from all possible sources, all with equal credibility. There is no reason to think that good information will win out over bad information….
The inability to agree on the most basic facts is the direct product of an across-the-board assault on democratic institutions – in the US, in Britain, and around the world. And this is where the democracies are headed for trouble. In the US, there has in fact been real institutional decay, whereby powerful interest groups have been able to protect themselves through a system of unlimited campaign finance. The primary locus of this decay is Congress, and the bad behavior is for the most part as legal as it is widespread. So ordinary people are right to be upset.
And yet, the US election campaign has shifted the ground to a general belief that everything has been rigged or politicized, and that outright bribery is rampant. If the election authorities certify that your favored candidate is not the victor, or if the other candidate seemed to perform better in a debate, it must be the result of an elaborate conspiracy by the other side to corrupt the outcome. The belief in the corruptibility of all institutions leads to a dead end of universal distrust. American democracy, all democracy, will not survive a lack of belief in the possibility of impartial institutions; instead, partisan political combat will come to pervade every aspect of life….(More)”
The Signal Code
The Signal Code: “Humanitarian action adheres to the core humanitarian principles of impartiality, neutrality, independence, and humanity, as well as respect for international humanitarian and human rights law. These foundational principles are enshrined within core humanitarian doctrine, particularly the Red Cross/NGO Code of Conduct and the Humanitarian Charter. Together, these principles establish a duty of care for populations affected by the actions of humanitarian actors and impose adherence to a standard of reasonable care for those engaged in humanitarian action.
Engagement in HIAs, including the use of data and ICTs, must be consistent with these foundational principles and respect the human rights of crisis-affected people to be considered “humanitarian.” In addition to offering potential benefits to those affected by crisis, HIAs, including the use of ICTs, can cause harm to the safety, wellbeing, and the realization of the human rights of crisis-affected people. Absent a clear understanding of which rights apply to this context, the utilization of new technologies, and in particular experimental applications of these technologies, may be more likely to harm communities and violate the fundamental human rights of individuals.
The Signal Code is based on the application of the UDHR, the Nuremberg Code, the Geneva Convention, and other instruments of customary international law related to HIAs and the use of ICTs by crisis-affected populations and by humanitarians on their behalf. The fundamental human rights undergirding this Code are the rights to life, liberty, and security; the protection of privacy; freedom of expression; and the right to share in scientific advancement and its benefits as expressed in Articles 3, 12, 19, and 27 of the UDHR.
The Signal Code asserts that all people have fundamental rights to access, transmit, and benefit from information as a basic humanitarian need; to be protected from harms that may result from the provision of information during crisis; to have a reasonable expectation of privacy and data security; to have agency over how their data is collected and used; and to seek redress and rectification when data pertaining to them causes harm or is inaccurate.
These rights are found to apply specifically to the access, collection, generation, processing, use, treatment, and transmission of information, including data, during humanitarian crises. These rights are also found herein to be interrelated and interdependent. To realize any of these rights individually requires realization of all of these rights in concert.
These rights are found to apply to all phases of the data lifecycle—before, during, and after the collection, processing, transmission, storage, or release of data. These rights are also found to be elastic, meaning that they apply to new technologies and scenarios that have not yet been identified or encountered by current practice and theory.
Data is, formally, a collection of symbols which function as a representation of information or knowledge. The term raw data is often used with two different meanings: uncleaned data, that is, data that has been collected in an uncontrolled environment; and unprocessed data, that is, collected data that has not been processed in such a way as to make it suitable for decision making. Colloquially, and in the humanitarian context, data is usually thought of solely in the machine-readable or digital sense. For the purposes of the Signal Code, we use the term data to encompass information in both its analog and digital representations. Where it is necessary to address data solely in its digital representation, we refer to it as digital data.
No right herein may be used to abridge any other right. Nothing in this code may be interpreted as giving any state, group, or person the right to engage in any activity or perform any act that destroys the rights described herein.
The five human rights that exist specific to information and HIAs during humanitarian crises are the following:
The Right to Information
The Right to Protection
The Right to Data Security and Privacy
The Right to Data Agency
The Right to Redress and Rectification…(More)”
Crowdsourcing, Citizen Science, and Data-sharing
Sapien Labs: “The future of human neuroscience lies in crowdsourcing, citizen science and data sharing but it is not without its minefields.
A recent Scientific American article by Daniel Goodwin, “Why Neuroscience Needs Hackers,” makes the case that neuroscience, like many fields today, is drowning in data, begging for the application of advances in computer science like machine learning. Neuroscientists are able to gather reams of neural data, but often without the big-data mechanisms and frameworks to synthesize them.
The SA article describes the work of Sebastian Seung, a Princeton neuroscientist, who recently mapped the neural connections of the human retina from an “overwhelming mass” of electron microscopy data using state-of-the-art A.I. and massive crowdsourcing. Seung incorporated the A.I. into a game called “Eyewire,” in which thousands of volunteers scored points while improving the neural map. Although the article’s title emphasizes advanced A.I., Dr. Seung’s experiment points even more to crowdsourcing and open science, avenues for improving research that have suddenly become easy and powerful with today’s internet. Eyewire perhaps epitomizes successful crowdsourcing—using an application that gathers, represents, and analyzes data uniformly according to researchers’ needs.
Crowdsourcing is seductive in its potential but risky for those who aren’t sure how to control it to get what they want. For researchers who don’t want to become hackers themselves, trying to turn the diversity of data produced by a crowd into conclusive results might seem too much of a headache to be worthwhile. This is probably why the SA article’s title says we need hackers. The crowd is there, but using it depends on innovative software engineering. Many researchers could really use software designed to flexibly support diverse forms of crowdsourcing, AI to enable things like crowd validation, and big-data tools.
The Potential
The SA article also points to Open BCI (brain-computer interface), mentioned here in other posts, as an example of how traditional divisions between institutional and amateur (or “citizen”) science are now crumbling; Open BCI is a community of professional and citizen scientists doing principled research with cheap, portable EEG headsets that produce professional research-quality data. In communities of “neuro-hackers,” like NeurotechX, professional researchers, entrepreneurs, and citizen scientists are coming together to develop all kinds of applications, such as “telepathic” machine control, prostheses, and art. Other companies, like Neurosky, sell EEG headsets and biosensors for bio-/neuro-feedback training and health monitoring at consumer-affordable prices. (Read more in Citizen Science and EEG)
Tan Le, whose company Emotiv Lifesciences also produces portable EEG headsets, says in an article in National Geographic that neuroscience needs “as much data as possible on as many brains as possible” to advance the diagnosis of conditions such as epilepsy and Alzheimer’s. Human neuroscience studies have typically enrolled 20 to 50 participants, an incredibly small sampling of a humanity 7 billion strong. Collecting larger datasets is difficult for a single lab, and given the diversity of populations across the planet, real understanding may require data from not merely thousands of brains but millions. With cheap mobile EEG headsets, open-source software, and online collaboration, the potential for anyone to participate in such data collection is immense; the potential for crowdsourcing, unprecedented. There are, however, significant hurdles to overcome….(More)”