World Development Report 2017: Governance and the Law
“Why are carefully designed, sensible policies too often not adopted or implemented? When they are, why do they often fail to generate development outcomes such as security, growth, and equity? And why do some bad policies endure? This World Development Report 2017: Governance and the Law addresses these fundamental questions, which are at the heart of development. Policy making and policy implementation do not occur in a vacuum. Rather, they take place in complex political and social settings, in which individuals and groups with unequal power interact within changing rules as they pursue conflicting interests. The process of these interactions is what this Report calls governance, and the space in which these interactions take place, the policy arena. The capacity of actors to commit and their willingness to cooperate and coordinate to achieve socially desirable goals are what matter for effectiveness. However, who bargains, who is excluded, and what barriers block entry to the policy arena determine the selection and implementation of policies and, consequently, their impact on development outcomes. Exclusion, capture, and clientelism are manifestations of power asymmetries that lead to failures to achieve security, growth, and equity. The distribution of power in society is partly determined by history. Yet, there is room for positive change. This Report reveals that governance can mitigate, even overcome, power asymmetries to bring about more effective policy interventions that achieve sustainable improvements in security, growth, and equity. This happens by shifting the incentives of those with power, reshaping their preferences in favor of good outcomes, and taking into account the interests of previously excluded participants. These changes can come about through bargains among elites and greater citizen engagement, as well as by international actors supporting rules that strengthen coalitions for reform….(More)”.
Social Media and the Internet of Things towards Data-Driven Policymaking in the Arab World: Potential, Limits and Concerns
Paper by Fadi Salem: “The influence of social media has continued to grow globally over the past decade. During 2016, social media played a highly influential role in what has been described as a “post-truth” era in policymaking, diplomacy and political communication. For example, social media “bots” arguably played a key role in influencing public opinion globally, whether at the political or public policy level. Such practices rely heavily on big data analytics, artificial intelligence and machine learning algorithms, not just in gathering and crunching public views and sentiments, but more so in proactively influencing public opinions, decisions and behaviors. Some of these practices undermined traditional information media, triggered foreign policy crises, impacted political communication and disrupted established policy formulation cycles.
On the other hand, the digital revolution has expanded the horizon of possibilities for development, governance and policymaking. A new disruptive transformation is characterized by a fusion of inter-connected technologies where the digital, physical and biological worlds converge. This inter-connectivity is generating — and consuming — an enormous amount of data that is changing the ways policymaking is conducted, decisions are taken and day-to-day operations are carried out. Within this context, ‘big data’ applications are increasingly becoming critical elements of policymaking. Coupled with the rise of a critical mass of social media users globally, this ubiquitous connectivity and data revolution is promising major transformations in modes of governance, policymaking and citizen-government interaction.
In the Arab region, observations from public sector and decision-making organizations suggest that there is limited understanding of the real potential, the limitations, and the public concerns surrounding these big data sources. This report contextualizes the findings in light of the socio-technical transformations taking place in the region, by exploring the growth of social media and building on past editions in the series. The objective is to explore and assess multiple aspects of the ongoing digital transformation in the Arab world and highlight some of the policy implications on a regional level. More specifically, the report aims to better inform our understanding of the convergence of social media and IoT data as sources of big data and their potential impact on policymaking and governance in the region. Ultimately, in light of the availability of massive amounts of data from physical objects and people, the questions tackled in the research are: What is the potential for data-driven policymaking and governance in the region? What are the limitations? And most importantly, what are the public concerns that need to be addressed by policymakers as they embark on the next phase of the digital governance transformation in the region?
In the Arab region, there are already numerous experiments and applications where data from social media and the “Internet of Things” (IoT) are informing and influencing government practices as sources of big data, effectively changing how societies and governments interact. The report has two main parts. In the first part, we explore the questions discussed in the previous paragraphs through a regional survey spanning the 22 Arab countries. In the second part, we explore growth and usage trends of influential social media platforms across the region, including Facebook, Twitter, LinkedIn and, for the first time, Instagram. The findings highlight important changes — and some stagnation — in the ways social media is infiltrating demographic layers in Arab societies, be it gender, age or language. Together, the findings provide important insights for guiding policymakers, business leaders and development efforts. More specifically, these findings can contribute to shaping directions and informing decisions on the future of governance and development in the Arab region….(More)”
Participatory budgeting in Indonesia: past, present and future
IDS Practice Paper by Francesca Feruglio and Ahmad Rifai: “In 2015, Yayasan Kota Kita (Our City Foundation), an Indonesian civil society organisation, applied to Making All Voices Count for a practitioner research and learning grant.
Kota Kita is an organisation of governance practitioners who focus on urban planning and citizen participation in the design and development of cities. Following several years of experience with participatory budgeting in Solo city, their research set out to examine participatory budgeting processes in six Indonesian cities, to inform their work – and the work of others – in strengthening citizen participation in urban governance.
Their research looked at:
- the current status of participatory budgeting in six Indonesian cities
- the barriers and enablers to implementing participatory budgeting
- how government and CSOs can help make participatory budgeting more transparent, inclusive and impactful.

This practice paper describes Kota Kita and its work in more detail, and reflects on the history and evolution of participatory budgeting in Indonesia. In doing so, it contextualises some of the findings of the research, and discusses their implications.
Key Themes in this Paper
- What are the risks and opportunities of institutionalising participation?
- How do access to information and use of new technologies have an impact on participation in budget planning processes?
- What does it take for participatory budgeting to be an empowering process for citizens?
- How can participatory budgeting include hard-to-reach citizens and accommodate different citizens’ needs? …(More)”.
Selected Readings on Algorithmic Scrutiny
By Prianka Srinivasan, Andrew Young and Stefaan Verhulst
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of algorithmic scrutiny was originally published in 2017.
Introduction
From government policy to criminal justice to our news feeds to business and consumer practices, the processes that shape our lives both online and off are increasingly driven by data and the complex algorithms used to form rulings or predictions. In most cases, these algorithms have created “black boxes” of decision making, where models remain inscrutable and inaccessible. It should therefore come as no surprise that several observers and policymakers are calling for more scrutiny of how algorithms are designed and work, particularly when their outcomes convey intrinsic biases or defy existing ethical standards.
While the concern about values in technology design is not new, recent developments in machine learning, artificial intelligence and the Internet of Things have increased the urgency to establish processes and develop tools to scrutinize algorithms.
In what follows, we have curated several readings covering the impact of algorithms on:
- Information Intermediaries
- Governance
- Finance
- Justice

In addition, we have selected a few readings that provide insight on possible processes and tools to establish algorithmic scrutiny.
Selected Reading List
Information Intermediaries
- Nicholas Diakopoulos – Algorithmic Accountability – Examines how algorithms exert power and influence on individuals’ lives, and what framework for “algorithmic accountability,” particularly in journalism, can be introduced to better investigate their effects.
- Nicholas Diakopoulos and Michael Koliska – Algorithmic Transparency in the News Media – Analyzes how the increased use of algorithms in journalism—through news bots and automated writing—may compromise the transparency and accountability of the industry.
- Philip M. Napoli – The Algorithm as Institution: Toward a Theoretical Framework for Automated Media Production and Consumption – Uses institutional theory to analyze the use of algorithms and automated news production and consumption tools in journalism.
- Lucas Introna and Helen Nissenbaum – Shaping the Web: Why the politics of search engines matters – An early paper that analyzes the risks of search engines in influencing politics and introducing bias in our knowledge gathering.
- Tarleton Gillespie – The Relevance of Algorithms – Provides a “conceptual map” to interrogate algorithms, their role in the information ecosystem, and the political implications of their use.
- Lee Rainie and Janna Anderson – Code-Dependent: Pros and Cons of the Algorithm Age – A report from the Pew Research Center that seeks to determine whether the net effect of the growing use of algorithms will be positive or negative.
- Zeynep Tufekci – Algorithmic Harms beyond Facebook and Google: Emergent Challenges of Computational Agency – Establishes some of the risks and harms in regard to algorithmic computation, particularly in their filtering abilities as seen in Facebook and other social media algorithms.
- Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi – The Ethics of Algorithms: Mapping the Debate – Suggests that algorithms are increasingly becoming the mediator between data and action in our societies, which obscures their ethical implications. Develops a framework of assessing the ethics of algorithms.
Governance
- Marijn Janssen – The challenges and limits of big data algorithms in technocratic governance – Investigates the lack of accountability and transparency of algorithms in creating policies and influencing governance. Argues that pure transparency is not adequate, but a greater understanding of how algorithms work is needed to improve their accountability.
- Natascha Just and Michael Latzer – Governance by Algorithms: Reality Construction by Algorithmic Selection on the Internet – Argues that algorithmic selection can influence our lives in the same way that institutions do, by constructing realities, affecting perceptions of the world, and influencing our behaviors.
Consumer Finance
- Mireille Hildebrandt – The Dawn of a Critical Transparency Right for the Profiling Era – Analyzes and attempts to predict changes in profiling capabilities for consumers, and what laws may develop to encourage their transparency.
- Brenda Reddix-Smalls – Credit Scoring and Trade Secrecy: An Algorithmic Quagmire or How the Lack of Transparency in Complex Financial Models Scuttled the Finance Market – Argues that the lack of transparency and regulation surrounding algorithms in financial markets—particularly their use in creating finance risk models—increases the likelihood of another financial crisis like the one of 2007-2008.
Justice
- Katja Franko Aas – Sentencing Transparency in the Information Age – Looks at the use of computer assisted sentencing tools, and how this can affect trust in society, particularly in a Scandinavian context.
- Gregory Cui – Evidence-Based Sentencing and the Taint of Dangerousness – Calls for greater scrutiny of “evidence-based sentencing,” where automated risk-assessment tools are used to profile, measure and predict a defendant’s risk of recidivism.
Tools & Process Toward Algorithmic Scrutiny
- Mike Ananny and Kate Crawford – Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability – Looks at the “inadequacy of transparency” for algorithmic systems, and the limitations of traditional ideals of accountability and transparency when it comes to algorithms.
- Anupam Datta, Shayak Sen and Yair Zick – Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems – Develops a formal model for algorithmic transparency, called Quantitative Input Influence (QII) which can provide better explanations over the decisions made by algorithmic systems.
- Bryce Goodman and Seth Flaxman – European Union regulations on algorithmic decision-making and a “right to explanation.” – Analyzes the content and implications of a new EU law that creates a “right to explanation,” whereby users can ask for an explanation of an algorithmic decision made about them.
- Rene F. Kizilcec – How Much Information? Effects of Transparency on Trust in an Algorithmic Interface – Studies how transparent designs of algorithmic interfaces can foster trust in users.
- Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu – Accountable Algorithms – Questions whether transparency itself will solve the accountability challenges of algorithms, and instead suggests that technological tools can help create automated decisions systems more in line with our legal and policy objectives.
Annotated Selected Reading List
Information Intermediaries
Diakopoulos, Nicholas. “Algorithmic accountability: Journalistic investigation of computational power structures.” Digital Journalism 3.3 (2015): 398-415. http://bit.ly/.
- This paper attempts to substantiate the notion of accountability for algorithms, particularly how they relate to media and journalism. It puts forward the notion of “algorithmic power,” analyzing the framework of influence such systems exert, and also introduces some of the challenges in the practice of algorithmic accountability, particularly for computational journalists.
- Offers a basis for how algorithms can be analyzed, built in terms of the types of decisions algorithms make in prioritizing, classifying, associating, and filtering information.
Diakopoulos, Nicholas, and Michael Koliska. “Algorithmic transparency in the news media.” Digital Journalism (2016): 1-20. http://bit.ly/2hMvXdE.
- This paper analyzes the increased use of “computational journalism,” and argues that though transparency remains a key tenet of journalism, the use of algorithms in gathering, producing and disseminating news undermines this principle.
- It first analyzes what the ethical principle of transparency means to journalists and the media. It then highlights the findings from a focus-group study, where 50 participants from the news media and academia were invited to discuss three different case studies related to the use of algorithms in journalism.
- They find two key barriers to algorithmic transparency in the media: “(1) a lack of business incentives for disclosure, and (2) the concern of overwhelming end-users with too much information.”
- The study also finds a variety of opportunities for transparency across the “data, model, inference, and interface” components of an algorithmic system.
Napoli, Philip M. “The algorithm as institution: Toward a theoretical framework for automated media production and consumption.” Fordham University Schools of Business Research Paper (2013). http://bit.ly/2hKBHqo
- This paper puts forward an analytical framework to discuss the algorithmic content creation of media and journalism in an attempt to “close the gap” on theory related to automated media production.
- By borrowing concepts from institutional theory, the paper argues that algorithms are distinct forms of media institutions, and examines the cultural and political implications of this interpretation.
- It urges further study in the field of “media sociology” to further unpack the influence of algorithms, and their role in institutionalizing certain norms, cultures and ways of thinking.
Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the politics of search engines matters.” The Information Society 16.3 (2000): 169-185. http://bit.ly/2ijzsrg.
- This paper, published 16 years ago, provides an in-depth account of some of the risks posed by search engines, and the biases and harms they can introduce, particularly to the nature of politics.
- Suggests search engines can be designed to account for these political dimensions, and better correlate with the ideal of the World Wide Web as being a place that is open, accessible and democratic.
- According to the paper, policy (and not the free market) is the only way to spur change in this field, though the current technical solutions we have introduce further challenges.
Gillespie, Tarleton. “The Relevance of Algorithms.” Media technologies: Essays on communication, materiality, and society (2014): 167. http://bit.ly/2h6ASEu.
- This paper suggests that algorithms, now so extensively used that they undercut many aspects of our lives (Gillespie calls these “public relevance algorithms”), are fundamentally “producing and certifying knowledge.” In this ability to create a particular “knowledge logic,” algorithms are a primary feature of our information ecosystem.
- The paper goes on to map 6 dimensions of these public relevance algorithms:
- Patterns of inclusion
- Cycles of anticipation
- The evaluation of relevance
- The promise of algorithmic objectivity
- Entanglement with practice
- The production of calculated publics
- The paper concludes by highlighting the need for a sociological inquiry into the function, implications and contexts of algorithms, and to “soberly recognize their flaws and fragilities,” despite the fact that much of their inner workings remain hidden.
Rainie, Lee and Janna Anderson. “Code-Dependent: Pros and Cons of the Algorithm Age.” Pew Research Center. February 8, 2017. http://bit.ly/2kwnvCo.
- This Pew Research Center report examines the benefits and negative impacts of algorithms as they become more influential in different sectors and aspects of daily life.
- Through a scan of the research and practice, with a particular focus on the research of experts in the field, Rainie and Anderson identify seven key themes of the burgeoning Algorithm Age:
- Algorithms will continue to spread everywhere
- Good things lie ahead
- Humanity and human judgment are lost when data and predictive modeling become paramount
- Biases exist in algorithmically-organized systems
- Algorithmic categorizations deepen divides
- Unemployment will rise
- The need grows for algorithmic literacy, transparency and oversight
Tufekci, Zeynep. “Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency.” Journal on Telecommunications & High Technology Law 13 (2015): 203. http://bit.ly/1JdvCGo.
- This paper establishes some of the risks and harms in regard to algorithmic computation, particularly in their filtering abilities as seen in Facebook and other social media algorithms.
- Suggests that the editorial decisions performed by algorithms can have significant influence on our political and cultural realms, and categorizes the types of harms that algorithms may have on individuals and their society.
- Takes two case studies–one from the social media coverage of the Ferguson protests, the other on how social media can influence election turnouts–to analyze the influence of algorithms. In doing so, this paper lays out the “tip of the iceberg” in terms of some of the challenges and ethical concerns introduced by algorithmic computing.
Mittelstadt, Brent, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society (2016): 3(2). http://bit.ly/2kWNwL6
- This paper provides significant background and analysis of the ethical context of algorithmic decision-making. It primarily seeks to map the ethical consequences of algorithms, which have adopted the role of a mediator between data and action within societies.
- Develops a conceptual map of six ethical concerns:
- Inconclusive Evidence
- Inscrutable Evidence
- Misguided Evidence
- Unfair Outcomes
- Transformative Effects
- Traceability
- The paper then reviews existing literature, which together with the map creates a structure to inform future debate.
Governance
Janssen, Marijn, and George Kuk. “The challenges and limits of big data algorithms in technocratic governance.” Government Information Quarterly 33.3 (2016): 371-377. http://bit.ly/2hMq4z6.
- Noting the centrality of algorithms in enforcing policy and extending governance, this paper analyzes the “technocratic governance” that has emerged from the removal of humans from decision-making processes and the inclusion of algorithmic automation.
- The paper argues that the belief in technocratic governance producing neutral and unbiased results, since their decision-making processes are uninfluenced by human thought processes, is at odds with studies that reveal the inherent discriminatory practices that exist within algorithms.
- Suggests that algorithms are still bound by the biases of designers and policy-makers, and that accountability is needed to improve the functioning of an algorithm. In order to do so, we must acknowledge the “intersecting dynamics of algorithm as a sociotechnical materiality system involving technologies, data and people using code to shape opinion and make certain actions more likely than others.”
Just, Natascha, and Michael Latzer. “Governance by algorithms: reality construction by algorithmic selection on the Internet.” Media, Culture & Society (2016): 0163443716643157. http://bit.ly/2h6B1Yv.
- This paper provides a conceptual framework on how to assess the governance potential of algorithms, asking how technology and software governs individuals and societies.
- By understanding algorithms as institutions, the paper suggests that algorithmic governance puts in place more evidence-based and data-driven systems than traditional governance methods. The result is a form of governance that cares more about effects than causes.
- The paper concludes by suggesting that algorithmic selection on the Internet tends to shape individuals’ realities and social orders by “increasing individualization, commercialization, inequalities, deterritorialization, and decreasing transparency, controllability, predictability.”
Consumer Finance
Hildebrandt, Mireille. “The dawn of a critical transparency right for the profiling era.” Digital Enlightenment Yearbook 2012 (2012): 41-56. http://bit.ly/2igJcGM.
- Analyzes the use of consumer profiling by online businesses in order to target marketing and services to their needs. By establishing how this profiling relates to identification, the author also offers some of the threats to democracy and the right of autonomy posed by these profiling algorithms.
- The paper concludes by suggesting that cross-disciplinary transparency is necessary to design more accountable profiling techniques that can match the extension of “smart environments” that capture ever more data and information from users.
Reddix-Smalls, Brenda. “Credit Scoring and Trade Secrecy: An Algorithmic Quagmire or How the Lack of Transparency in Complex Financial Models Scuttled the Finance Market.” UC Davis Business Law Journal 12 (2011): 87. http://bit.ly/2he52ch
- Analyzes the creation of predictive risk models in financial markets through algorithmic systems, particularly in regard to credit scoring. It suggests that these models were corrupted in order to maintain a competitive market advantage: “The lack of transparency and the legal environment led to the use of these risk models as predatory credit pricing instruments as opposed to accurate credit scoring predictive instruments.”
- The paper suggests that without greater transparency of these financial risk models, and greater regulation over their abuse, another financial crisis like that of 2008 is highly likely.
Justice
Aas, Katja Franko. “Sentencing Transparency in the Information Age.” Journal of Scandinavian Studies in Criminology and Crime Prevention 5.1 (2004): 48-61. http://bit.ly/2igGssK.
- This paper questions the use of predetermined sentencing in the US judicial system through the application of computer technology and sentencing information systems (SIS). By comparing the use of these systems in the English-speaking world and Norway, the author suggests that such technological approaches to sentencing attempt to overcome accusations of mistrust, uncertainty and arbitrariness often leveled against the judicial system.
- However, in their attempt to rebuild trust, such technological solutions can be seen as an attempt to remedy a flawed view of judges by the public. Therefore, the political and social climate must be taken into account when trying to reform these sentencing systems: “The use of the various sentencing technologies is not only, and not primarily, a matter of technological development. It is a matter of a political and cultural climate and the relations of trust in a society.”
Cui, Gregory. “Evidence-Based Sentencing and the Taint of Dangerousness.” Yale Law Journal Forum 125 (2016): 315-315. http://bit.ly/1XLAvhL.
- This short essay, published in the Yale Law Journal Forum, calls for greater scrutiny of “evidence-based sentencing,” where past data is computed and used to predict the future criminal behavior of a defendant. The author suggests that these risk models may undermine the Constitution’s prohibition of bills of attainder, and are unlawful because they inflict punishment without a judicial trial.
Tools & Processes Toward Algorithmic Scrutiny
Ananny, Mike and Crawford, Kate. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society. SAGE Publications. 2016. http://bit.ly/2hvKc5x.
- This paper critically analyzes calls to improve the transparency of algorithms, asking how we can confront the historical limitations of the transparency ideal in computing.
- By establishing “transparency as an ideal” the paper tracks the philosophical and historical lineage of this principle, attempting to establish what laws and provisions were put in place across the world to keep up with and enforce this ideal.
- The paper goes on to detail the limits of transparency as an ideal, arguing, amongst other things, that it does not necessarily build trust, it privileges a certain function (seeing) over others (say, understanding) and that it has numerous technical limitations.
- The paper ends by concluding that transparency is an inadequate way to govern algorithmic systems, and that accountability must acknowledge the ability to govern across systems.
Datta, Anupam, Shayak Sen, and Yair Zick. “Algorithmic Transparency via Quantitative Input Influence.” Proceedings of 37th IEEE Symposium on Security and Privacy. 2016. http://bit.ly/2hgyLTp.
- This paper develops what is called a family of Quantitative Input Influence (QII) measures “that capture the degree of influence of inputs on outputs of systems.” The attempt is to theorize a transparency report that is to accompany any algorithmic decisions made, in order to explain any decisions and detect algorithmic discrimination.
- QII works by breaking “correlations between inputs to allow causal reasoning, and computes the marginal influence of inputs in situations where inputs cannot affect outcomes alone.”
- Finds that these QII measures are useful in scrutinizing algorithms when “black box” access is available.
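To make the intervention idea concrete, here is a minimal sketch (a toy model and synthetic data, not the authors' implementation) of a unary QII-style measure: a feature's influence is estimated as the probability that the model's decision changes when that feature alone is resampled from its marginal distribution, breaking its correlation with the other inputs.

```python
import random

# Hypothetical applicant records: (income, zip_code) pairs.
# In this toy data, zip_code is correlated with income.
random.seed(0)
population = [(inc, 1 if inc > 50 else 0)
              for inc in (random.gauss(50, 15) for _ in range(5000))]

def model(income, zip_code):
    """Toy classifier: approves iff income exceeds a threshold.
    It never actually uses zip_code."""
    return income > 55

def qii(feature_index):
    """Estimate the unary influence of one feature: the fraction of
    individuals whose decision flips when that feature is replaced
    by a random draw from its marginal distribution."""
    changed = 0
    for income, zip_code in population:
        original = model(income, zip_code)
        # Intervene: resample only the chosen feature.
        r_income, r_zip = random.choice(population)
        if feature_index == 0:
            intervened = model(r_income, zip_code)
        else:
            intervened = model(income, r_zip)
        changed += original != intervened
    return changed / len(population)

print(f"QII(income)   = {qii(0):.3f}")  # substantial influence
print(f"QII(zip_code) = {qii(1):.3f}")  # zero: correlated but unused
```

Because the intervention breaks the income-zip correlation, this measure assigns zip_code zero influence even though it is statistically associated with the outcome, which is the kind of causal distinction a purely correlational transparency report would miss.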
Goodman, Bryce, and Seth Flaxman. “European Union regulations on algorithmic decision-making and a right to explanation” arXiv preprint arXiv:1606.08813 (2016). http://bit.ly/2h6xpWi.
- This paper analyzes the implications of a new EU law, to be enacted in 2018, that calls to “restrict automated individual decision-making (that is, algorithms that make decisions based on user level predictors) which ‘significantly affect’ users.” The law will also provide a “right to explanation,” whereby users can ask for an explanation of an automated decision made about them.
- The paper, while acknowledging the challenges in implementing such laws, suggests that such regulations can spur computer scientists to create algorithms and decision making systems that are more accountable, can provide explanations, and do not produce discriminatory results.
- The paper concludes by stating algorithms and computer systems should not aim to be simply efficient, but also fair and accountable. It is optimistic about the ability to put in place interventions to account for and correct discrimination.
Kizilcec, René F. “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface.” Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016. http://bit.ly/2hMjFUR.
- This paper studies how the transparency of algorithms affects our impression of trust by conducting an online field experiment, in which participants enrolled in a MOOC were given different explanations for the computer-generated grade awarded in their class.
- The study found that “Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust.”
- In conclusion, the study found that a balance of transparency was needed to maintain trust amongst the participants, suggesting that pure transparency of algorithmic processes and results may not correlate with high feelings of trust amongst users.
Kroll, Joshua A., et al. “Accountable Algorithms.” University of Pennsylvania Law Review 165 (2016). http://bit.ly/2i6ipcO.
- This paper suggests that policy and legal standards need to be updated given the increased use of algorithms to perform tasks and make decisions that people once did. An “accountability mechanism” is lacking in many of these automated decision-making processes.
- The paper argues that mere transparency through the disclosure of source code is inadequate when confronting questions of accountability. Rather, technology itself provides a key to creating algorithms and decision-making apparatuses more in line with our existing political and legal frameworks.
- The paper assesses some computational techniques that may make it possible to create accountable software and reform specific cases of automated decision-making. For example, diversity and anti-discrimination orders can be built into technology to ensure fidelity to policy choices.
Citizens give feedback on city development via Tinder-style app
Springwise: “CitySwipe is Downtown Santa Monica Inc’s opinion gathering app. The non-profit organization manages the center of the city and is using the app as part of the local government’s consultation on its Downtown Community Plan. The plan provides proposals for the area’s next 20 years of development and includes strategies for increased accessibility and affordable housing and improved public spaces.
The original plan had been to close the consultation period in early 2016 but in order to better reach and interact with as many locals as possible, the review was extended to early 2017. Like Tinder, users of the app swipe left or right depending on their views. Questions are either Yes or No or “Which do you prefer?” and each question is illustrated with a photo. There are 38 questions in total ranging from building design and public art to outdoor concerts and parking. Additional information is gathered by asking users to provide their location and preferred method of transport.
Mexico City recently conducted a city-wide consultation on its new constitution, and Oslo, Norway, is using an app to involve school children in redesigning safe public walkways and cycle paths….(More)”
GSK and MIT Flumoji app tracks influenza outbreaks with crowdsourcing
Beth Snyder Bulik at FiercePharma: “It’s like Waze for the flu. A new GlaxoSmithKline-sponsored app called Flumoji uses crowdsourced data to track influenza movement in real time.
Developed with MIT’s Connection Science, the Flumoji app gathers data passively and identifies fluctuations in users’ activity and social interactions to try to identify when a person gets the flu. The activity data is combined with traditional flu tracking data from the Centers for Disease Control to help determine outbreaks. The Flumoji study runs through April, when it will be taken down from the Android app store and no more data will be collected from users.
To make the app more engaging for users, Flumoji uses emojis to help users identify how they’re feeling. If it’s a flu day, symptom faces with thermometers, runny noses and coughs can be chosen, while on other days, users can show how they’re feeling with more traditional mood emojis.
The app has been installed on 500-1,000 Android phones, according to Google Play data.
“Mobile phones are a widely available and efficient way to monitor patient health. GSK has been using them in its studies to monitor activity and vital signs in study patients, and collect patient feedback to improve decision making in the development of new medicines. Tracking the flu is just the latest test of this technology,” Mary Anne Rhyne, a GSK director of external communications for R&D in the U.S., told FiercePharma in an email interview…(More)”
Numbers and the Making of Us: Counting and the Course of Human Cultures
Book by Caleb Everett: “Carved into our past, woven into our present, numbers shape our perceptions of the world and of ourselves much more than we commonly think. Numbers and the Making of Us is a sweeping account of how numbers radically enhanced our species’ cognitive capabilities and sparked a revolution in human culture. Caleb Everett brings new insights in psychology, anthropology, primatology, linguistics, and other disciplines to bear in explaining the myriad human behaviors and modes of thought numbers have made possible, from enabling us to conceptualize time in new ways to facilitating the development of writing, agriculture, and other advances of civilization.
Number concepts are a human invention—a tool, much like the wheel, developed and refined over millennia. Numbers allow us to grasp quantities precisely, but they are not innate. Recent research confirms that most specific quantities are not perceived in the absence of a number system. In fact, without the use of numbers, we cannot precisely grasp quantities greater than three; our minds can only estimate beyond this surprisingly minuscule limit.
Everett examines the various types of numbers that have developed in different societies, showing how most number systems derived from anatomical factors such as the number of fingers on each hand. He details fascinating work with indigenous Amazonians who demonstrate that, unlike language, numbers are not a universal human endowment. Yet without numbers, the world as we know it would not exist….(More)”.
Using GitHub in Government: A Look at a New Collaboration Platform
Justin Longo at the Center for Policy Informatics: “…I became interested in the potential for using GitHub to facilitate collaboration on text documents. This was largely inspired by the 2012 TED Talk by Clay Shirky where he argued that open source programmers could teach us something about how to do open governance:
Somebody put up a tool during the copyright debate last year in the Senate, saying, “It’s strange that Hollywood has more access to Canadian legislators than Canadian citizens do. Why don’t we use GitHub to show them what a citizen-developed bill might look like?” …
For this research, we undertook a census of Canadian government and public servant accounts on GitHub and surveyed those users, supplemented by interviews with key government technology leaders.
This research has now been published in the journal Canadian Public Administration. (If you don’t have access to the full document through the publisher, you can also find it here).
Despite the growing enthusiasm for GitHub (mostly from those familiar with open source software development), and the general rhetoric in favour of collaboration, we suspected that getting GitHub used in public sector organizations for text collaboration might be an uphill battle – not least because of the steep learning curve involved in using GitHub, and its inflexibility when used to edit text.
The history of computer-supported collaborative work platforms is littered with really cool interfaces that failed to appeal to users. The experience to date with GitHub in Canadian governments reflects this, as far as our research shows.
We found that few government agencies have an active presence on GitHub compared to their social media presence in general. And while federal departments and public servants on GitHub are rare, provincial, territorial, First Nations, and local governments are even rarer.
Individual accounts held by public servants were concentrated in the federal government, appearing there at higher rates than in broader society (see Mapping Collaborative Software). Within this small community, the distribution of contributions per user follows the classic long-tail pattern: a small number of contributors are responsible for most of the work, a larger number contribute very little on average, and many users contribute nothing.
GitHub is still resisted by all but the most technically savvy. With a peculiar terminology and work model that presupposes a familiarity with command line computer operations and the language of software coding, using GitHub presents many barriers to the novice user. But while it is tempting to dismiss GitHub, as it currently exists, as ill-suited as a collaboration tool to support document writing, it holds potential as a useful platform for facilitating collaboration in the public sector.
As an example, to help understand how GitHub might be used within governments for collaboration on text documents, we discuss a briefing note document flow in the paper (see the paper for a description of this lovely graphic).
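A document flow like the briefing-note example maps naturally onto git’s standard branch-and-merge model. As a rough illustration only (a generic git sketch under assumed file names and branch names, not the specific flow diagrammed in the paper):

```shell
# Illustrative sketch: a generic branch-and-merge flow for a text document,
# analogous to how a briefing note might circulate between an author and a reviewer.
set -e
rm -rf /tmp/briefing-demo && mkdir -p /tmp/briefing-demo && cd /tmp/briefing-demo
git init -q
git config user.email "analyst@example.gov"   # hypothetical author identity
git config user.name "Policy Analyst"

# An analyst drafts the briefing note on the default branch
echo "# Briefing Note: Draft v1" > briefing-note.md
git add briefing-note.md
git commit -q -m "Initial draft of briefing note"

# A reviewer proposes edits on a separate branch (analogous to a fork or pull request)
git checkout -q -b review-edits
echo "Recommendation: proceed with consultation." >> briefing-note.md
git commit -q -am "Reviewer adds recommendation section"

# The original author returns to the starting branch and merges the reviewer's changes
git checkout -q -
git merge -q review-edits
cat briefing-note.md
```

Each edit is attributed and timestamped in the history, which is part of what makes the model attractive for public-sector document trails, even if the command-line interface itself remains a barrier for novice users.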
A few other findings are addressed in the paper, from why public servants may choose not to collaborate even though they believe it’s the right thing to do, to an interesting story about what propelled the use of GitHub in the government of Canada in the first place….(More)”
International Open Data Roadmap
IODC16: We have entered the next phase in the evolution of the open data movement. Just making data publicly available can no longer be the beginning and end of every conversation about open data. The focus of the movement is now shifting to building open data communities, and an increasingly sophisticated network of communities has begun to make data truly useful in addressing a myriad of problems facing citizens and their governments around the world:
- More than 40 national and local governments have already committed to implement the principles of the International Open Data Charter;
- Open data is central to many commitments made this year by world leaders, including the Sustainable Development Goals (SDGs), the Paris Climate Agreement, and the G20 Anti-Corruption Data Principles; and
- Open data is also an increasingly local issue, as hundreds of cities and sub-national governments implement open data policies to drive transparency, economic growth, and service delivery in close collaboration with citizens.
To further accelerate collaboration and increase the impact of open data activities globally, the Government of Spain, the International Development Research Centre, the World Bank, and the Open Data for Development Network recently hosted the fourth International Open Data Conference (IODC) on October 6-7, 2016 in Madrid, Spain.
Under the theme of Global Goals, Local Impact, the fourth IODC reconvened an ever expanding open data community to showcase best practices, confront shared challenges, and deepen global and regional collaboration in an effort to maximize the impact of open data. Supported by a full online archive of the 80+ sessions and 20+ special events held in Madrid during the first week of October 2016, this report reflects on the discussions and debates that took place, as well as the information shared on a wide range of vibrant global initiatives, in order to map out the road ahead, strengthen cohesion among existing efforts, and explore new ways to use open data to drive social and economic inclusion around the world….(More)”
The Hackable City: Citymaking in a Platform Society
Martijn de Waal, Michiel de Lange, and Matthijs Bouw in the Special Issue on 4D Hyperlocal: A Cultural Toolkit for the Open-Source City of Architectural Design: “Can computer hacking have positive parallels in the shaping of the built environment? The Hackable City research project was set up with this question in mind, to investigate the potential of digital platforms to open up the citymaking process. Its cofounders Martijn de Waal, Michiel de Lange and Matthijs Bouw here outline the tendencies that their studies of collaborative urban development initiatives around the world have revealed, and ask whether knowledge sharing and incremental change might be a better way forward than top-down masterplans….(More)”