Digital Democracy: The Tools Transforming Political Engagement


Paper by Julie Simon, Theo Bass, Victoria Boelman and Geoff Mulgan: “… shares lessons from Nesta’s research into some of the pioneering innovations in digital democracy which are taking place across Europe and beyond.”

Key findings

  • Digital democracy is a broad concept and not easy to define. The paper provides a granular approach to help encompass its various activities and methods (our ‘typology of digital democracy’).
  • Many initiatives exist simply as an app, or web page, driven by what the technology can do, rather than by what the need is.
  • Lessons from global case studies describe how digital tools are being used to engage communities in more meaningful political participation, and how they are improving the quality and legitimacy of decision-making.
  • Digital democracy is still young. Projects must embed better methods for evaluation of their goals if the field is to grow.

Thanks to digital technologies, today we can bank, read the news, study for a degree, and chat with friends across the world – all without leaving the comfort of our homes. But one area that seems to have remained impervious to these benefits is our model of democratic governance, which has remained largely unchanged since it was invented in the 19th century.

New experiments are showing how digital technologies can play a critical role in engaging new groups of people, empowering citizens and forging a new relationship between cities and local residents, and parliamentarians and citizens.

At the parliamentary level, including in Brazil and France, experiments with new tools are enabling citizens to contribute to draft legislation. Political parties such as Podemos in Spain and the Icelandic Pirate Party are using tools such as Loomio, Reddit and Discourse to enable party members and the general public to deliberate and feed into policy proposals. Local governments have set up platforms to enable citizens to submit ideas and information, rank priorities and allocate public resources…

Lessons from the innovators 

  • Develop a clear plan and process: Pioneers in the field engage people meaningfully by giving them a clear stake; they conduct stakeholder analysis; operate with full transparency; and access harder-to-reach groups with offline methods.
  • Get the necessary support in place: The most successful initiatives have clear backing from lawmakers; they also secure the resources needed to promote the process properly (PR and advertising), as well as the internal systems to manage and evaluate large numbers of ideas.
  • Choose the right tools: The right digital tools help to improve the user-experience and understanding of the issue, and can help remove some of the negative impacts of those who might try to damage or ‘game’ the process….(More)”

Thesis, antithesis and synthesis: A constructive direction for politics and policy after Brexit and Trump


Geoff Mulgan at Nesta: “In the heady days of 1989, with communism collapsing and the Cold War seemingly over, the political theorist Francis Fukuyama declared that we were witnessing the “end of history” which had culminated in the triumph of liberal democracy and the free market.

Fukuyama was drawing on the ideas of German philosopher Georg Hegel, but of course, history didn’t come to an end, and, as recent events have shown, the Cold War was just sleeping, not dead.

Now, following the political convulsions of 2016, we’re at a very different turning point, which many are trying to make sense of. I want to suggest that we can again usefully turn to Hegel, but this time to his idea that history evolves in dialectical ways, with successive phases of thesis, antithesis and synthesis.

This framework fits well with where we stand today. The ‘thesis’ that has dominated mainstream politics for the last generation – and continues to be articulated shrilly by many proponents – is the claim that the combination of globalisation, technological progress and liberalisation empowers the great majority.

The antithesis, which, in part, fuelled the votes for Brexit and Trump, as well as the rise of populist parties and populist authoritarian leaders in Europe and beyond, is the argument that this technocratic combination merely empowers a minority and disempowers the majority of citizens.

A more progressive synthesis – which I will outline – then has to address the flaws of the thesis and the grievances of the antithesis, in fields ranging from education and health to democracy and migration, dealing head on with questions of power and its distribution: questions about who has power, and who feels powerful….(More)”

Open innovation in the public sector


Sabrina Diaz Rato in OpenDemocracy: “For some years now, we have been witnessing the emergence of relational, cross-over, participative power. This is the territory that gives technopolitics its meaning and prominence, the basis on which a new vision of democracy – more open, more direct, more interactive – is being developed and embraced. It is a framework that overcomes the closed architecture on which the praxis of governance (closed, hierarchical, one-way) has been cemented in almost all areas. The series The ecosystem of open democracy explores the different aspects of this ongoing transformation….

How can innovation contribute to building an open democracy? The answer is summed up in these twelve connectors of innovation.

  1. placing innovation and collective intelligence at the center of public management strategies,
  2. aligning all government areas with clearly-defined goals on associative platforms,
  3. shifting the frontiers of knowledge and action from the institutions to public deliberation on local challenges,
  4. establishing leadership roles, in a language that everyone can easily understand, to organize and plan the wealth of information coming out of citizens’ ideas and to engage those involved in the sustainability of the projects,
  5. mapping the ecosystem and establishing dynamic relations with internal and, particularly, external agents: the citizens,
  6. systematizing the accumulation of information and the creative processes, while communicating progress and giving feedback to the whole community,
  7. preparing society as a whole to experience a new form of governance of the common good,
  8. cooperating with universities, research centers and entrepreneurs in establishing reward mechanisms,
  9. aligning people, technologies, institutions and the narrative with the new urban habits, especially those related to environmental sustainability and public services,
  10. creating education and training programs in tune with the new skills of the 21st century,
  11. building incubation spaces for startups responding to local challenges,
  12. inviting venture capital to generate a satisfactory mix of open innovation, inclusive development policies and local productivity.

Two items in this list are probably the determining factors of any effective innovation process. The first has to do with the correct decision on the mechanisms through which we have pushed the boundaries outwards, so as to bring citizen ideas into the design and co-creation of solutions. This is not an easy task, because it requires a shared organizational mentality on previously non-existent patterns of cooperation, which must now be sustained through dialog and operational dynamics aimed at solving problems defined by external actors – not just any problem.

Another key aspect of the process, related to the breaking down of the institutional barriers that surround and condition action frameworks, is the revaluation of a central figure that we have not yet mentioned here: the policy makers. They are not exactly political leaders or public officials. They are not innovators either. They are the ones within Public Administration who possess highly valuable management skills and knowledge, but who are constantly colliding against the glittering institutional constellations that no longer work….(More)”

Denmark is appointing an ambassador to big tech


Matthew Hughes in The Next Web: “Question: Is Facebook a country? It sounds silly, but when you think about it, it does have many attributes in common with nation states. For starters, it’s got a population that’s bigger than that of India, and its 2016 revenue wasn’t too far from Estonia’s GDP. It also has a ‘national ethos’. If America’s philosophy is capitalism, Cuba’s is communism, and Sweden’s is social democracy, Facebook’s is ‘togetherness’, as corny as that may sound.

Given all of the above, is it really any surprise that Denmark is considering appointing a ‘big tech ambassador’ whose job is to establish and manage the country’s relationship with the world’s most powerful tech companies?

Denmark’s “digital ambassador” is a first: no country has ever created such a role. Their job will be to liaise with the likes of Google, Twitter and Facebook.

Given the fraught relationship many European countries have with American big-tech – especially on issues of taxation, privacy, and national security – Denmark’s decision to extend an olive branch seems sensible.

Speaking with the Washington Post, Danish Foreign Minister Anders Samuelsen said, “just as we engage in a diplomatic dialogue with countries, we also need to establish and prioritize comprehensive relations with tech actors, such as Google, Facebook, Apple and so on. The idea is, we see a lot of companies and new technologies that will in many ways involve and be part of everyday life of citizens in Denmark.”….(More)”

Will Democracy Survive Big Data and Artificial Intelligence?


Dirk Helbing, Bruno S. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne Hofstetter, Jeroen van den Hoven, Roberto V. Zicari, and Andrej Zwitter in Scientific American: “….In summary, it can be said that we are now at a crossroads (see Fig. 2). Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society—for better or worse. If such widespread technologies are not compatible with our society’s core values, sooner or later they will cause extensive damage. They could lead to an automated society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think and how we act. We are at a historic moment, where we have to decide on the right path—a path that allows us all to benefit from the digital revolution. Therefore, we urge adherence to the following fundamental principles:

1. to increasingly decentralize the function of information systems;

2. to support informational self-determination and participation;

3. to improve transparency in order to achieve greater trust;

4. to reduce the distortion and pollution of information;

5. to enable user-controlled information filters;

6. to support social and economic diversity;

7. to improve interoperability and collaborative opportunities;

8. to create digital assistants and coordination tools;

9. to support collective intelligence, and

10. to promote responsible behavior of citizens in the digital world through digital literacy and enlightenment.

Following this digital agenda, we would all benefit from the fruits of the digital revolution: the economy, government and citizens alike. What are we waiting for?

A strategy for the digital age

Big data and artificial intelligence are undoubtedly important innovations. They have an enormous potential to catalyze economic value and social progress, from personalized healthcare to sustainable cities. It is totally unacceptable, however, to use these technologies to incapacitate the citizen. Big nudging and citizen scores abuse centrally collected personal data for behavioral control in ways that are totalitarian in nature. This is not only incompatible with human rights and democratic principles, but also inappropriate to manage modern, innovative societies. In order to solve the genuine problems of the world, far better approaches in the fields of information and risk management are required. The research area of responsible innovation and the initiative ”Data for Humanity” (see “Big Data for the benefit of society and humanity”) provide guidance as to how big data and artificial intelligence should be used for the benefit of society….(More)”

Democracy in Action


Bea Schofield at Wazoku: “…RSA (Royal Society for the encouragement of Arts, Manufactures and Commerce)…starts off a journey towards a more innovative, inclusive and democratic policy-making model. The first in their series of three public Challenges has launched today, with a focus on how the economy can work for everyone. RSA is calling on the public to share ideas on how we, as purchasers, can get a better deal when going about our everyday lives. This might be something as small as buying groceries at the supermarket, something more substantial, such as deciding on which phone contract to choose, or even something as significant as buying a new home.

Sitting within a wider programme addressing the lack of transparency and involvement of the public when it comes to policy-making, the RSA’s mission is to do two things:

  1. Involve a broader cross-section of society in the decision-making process which will affect our livelihoods, such as our spending habits, how much tax we pay and what services we have access to.
  2. Help shape a more participatory model for policy-making which is open, collaborative and innovative….(More)”.

Selected Readings on Algorithmic Scrutiny


By Prianka Srinivasan, Andrew Young and Stefaan Verhulst

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of algorithmic scrutiny was originally published in 2017.

Introduction

From government policy, to criminal justice, to our news feeds, to business and consumer practices, the processes that shape our lives both online and off are increasingly driven by data and the complex algorithms used to form rulings or predictions. In most cases, these algorithms have created “black boxes” of decision making, where models remain inscrutable and inaccessible. It should therefore come as no surprise that several observers and policymakers are calling for more scrutiny of how algorithms are designed and work, particularly when their outcomes convey intrinsic biases or defy existing ethical standards.

While the concern about values in technology design is not new, recent developments in machine learning, artificial intelligence and the Internet of Things have increased the urgency to establish processes and develop tools to scrutinize algorithms.

In what follows, we have curated several readings covering the impact of algorithms on:

  • Information Intermediaries
  • Governance
  • Finance
  • Justice

In addition, we have selected a few readings that provide insight into possible processes and tools for establishing algorithmic scrutiny.

Selected Reading List

Information Intermediaries

Governance

Consumer Finance

Justice

Tools & Processes Toward Algorithmic Scrutiny

Annotated Selected Reading List

Information Intermediaries

Diakopoulos, Nicholas. “Algorithmic accountability: Journalistic investigation of computational power structures.” Digital Journalism 3.3 (2015): 398-415. http://bit.ly/.

  • This paper attempts to substantiate the notion of accountability for algorithms, particularly how they relate to media and journalism. It puts forward the notion of “algorithmic power,” analyzing the framework of influence such systems exert, and also introduces some of the challenges in the practice of algorithmic accountability, particularly for computational journalists.
  • Offers a basis for analyzing algorithms, built around the types of decisions algorithms make in prioritizing, classifying, associating, and filtering information.

Diakopoulos, Nicholas, and Michael Koliska. “Algorithmic transparency in the news media.” Digital Journalism (2016): 1-20. http://bit.ly/2hMvXdE.

  • This paper analyzes the increased use of “computational journalism,” and argues that though transparency remains a key tenet of journalism, the use of algorithms in gathering, producing and disseminating news undermines this principle.
  • It first analyzes what the ethical principle of transparency means to journalists and the media. It then highlights the findings from a focus-group study, where 50 participants from the news media and academia were invited to discuss three different case studies related to the use of algorithms in journalism.
  • They find two key barriers to algorithmic transparency in the media: “(1) a lack of business incentives for disclosure, and (2) the concern of overwhelming end-users with too much information.”
  • The study also finds a variety of opportunities for transparency across the “data, model, inference, and interface” components of an algorithmic system.

Napoli, Philip M. “The algorithm as institution: Toward a theoretical framework for automated media production and consumption.” Fordham University Schools of Business Research Paper (2013). http://bit.ly/2hKBHqo

  • This paper puts forward an analytical framework to discuss the algorithmic content creation of media and journalism in an attempt to “close the gap” on theory related to automated media production.
  • By borrowing concepts from institutional theory, the paper finds that algorithms are distinct forms of media institutions, and the cultural and political implications of this interpretation.
  • It urges further study in the field of “media sociology” to further unpack the influence of algorithms, and their role in institutionalizing certain norms, cultures and ways of thinking.

Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the politics of search engines matters.” The Information Society 16.3 (2000): 169-185. http://bit.ly/2ijzsrg.

  • This paper, published 16 years ago, provides an in-depth account of some of the risks related to search engine optimizations, and the biases and harms these can introduce, particularly on the nature of politics.
  • Suggests search engines can be designed to account for these political dimensions, and better correlate with the ideal of the World Wide Web as being a place that is open, accessible and democratic.
  • According to the paper, policy (and not the free market) is the only way to spur change in this field, though the current technical solutions we have introduce further challenges.

Gillespie, Tarleton. “The Relevance of Algorithms.” Media
technologies: Essays on communication, materiality, and society (2014): 167. http://bit.ly/2h6ASEu.

  • This paper suggests that the extended use of algorithms, to the extent that they undercut many aspects of our lives (Gillespie calls these “public relevance algorithms”), is fundamentally “producing and certifying knowledge.” In this ability to create a particular “knowledge logic,” algorithms are a primary feature of our information ecosystem.
  • The paper goes on to map 6 dimensions of these public relevance algorithms:
    • Patterns of inclusion
    • Cycles of anticipation
    • The evaluation of relevance
    • The promise of algorithmic objectivity
    • Entanglement with practice
    • The production of calculated publics
  • The paper concludes by highlighting the need for a sociological inquiry into the function, implications and contexts of algorithms, and to “soberly recognize their flaws and fragilities,” despite the fact that much of their inner workings remain hidden.

Rainie, Lee and Janna Anderson. “Code-Dependent: Pros and Cons of the Algorithm Age.” Pew Research Center. February 8, 2017. http://bit.ly/2kwnvCo.

  • This Pew Research Center report examines the benefits and negative impacts of algorithms as they become more influential in different sectors and aspects of daily life.
  • Through a scan of the research and practice, with a particular focus on the research of experts in the field, Rainie and Anderson identify seven key themes of the burgeoning Algorithm Age:
    • Algorithms will continue to spread everywhere
    • Good things lie ahead
    • Humanity and human judgment are lost when data and predictive modeling become paramount
    • Biases exist in algorithmically-organized systems
    • Algorithmic categorizations deepen divides
    • Unemployment will rise; and
    • The need grows for algorithmic literacy, transparency and oversight

Tufekci, Zeynep. “Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency.” Journal on Telecommunications & High Technology Law 13 (2015): 203. http://bit.ly/1JdvCGo.

  • This paper establishes some of the risks and harms in regard to algorithmic computation, particularly in their filtering abilities as seen in Facebook and other social media algorithms.
  • Suggests that the editorial decisions performed by algorithms can have significant influence on our political and cultural realms, and categorizes the types of harms that algorithms may have on individuals and their society.
  • Takes two case studies–one from the social media coverage of the Ferguson protests, the other on how social media can influence election turnouts–to analyze the influence of algorithms. In doing so, this paper lays out the “tip of the iceberg” in terms of some of the challenges and ethical concerns introduced by algorithmic computing.

Mittelstadt, Brent, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society (2016): 3(2). http://bit.ly/2kWNwL6

  • This paper provides significant background and analysis of the ethical context of algorithmic decision-making. It primarily seeks to map the ethical consequences of algorithms, which have adopted the role of a mediator between data and action within societies.
  • Develops a conceptual map of 6 ethical concerns:
    • Inconclusive Evidence
    • Inscrutable Evidence
    • Misguided Evidence
    • Unfair Outcomes
    • Transformative Effects
    • Traceability
  • The paper then reviews existing literature, which together with the map creates a structure to inform future debate.

Governance

Janssen, Marijn, and George Kuk. “The challenges and limits of big data algorithms in technocratic governance.” Government Information Quarterly 33.3 (2016): 371-377. http://bit.ly/2hMq4z6.

  • Regarding the centrality of algorithms in enforcing policy and extending governance, this paper analyzes the “technocratic governance” that has emerged from the removal of humans from decision-making processes and the inclusion of algorithmic automation.
  • The paper argues that the belief in technocratic governance producing neutral and unbiased results, since their decision-making processes are uninfluenced by human thought processes, is at odds with studies that reveal the inherent discriminatory practices that exist within algorithms.
  • Suggests that algorithms are still bound by the biases of designers and policy-makers, and that accountability is needed to improve the functioning of an algorithm. In order to do so, we must acknowledge the “intersecting dynamics of algorithm as a sociotechnical materiality system involving technologies, data and people using code to shape opinion and make certain actions more likely than others.”

Just, Natascha, and Michael Latzer. “Governance by algorithms: reality construction by algorithmic selection on the Internet.” Media, Culture & Society (2016): 0163443716643157. http://bit.ly/2h6B1Yv.

  • This paper provides a conceptual framework on how to assess the governance potential of algorithms, asking how technology and software governs individuals and societies.
  • By understanding algorithms as institutions, the paper suggests that algorithmic governance puts in place more evidence-based and data-driven systems than traditional governance methods. The result is a form of governance that cares more about effects than causes.
  • The paper concludes by suggesting that algorithmic selection on the Internet tends to shape individuals’ realities and social orders by “increasing individualization, commercialization, inequalities, deterritorialization, and decreasing transparency, controllability, predictability.”

Consumer Finance

Hildebrandt, Mireille. “The dawn of a critical transparency right for the profiling era.” Digital Enlightenment Yearbook 2012 (2012): 41-56. http://bit.ly/2igJcGM.

  • Analyzes the use of consumer profiling by online businesses in order to target marketing and services to their needs. By establishing how this profiling relates to identification, the author also offers some of the threats to democracy and the right of autonomy posed by these profiling algorithms.
  • The paper concludes by suggesting that cross-disciplinary transparency is necessary to design more accountable profiling techniques that can match the extension of “smart environments” that capture ever more data and information from users.

Reddix-Smalls, Brenda. “Credit Scoring and Trade Secrecy: An Algorithmic Quagmire or How the Lack of Transparency in Complex Financial Models Scuttled the Finance Market.” UC Davis Business Law Journal 12 (2011): 87. http://bit.ly/2he52ch

  • Analyzes the creation of predictive risk models in financial markets through algorithmic systems, particularly in regard to credit scoring. It suggests that these models were corrupted in order to maintain a competitive market advantage: “The lack of transparency and the legal environment led to the use of these risk models as predatory credit pricing instruments as opposed to accurate credit scoring predictive instruments.”
  • The paper suggests that without greater transparency of these financial risk models, and greater regulation over their abuse, another financial crisis like that of 2008 is highly likely.

Justice

Aas, Katja Franko. “Sentencing Transparency in the Information Age.” Journal of Scandinavian Studies in Criminology and Crime Prevention 5.1 (2004): 48-61. http://bit.ly/2igGssK.

  • This paper questions the use of predetermined sentencing in the US judicial system through the application of computer technology and sentencing information systems (SIS). By assessing the use of these systems between the English speaking world and Norway, the author suggests that such technological approaches to sentencing attempt to overcome accusations of mistrust, uncertainty and arbitrariness often leveled against the judicial system.
  • However, in their attempt to rebuild trust, such technological solutions can be seen as an attempt to remedy a flawed view of judges by the public. Therefore, the political and social climate must be taken into account when trying to reform these sentencing systems: “The use of the various sentencing technologies is not only, and not primarily, a matter of technological development. It is a matter of a political and cultural climate and the relations of trust in a society.”

Cui, Gregory. “Evidence-Based Sentencing and the Taint of Dangerousness.” Yale Law Journal Forum 125 (2016): 315. http://bit.ly/1XLAvhL.

  • This short essay, published in the Yale Law Journal Forum, calls for greater scrutiny of “evidence-based sentencing,” where past data is computed and used to predict the future criminal behavior of a defendant. The author suggests that these risk models may undermine the Constitution’s prohibition of bills of attainder, and are also unlawful for inflicting punishment without a judicial trial.

Tools & Processes Toward Algorithmic Scrutiny

Ananny, Mike and Crawford, Kate. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society. SAGE Publications. 2016. http://bit.ly/2hvKc5x.

  • This paper attempts to critically analyze calls to improve the transparency of algorithms, asking how historically we are able to confront the limitations of the transparency ideal in computing.
  • By establishing “transparency as an ideal” the paper tracks the philosophical and historical lineage of this principle, attempting to establish what laws and provisions were put in place across the world to keep up with and enforce this ideal.
  • The paper goes on to detail the limits of transparency as an ideal, arguing, amongst other things, that it does not necessarily build trust, it privileges a certain function (seeing) over others (say, understanding) and that it has numerous technical limitations.
  • The paper ends by concluding that transparency is an inadequate way to govern algorithmic systems, and that accountability must acknowledge the ability to govern across systems.

Datta, Anupam, Shayak Sen, and Yair Zick. “Algorithmic Transparency via Quantitative Input Influence.” Proceedings of the 37th IEEE Symposium on Security and Privacy. 2016. http://bit.ly/2hgyLTp.

  • This paper develops what is called a family of Quantitative Input Influence (QII) measures “that capture the degree of influence of inputs on outputs of systems.” The attempt is to theorize a transparency report that is to accompany any algorithmic decisions made, in order to explain any decisions and detect algorithmic discrimination.
  • QII works by breaking “correlations between inputs to allow causal reasoning, and computes the marginal influence of inputs in situations where inputs cannot affect outcomes alone.”
  • Finds that these QII measures are useful in scrutinizing algorithms when “black box” access is available.
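The intervention idea behind QII-style measures can be sketched in a few lines, with black-box access only. This is an illustrative toy (the function, model and data below are my assumptions, not the authors' implementation, and it omits QII's marginal/set-based refinements): replace one input with values drawn from the population at large, hold the rest fixed, and measure how often the decision flips.

```python
import random

def influence(model, dataset, feature, n_samples=200, seed=0):
    """Estimate one feature's influence on a black-box model by intervention:
    resample that feature from the population and count decision flips."""
    rng = random.Random(seed)
    pool = [row[feature] for row in dataset]  # population of feature values
    changed = 0
    for _ in range(n_samples):
        row = dict(rng.choice(dataset))       # copy a random individual
        before = model(row)
        row[feature] = rng.choice(pool)       # break the input correlation
        if model(row) != before:
            changed += 1
    return changed / n_samples

# Toy classifier that approves applicants based only on income.
approve = lambda r: r["income"] > 50
data = [{"income": i, "zip": z}
        for i, z in [(30, 1), (60, 2), (80, 1), (40, 2)]]

print(influence(approve, data, "income"))  # substantial: income drives decisions
print(influence(approve, data, "zip"))     # 0.0: zip code has no influence
```

The contrast between the two scores is the point: even without seeing the model's code, intervention reveals which inputs actually matter, which is what makes such measures useful when only “black box” access is available.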

Goodman, Bryce, and Seth Flaxman. “European Union regulations on algorithmic decision-making and a ‘right to explanation’.” arXiv preprint arXiv:1606.08813 (2016). http://bit.ly/2h6xpWi.

  • This paper analyzes the implications of a new EU law, to be enacted in 2018, that calls to “restrict automated individual decision-making (that is, algorithms that make decisions based on user level predictors) which ‘significantly affect’ users.” The law will also allow for a “right to explanation” where users can ask for an explanation behind automated decision made about them.
  • The paper, while acknowledging the challenges in implementing such laws, suggests that such regulations can spur computer scientists to create algorithms and decision making systems that are more accountable, can provide explanations, and do not produce discriminatory results.
  • The paper concludes by stating algorithms and computer systems should not aim to be simply efficient, but also fair and accountable. It is optimistic about the ability to put in place interventions to account for and correct discrimination.

Kizilcec, René F. “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface.” Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016. http://bit.ly/2hMjFUR.

  • This paper studies how the transparency of algorithms affects our impression of trust by conducting an online field experiment, where participants enrolled in a MOOC were given different explanations for the computer-generated grade awarded in their class.
  • The study found that “Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust.”
  • In conclusion, the study found that a balance of transparency was needed to maintain trust amongst the participants, suggesting that pure transparency of algorithmic processes and results may not correlate with high feelings of trust amongst users.

Kroll, Joshua A., et al. “Accountable Algorithms.” University of Pennsylvania Law Review 165 (2016). http://bit.ly/2i6ipcO.

  • This paper suggests that policy and legal standards need to be updated in light of the increased use of algorithms to perform tasks and make decisions in arenas where people once did. An “accountability mechanism” is lacking in many of these automated decision-making processes.
  • The paper argues that mere transparency through the disclosure of source code is inadequate when confronting questions of accountability. Rather, technology itself provides a key to creating algorithms and decision-making systems more in line with our existing political and legal frameworks.
  • The paper assesses some computational techniques that may make it possible to create accountable software and reform specific cases of automated decision-making. For example, diversity and anti-discrimination requirements can be built into technology to ensure fidelity to policy choices.
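One family of techniques in this space can be sketched in a few lines: a cryptographic commitment, in which a decision-maker publishes a digest of its decision rule before using it, so that anyone can later verify the revealed rule was not changed after the fact. This is a simplified illustration, not the paper's actual construction; the policy string and nonce below are invented.

```python
# Illustrative sketch of a commit-then-reveal scheme for accountability.
# An agency publishes the digest of its decision rule in advance; revealing
# the rule later lets outsiders verify it was the one actually committed to.
import hashlib

def commit(policy_source: str, nonce: str) -> str:
    """Digest to publish before any decisions are made."""
    return hashlib.sha256((nonce + policy_source).encode()).hexdigest()

def verify(policy_source: str, nonce: str, published_digest: str) -> bool:
    """Check a revealed rule against the earlier commitment."""
    return commit(policy_source, nonce) == published_digest

policy = "approve if score >= 700"   # hypothetical decision rule
digest = commit(policy, nonce="6f1d")

verify(policy, "6f1d", digest)                      # unchanged rule verifies
verify("approve if score >= 650", "6f1d", digest)   # a swapped rule does not
```

A commitment alone does not prove the rule was actually applied to each case; the paper's point is that richer tools (such as zero-knowledge proofs) can extend this basic idea toward that stronger guarantee.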

Digital Nudging: Altering User Behavior in Digital Environments


Tobias Mirsch, Christiane Lehrer, and Reinhard Jung in Wirtschaftsinformatik: “Individuals make increasingly more decisions on screens, such as those on websites or mobile apps. However, the nature of screens and the vast amount of information available online make individuals particularly prone to deficient decisions. Digital nudging is an approach based on insights from behavioral economics that applies user interface (UI) design elements to affect the choices of users in digital environments. UI design elements include graphic design, specific content, wording or small features. To date, little is known about the psychological mechanisms that underlie digital nudging. To address this research gap, we conducted a systematic literature review and provide a comprehensive overview of relevant psychological effects and exemplary nudges in the physical and digital sphere. These insights serve as a valuable basis for researchers and practitioners that aim to study or design information systems and interventions that assist user decision making on screens….(More)”

Forged Through Fire


Book by John Ferejohn and Frances McCall Rosenbluth: “Peace, many would agree, is a goal that democratic nations should strive to achieve. But is democracy, in fact, dependent on war to survive?

Having spent their celebrated careers exploring this provocative question, John Ferejohn and Frances McCall Rosenbluth trace the surprising ways in which governments have mobilized armies since antiquity, discovering that our modern form of democracy not only evolved in a brutally competitive environment but also quickly disintegrated when the powerful elite no longer needed their citizenry to defend against existential threats.

Bringing to vivid life the major battles that shaped our current political landscape, the authors begin with the fierce warrior states of Athens and the Roman Republic. While these experiments in “mixed government” would serve as a basis for the bargain between politics and protection at the heart of modern democracy, Ferejohn and Rosenbluth brilliantly chronicle the generations of bloodshed that it would take for the world’s dominant states to hand over power to the people. In fact, for over a thousand years, even as medieval empires gave way to feudal Europe, the king still ruled. Not even the advancements of gunpowder—which decisively tipped the balance away from the cavalry-dominated militaries and in favor of mass armies—could threaten the reign of monarchs and “landed elites” of yore.

The incredibly wealthy, however, were not well equipped to handle the massive labor classes produced by industrialization. As we learn, the Napoleonic Wars stoked genuine, bottom-up nationalism and pulled splintered societies back together as “commoners” stepped up to fight for their freedom. Soon after, Hitler and Stalin perfectly illustrated the military limitations of dictatorships, a style of governance that might be effective for mobilizing an army but not for winning a world war. This was a lesson quickly heeded by the American military, who would begin to reinforce their ranks with minorities in exchange for greater civil liberties at home.

Like Francis Fukuyama and Jared Diamond’s most acclaimed works, Forged Through Fire concludes in the modern world, where the “tug of war” between the powerful and the powerless continues to play out in profound ways. Indeed, in the covert battlefields of today, drones have begun to erode the need for manpower, giving politicians even less incentive than before to listen to the demands of their constituency. With American democracy’s flanks now exposed, this urgent examination explores the conditions under which war has promoted one of the most cherished human inventions: a government of the people, by the people, for the people. The result promises to become one of the most important history books to emerge in our time….(More)”

Democracy Index 2016


The annual review by the Economist Intelligence Unit: “According to the 2016 Democracy Index almost one-half of the world’s countries can be considered to be democracies of some sort, but the number of “full democracies” has declined from 20 in 2015 to 19 in 2016. The US has been downgraded from a “full democracy” to a “flawed democracy” because of a further erosion of trust in government and elected officials there.

The “democratic recession” worsened in 2016, when no region experienced an improvement in its average score and almost twice as many countries (72) recorded a decline in their total score as recorded an improvement (38). Eastern Europe experienced the most severe regression. The 2016 Democracy Index report, Revenge of the “deplorables”, examines the deep roots of today’s crisis of democracy in the developed world, and looks at how democracy fared in every region….(More)”