When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance


Paper by David Rozas, Antonio Tenorio-Fornés, Silvia Díaz-Molina, and Samer Hassan: “Blockchain technologies have generated excitement, yet their potential to enable new forms of governance remains largely unexplored. Two confronting standpoints dominate the emergent debate around blockchain-based governance: discourses characterised by the presence of techno-determinist and market-driven values, which tend to ignore the complexity of social organisation; and critical accounts of such discourses which, whilst contributing to identifying limitations, consider the role of traditional centralised institutions as inherently necessary to enable democratic forms of governance. Therefore, the question arises: can we build perspectives of blockchain-based governance that go beyond markets and states?

In this article we draw on the Nobel laureate economist Elinor Ostrom’s principles for self-governance of communities to explore the transformative potential of blockchain. We approach blockchain through the identification and conceptualisation of affordances that this technology may provide to communities. For each affordance, we carry out a detailed analysis situating each in the context of Ostrom’s principles, considering both the potentials of algorithmic governance and the importance of incorporating communities’ social practices. The relationships found between these affordances and Ostrom’s principles allow us to provide a perspective focussed on blockchain-based commons governance. By carrying out this analysis, we aim to expand the debate from one dominated by a culture of competition to one that promotes a culture of cooperation…(More)”.

Accountability in the Age of the Artificial


2019 Solomon Lecture by Fiona McLeod: “Our aspiration for open and accountable government faces innumerable challenges, not least the natural reluctance of all governments to expose themselves to criticism and accept responsibility for failure.

Time and again, corporate and political goals take priority over just outcomes, and the human rights of individuals and communities are undervalued and ignored.

Numerous examples of bad behaviour shock us for a while, some even receiving the focused attention of high-quality investigative journalism and Royal Commissions, but we are left unsatisfied, cynical and disengaged, more jaded than before, accepting the inevitability of existential threats, the comfort of algorithmic news feeds and vague promises to ‘drain the swamp’.

In this context, are big data and artificial intelligence the enemies of the people, the ultimate tools of the oligarch, or the vital tools needed to eliminate bias, improve scrutiny and just outcomes for the visionary?  Is there a future in which humanity evolves alongside an enhanced hive-mind in time to avert global catastrophe and create a new vision for humanity?…(More)”

Citizens need to know numbers


David Spiegelhalter at Aeon: “…Many criticised the Leave campaign for its claim that Britain sends the EU £350 million a week. When Boris Johnson repeated it in 2017 – by which time he was Foreign Secretary – the chair of the UK Statistics Authority (the official statistical watchdog) rebuked him, noting it was a ‘clear misuse of official statistics’. A private criminal prosecution was even brought against Johnson for ‘misconduct in a public office’, but it was halted by the High Court.

The message on the bus had a strong emotional resonance with millions of people, even though it was essentially misinformation. The episode demonstrates both the power and weakness of statistics: they can be used to amplify an entire worldview, and yet they often do not stand up to scrutiny. This is why statistical literacy is so important – in an age in which data plays an ever-more prominent role in society, the ability to spot ways in which numbers can be misused, and to be able to deconstruct claims based on statistics, should be a standard civic skill.

Statistics are not cold hard facts – as Nate Silver writes in The Signal and the Noise (2012): ‘The numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning.’ Not only has someone used extensive judgment in choosing what to measure, how to define crucial ideas, and how to analyse them, but the manner in which they are communicated can utterly change their emotional impact. Let’s assume that £350 million is the actual weekly contribution to the EU. I often ask audiences to suggest what they would put on the side of the bus if they were on the Remain side. A standard option for making an apparently big number look small is to consider it as a proportion of an even bigger number: for example, the UK’s GDP is currently around £2.3 trillion, and so this contribution would comprise less than 1 per cent of GDP, around six months’ typical growth. An alternative device is to break down expenditure into smaller, more easily grasped units: for example, as there are 66 million people in the UK, £350 million a week is equivalent to around 75p a day, less than $1, say about the cost of a small packet of crisps (potato chips). If the bus had said: ‘We each send the EU the price of a packet of crisps each day’, the campaign might not have been so successful.
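For readers who want to verify the reframing arithmetic, a minimal Python sketch using only the approximate figures quoted above (£350 million a week, £2.3 trillion GDP, 66 million people) reproduces both numbers; the inputs are the article's illustrative round figures, not official statistics:

```python
# Reframing arithmetic from the paragraph above; all inputs are the
# approximate figures quoted in the text, not official statistics.
weekly_contribution = 350e6   # the claimed £350 million per week
uk_gdp = 2.3e12               # roughly £2.3 trillion
uk_population = 66e6          # roughly 66 million people

share_of_gdp = (weekly_contribution * 52) / uk_gdp
per_person_per_day = weekly_contribution / uk_population / 7

print(f"Share of GDP: {share_of_gdp:.2%}")               # ~0.79%, under 1 per cent
print(f"Per person per day: £{per_person_per_day:.2f}")  # ~£0.76, about 75p
```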

Numbers are often used to persuade rather than inform, and statistical literacy needs to be improved; so surely we need more statistics courses in schools and universities? Well, yes, but this should not mean more of the same. After years of researching and teaching statistical methods, I am not alone in concluding that the way in which we teach statistics can be counterproductive, with an overemphasis on mathematical foundations through probability theory, long lists of tests and formulae to apply, and toy problems involving, say, calculating the standard deviation of the weights of cod. The American Statistical Association’s Guidelines for Assessment and Instruction in Statistics Education (2016) strongly recommended changing the pedagogy of statistics into one based on problem-solving and real-world examples, with an emphasis on communication….(More)”.

Experimental Innovation Policy


Paper by Albert Bravo-Biosca: “Experimental approaches are increasingly being adopted across many policy fields, but innovation policy has been lagging. This paper reviews the case for policy experimentation in this field, describes the different types of experiments that can be undertaken, discusses some of the unique challenges to the use of experimental approaches in innovation policy, and summarizes some of the emerging lessons, with a focus on randomized trials. The paper concludes by describing how at the Innovation Growth Lab we have been working with governments across the OECD to help them overcome the barriers to policy experimentation in order to make their policies more impactful….(More)”.

To Regain Policy Competence: The Software of American Public Problem-Solving


Philip Zelikow at the Texas National Security Review: “Policymaking is a discipline, a craft, and a profession. Policymakers apply specialized knowledge — about other countries, politics, diplomacy, conflict, economics, public health, and more — to the practical solution of public problems. Effective policymaking is difficult. The “hardware” of policymaking — the tools and structures of government that frame the possibilities for useful work — is obviously important. Less obvious is that policy performance in practice often rests more on the “software” of public problem-solving: the way people size up problems, design actions, and implement policy. In other words, it rests on the quality of the policymaking.

Like policymaking, engineering is a discipline, a craft, and a profession. Engineers learn how to apply specialized knowledge — about chemistry, physics, biology, hydraulics, electricity, and more — to the solution of practical problems. Effective engineering is similarly difficult. People work hard to learn how to practice it with professional skill. But, unlike the methods taught for engineering, the software of policy work is rarely recognized or studied. It is not adequately taught. There is no canon, and there are no norms of professional practice. American policymaking is less about deliberate engineering and more about improvised guesswork and bureaucratized habits.

My experience is as a historian who studies the details of policy episodes and the related staff work, but also as a former official who has analyzed a variety of domestic and foreign policy issues at all three levels of American government, including federal work from different bureaucratic perspectives in five presidential administrations from Ronald Reagan to Barack Obama. From this historical and contemporary vantage point, I am struck (and a bit depressed) that the quality of U.S. policy engineering is actually much, much worse in recent decades than it was throughout much of the 20th century. This is not a partisan observation — the decline spans both Republican and Democratic administrations.

I am not alone in my observations. Francis Fukuyama recently concluded that “[T]he overall quality of the American government has been deteriorating steadily for more than a generation,” notably since the 1970s. In the United States, “the apparently irreversible increase in the scope of government has masked a large decay in its quality.”[1] This worried assessment is echoed by other nonpartisan and longtime scholars who have studied the workings of American government.[2] The 2003 National Commission on Public Service observed,

The notion of public service, once a noble calling proudly pursued by the most talented Americans of every generation, draws an indifferent response from today’s young people and repels many of the country’s leading private citizens. … The system has evolved not by plan or considered analysis but by accretion over time, politically inspired tinkering, and neglect. … The need to improve performance is urgent and compelling.[3]

And they wrote that as the American occupation of Iraq was just beginning.

In this article, I offer hypotheses to help explain why American policymaking has declined, and why it was so much more effective in the mid-20th century than it is today. I offer a brief sketch of how American education about policy work evolved over the past hundred years, and I argue that the key software qualities that made for effective policy engineering neither came out of the academy nor migrated back into it.

I then outline a template for doing and teaching policy engineering. I break the engineering methods down into three interacting sets of analytical judgments: about assessment, design, and implementation. In teaching, I lean away from new, cumbersome standalone degree programs and toward more flexible forms of education that can pair more easily with many subject-matter specializations. I emphasize the value of practicing methods in detailed and more lifelike case studies. I stress the significance of an organizational culture that prizes written staff work of the quality that used to be routine but has now degraded into bureaucratic or opinionated dross….(More)”.

#Kremlin: Using Hashtags to Analyze Russian Disinformation Strategy and Dissemination on Twitter


Paper by Sarah Oates and John Gray: “Reports of Russian interference in U.S. elections have raised grave concerns about the spread of foreign disinformation on social media sites, but there is little detailed analysis that links traditional political communication theory to social media analytics. As a result, it is difficult for researchers and analysts to gauge the nature or level of the threat that is disseminated via social media. This paper leverages both social science and data science by using traditional content analysis and Twitter analytics to trace how key aspects of Russian strategic narratives were distributed via #skripal, #mh17, #Donetsk, and #russophobia in late 2018.

This work will define how key Russian international communicative goals are expressed through strategic narratives, describe how to find hashtags that reflect those narratives, and analyze user activity around the hashtags. This tests both how Twitter amplifies specific information goals of the Russians and the relative success (or failure) of particular hashtags in spreading those messages effectively. This research uses Mentionmapp, a system co-developed by one of the authors (Gray) that employs network analytics and machine intelligence to identify the behavior of Twitter users as well as generate profiles of users via posting history and connections. This study demonstrates how political communication theory can be used to frame the study of social media; how to relate knowledge of Russian strategic priorities to labels on social media such as Twitter hashtags; and how to test this approach by examining a set of Russian propaganda narratives as they are represented by hashtags. Our research finds that some Twitter users are consistently active across multiple Kremlin-linked hashtags, suggesting that knowledge of these hashtags is an important way to identify Russian propaganda online influencers. More broadly, we suggest that Twitter dichotomies such as bot/human or troll/citizen should be used with caution and analysis should instead address the nuances in Twitter use that reflect varying levels of engagement or even awareness in spreading foreign disinformation online….(More)”.
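To illustrate the cross-hashtag activity check the authors describe, here is a minimal Python sketch that flags users posting under several of the tracked hashtags. The input records and the two-hashtag threshold are hypothetical stand-ins; this is not Mentionmapp's actual system:

```python
# Illustrative cross-hashtag activity check; the (user, hashtag) records and
# the two-hashtag threshold are hypothetical, not Mentionmapp's implementation.
from collections import defaultdict

TRACKED = {"#skripal", "#mh17", "#Donetsk", "#russophobia"}

# Hypothetical (username, hashtag) pairs harvested from tweet metadata.
posts = [
    ("user_a", "#skripal"), ("user_a", "#mh17"), ("user_a", "#russophobia"),
    ("user_b", "#mh17"),
    ("user_c", "#Donetsk"), ("user_c", "#skripal"),
]

hashtags_by_user = defaultdict(set)
for user, tag in posts:
    if tag in TRACKED:
        hashtags_by_user[user].add(tag)

# Users consistently active across multiple Kremlin-linked hashtags are
# candidate online influencers, per the paper's finding.
candidates = {u: sorted(tags) for u, tags in hashtags_by_user.items() if len(tags) >= 2}
print(candidates)  # {'user_a': [...3 hashtags...], 'user_c': [...2 hashtags...]}
```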

Algorithmic Censorship on Social Platforms: Power, Legitimacy, and Resistance


Paper by Jennifer Cobbe: “Effective content moderation by social platforms has long been recognised as both important and difficult, with numerous issues arising from the volume of information to be dealt with, the culturally sensitive and contextual nature of that information, and the nuances of human communication. Attempting to scale moderation efforts, various platforms have adopted, or signalled their intention to adopt, increasingly automated approaches to identifying and suppressing content and communications that they deem undesirable. However, algorithmic forms of online censorship by social platforms bring their own concerns, including the extensive surveillance of communications and the use of machine learning systems with the distinct possibility of errors and biases. This paper adopts a governmentality lens to examine algorithmic censorship by social platforms in order to assist in the development of a more comprehensive understanding of the risks of such approaches to content moderation. This analysis shows that algorithmic censorship is distinctive for two reasons: (1) it would potentially bring all communications carried out on social platforms within reach, and (2) it would potentially allow those platforms to take a much more active, interventionist approach to moderating those communications. Consequently, algorithmic censorship could allow social platforms to exercise an unprecedented degree of control over both public and private communications, with poor transparency, weak or non-existent accountability mechanisms, and little legitimacy. Moreover, commercial considerations would be inserted further into the everyday communications of billions of people. Due to the dominance of the web by a small number of social platforms, this control may be difficult or impractical to escape for many people, although opportunities for resistance do exist.

While automating content moderation may seem like an attractive proposition for both governments and platforms themselves, the issues identified in this paper are cause for concern and should be given serious consideration….(More)”.
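A back-of-the-envelope Python sketch makes the scale concern concrete. Every figure below (message volume, prevalence, error rates) is a hypothetical assumption for illustration, not data from the paper:

```python
# Back-of-the-envelope illustration of classifier error at platform scale.
# Every figure below is a hypothetical assumption, not data from the paper.
daily_messages = 500e6        # assumed daily message volume on a large platform
prevalence = 0.01             # assumed share of genuinely violating content
false_positive_rate = 0.005   # assumed: 0.5% of benign content misflagged
true_positive_rate = 0.95     # assumed classifier sensitivity

benign = daily_messages * (1 - prevalence)
violating = daily_messages * prevalence

wrongly_suppressed = benign * false_positive_rate
correctly_suppressed = violating * true_positive_rate

print(f"Legitimate messages suppressed per day: {wrongly_suppressed:,.0f}")
print(f"Violating messages caught per day:      {correctly_suppressed:,.0f}")
# Under these assumptions, ~2.5 million legitimate messages are suppressed
# daily even though only 0.5% of benign content is misflagged — the scale
# effect on which the paper's concern about errors and biases rests.
```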

Next100


Press Release: “Next100, a new “startup” think tank built for and by the next generation of policy leaders, officially launched today with the announcement of its inaugural class of eight “Policy Entrepreneurs,” selected from a highly competitive pool of more than 740 applicants. These eight rising leaders will spend the next two years researching and developing policy solutions to the issues that matter most to the next generation, focusing in particular on education, immigration, criminal justice, climate change, economic opportunity, and the intersections between such issues.

Next100 was announced as an independent think tank earlier this year by The Century Foundation (TCF), in celebration of TCF’s 100th anniversary. It is built as a different type of “think and do” tank, both in the people, perspectives, and policy areas represented and in its approach to advancing policy change. The organization’s mission is to change the face and future of progressive policy, by making the policymaking space more inclusive of diverse, next-generation voices and by helping emerging leaders translate their creative policy ideas into tangible policy change.

“The next generation is too often and too easily excluded from the policymaking table, despite having the most at stake in the decisions made at that table,” said Emma Vadehra, executive director of Next100. “As a result, we end up with the same people, with the same ideas, trying to solve the same problems, in the same ways. Next100 is trying to change that, and reimagine what a think tank can and should be. We’re giving diverse leaders of the next generation a chance to cut through the inertia and bring their unmatched creativity, knowledge, skills, and experiences to bear on the policymaking process. Policy by those with the most at stake, for those with the most at stake.”…(More)”.

Politics, Bureaucracy and Successful Governance


Inaugural lecture by K.J. Meier: “One of the major questions, perhaps the major question, in the field of public administration is how to reconcile the need for bureaucracy with the democratic process. Bureaucracies, after all, are not seen as democratic institutions and operate based on hierarchy and expertise rather than popular will (see Mosher 1968). I take a distinctly minority view in the field, seeing bureaucracy not so much as a threat to democracy in existing mature democracies but as a necessary precondition for the existence of democracy in modern society (Meier 1997).

Democracy is a system of governance with high transactions costs that seeks democratic ideals of representation, equity, and fairness with only modest, if any, concern for efficiency. Effective bureaucracies are the institutions that produce the outcomes that build public support for democracy and in a sense generate the surplus that allows democratic processes to survive and flourish. Although bureaucracies may have none of the trappings of democracy internally, their role in contributing to democratic governance means that they should also be considered democratic institutions. Scholars, politicians, and citizens need to be concerned about preserving and protecting bureaucracy just as they seek to preserve and protect our official institutions of democracy.

Within the general theme of bureaucracy and democracy, this lecture will address two major concerns: (1) the failure of politics, which severs the crucial link between voters and elected officials and poses major challenges to bureaucrats seeking to administer effective programs, and (2) the subsequent need for bureaucracy to also become an institution that represents the public. Within this concern about bureaucratic representation, the lecture will address how bureaucracies can assess the needs of citizens, and, more narrowly, how representative bureaucracy can be, and is, instrumental to the bureaucracy, and finally the limits of symbolic representation within bureaucracies….(More)”.

Capturing citizen voice online: Enabling smart participatory local government


Tooran Alizadeh, Somwrita Sarkar and Sandy Burgoyne in Cities: “Social media and online communication have changed the way citizens engage in all aspects of their lives, from shopping and education to how communities are planned and developed. It is no longer one-way or two-way communication. Instead, via networked all-to-all communication channels, our citizens engage on urban issues in a complex and more connected way than ever before. So government needs new ways to listen to its citizens. The paper comprises three components. Firstly, we build on the growing discussions in the literature focused on smart cities, on one hand, and social media research, on the other, to capture the diversity of citizen voices and better inform decision-making. Secondly, with the support of the Australian Federal Government and in collaboration with the local government partners, we collect citizen voices from Twitter on selected urban projects. Thirdly, we present preliminary findings in terms of quantity and quality of publicly available online data representing citizen concerns on urban matters. By analyzing the sentiments of the citizen voices captured online, clustering them into topic areas, and then re-evaluating citizens’ sentiments within each cluster, we elaborate the scope and value of technologically-enabled opportunities in terms of enabling participatory local government decision-making processes….(More)”.
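The pipeline the abstract describes (sentiment scoring, clustering into topic areas, then re-evaluating sentiment per cluster) can be illustrated with a minimal Python sketch. The tweets, the word lexicon, and the cluster count are hypothetical placeholders; the authors' actual methods are not specified here, and real work would use a proper sentiment model and tuned clustering:

```python
# Minimal sketch of the pipeline described above: score sentiment, cluster
# tweets into topic areas, then re-evaluate sentiment within each cluster.
# All components are illustrative stand-ins, not the paper's actual methods.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = [  # hypothetical citizen tweets about urban projects
    "Love the new light rail plan, great for the city",
    "Light rail construction noise is terrible",
    "The park upgrade looks great, well done council",
    "Park closures during upgrade are a bad outcome for families",
]

# Placeholder lexicon sentiment: +1 per positive word, -1 per negative word.
POS, NEG = {"love", "great", "well"}, {"terrible", "bad", "noise"}
def sentiment(text):
    words = [w.strip(",.!") for w in text.lower().split()]
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

# Cluster tweets into topic areas via TF-IDF features and k-means.
X = TfidfVectorizer(stop_words="english").fit_transform(tweets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Re-evaluate citizens' sentiments within each topic cluster.
for k in sorted(set(labels)):
    scores = [sentiment(t) for t, lab in zip(tweets, labels) if lab == k]
    print(f"topic cluster {k}: mean sentiment {sum(scores) / len(scores):+.2f}")
```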