The gamification of governance, this paper claims, shows great potential to foster civic engagement and encourage participation in policy-making. The data on the general public's response to, and perception of, game-design incentives are encouraging. Yet, the paper argues, gamification is not without risks. Gamified policy-making poses various challenges, particularly with regard to security and inclusiveness (i.e., do gamified policies conform to recognized security and privacy standards? Are they sufficiently inclusive?). Concerns also surround the quality of the public's response to gamified incentives (i.e., is gamification merely encouraging low-risk/low-cost engagement, or does it genuinely drive public participation, both online and offline?). Questions have also been raised about the longevity and duration of engagement: do game-design elements foster long-term, durable civic engagement, or do they merely encourage one-time, occasional participation?
The paper develops around five concepts that are key to understanding the link between gamification, civic engagement, and public sector innovation: "Reputation", "Automation", "Structure", "Nudging", and "Crowdsourcing". Alongside the analysis of these concepts and their interplay, the paper provides an empirical account of efforts to 'gamify' public policies at both the national and supranational levels; it illustrates the outcomes that public regulators expect from gamification efforts; and it considers the weaknesses, both practical and theoretical, related to the use of
Cities, Government, Law, and Civil Society
Perhaps the role of cities in civil society has been neglected by the legal academy because cities are not sovereigns. Sovereignty has often been the issue that provokes theoretical attention to government and its role in civil life. At the heart of the federal-national account of civil society and government is the potential threat the sovereign poses to other actors in civil society. But there is no necessary connection between concentrating on the nature and workings of sovereignty and considering the role for government and law in civil society. And when a government is not a sovereign, its ability to threaten is inherently constrained. That is what examining cities, non-sovereign governments embedded in a web of other governments, shows us.
When we turn our attention to cities, a very different role for government and law emerges. Cities often exemplify how government and law can enable civil society and all those encompassed by it. They show how
Beyond the IRB: Towards a typology of research ethics in applied economics
We discuss ethical practices including IRB approvals, which focus almost entirely on risks to subjects; pre-analysis plans and conflict-of-interest disclosures, which encourage transparency so as not to mislead editors, reviewers, and readers; and self-plagiarism, which has become
Innovation Contests: How to Engage Citizens in Solving Urban Problems?
Understanding Smart Cities: Innovation ecosystems, technological advancements, and societal challenges
Introduction to the Special Issue of Technological Forecasting and Social Change by Francesco Paolo Appio, Marcos Lima, and Sotirios Paroutis: “Smart Cities initiatives are spreading all around the globe at a phenomenal pace. Their bold ambition is to increase the competitiveness of local communities through innovation while improving the quality of life for their citizens through better public services and a cleaner environment. Prior research has shown contrasting views and a multitude of dimensions and approaches for looking at this phenomenon. While this can stimulate the debate, it lacks a systematic assessment and an integrative view. The papers in the special issue on “Understanding Smart Cities: Innovation Ecosystems, Technological Advancements, and Societal Challenges” take stock of past work and provide new insights through the lenses of a hybrid framework. Moving from these premises, we offer an overview of the topic by featuring possible linkages and thematic clusters. Then, we sketch a novel research agenda for scholars, practitioners, and
Too Many Secrets? When Should the Intelligence Community be Allowed to Keep Secrets?
Ross W. Bellaby in Polity: “In recent years, revelations regarding reports of torture by the U.S. Central Intelligence Agency and the quiet growth of the National Security Agency’s pervasive cyber-surveillance system have brought into doubt the level of trust afforded to the intelligence community. The question of its trustworthiness requires determining how much secrecy it should enjoy and what mechanisms should be employed to detect and prevent future abuse. My argument is not a call for complete transparency, however, as secret intelligence does play an important and ethical role in society. Rather, I argue that existing systems built on
Creating value through data collaboratives
Paper by Bram Klievink, Haiko van der Voort, and Wijnand Veeneman: “Driven by the technological capabilities that ICTs offer, data enable new ways to generate value for both society and the parties that own or offer the data. This article looks at the idea of data collaboratives as a form of cross-sector partnership to exchange and integrate data and data
To understand how data collaboratives can add value in a public governance context, we exploratively studied the qualitative longitudinal case of an
Sludge and Ordeals
Paper by Cass R. Sunstein: “In 2015, the United States government imposed 9.78 billion hours of paperwork burdens on the American people. Many of these hours are best categorized as “sludge,” reducing access to important licenses, programs, and benefits. Because of the sheer costs of sludge, rational people are effectively denied life-changing goods and services; the problem is compounded by the existence of behavioral biases, including inertia, present bias, and unrealistic optimism. In principle, a serious deregulatory effort should be undertaken to reduce sludge, through automatic enrollment, greatly simplified forms, and reminders. At the same time, sludge can promote legitimate goals.
First, it can protect program integrity, which means that policymakers might have to make difficult tradeoffs between (1) granting benefits to people who are not entitled to them and (2) denying benefits to people who are entitled to them. Second, it can overcome impulsivity, recklessness, and self-control problems. Third, it can prevent intrusions on privacy. Fourth, it can serve as a rationing device, ensuring that benefits go to people who most need them. In most cases, these defenses of sludge turn out to be more attractive in principle than in practice.
For sludge, a form of cost-benefit analysis is essential, and it will often argue in favor of a neglected form of deregulation: sludge reduction. For both public and private institutions
On the privacy-conscientious use of mobile phone data
Yves-Alexandre de Montjoye et al in Nature: “The breadcrumbs we leave behind when using our mobile phones—who somebody calls, for how long, and from where—contain unprecedented insights about us and our societies. Researchers have compared the recent availability of large-scale behavioral datasets, such as the ones generated by mobile phones, to the invention of the microscope, giving rise to the new field of computational social science.
With mobile phone penetration rates reaching 90% and under-resourced national statistical agencies, the data generated by our phones—traditional Call Detail Records (CDR) but also high-frequency x-Detail Record (xDR)—have the potential to become a primary data source to tackle crucial humanitarian questions in low- and middle-income countries. For instance, they have already been used to monitor population displacement after disasters, to provide real-time traffic information, and to improve our understanding of the dynamics of infectious diseases. These data are also used by governmental and industry practitioners in high-income countries.
While there is little doubt on the potential of mobile phone data for good, these data contain intimate details of our lives: rich information about our whereabouts, social life, preferences, and potentially even finances. A BCG study showed, e.g., that 60% of Americans consider location data and phone number history—both available in mobile phone data—as “private”.
Historically and legally, the balance between the societal value of statistical data (in aggregate) and the protection of privacy of individuals has been achieved through data anonymization. While hundreds of different anonymization algorithms exist, most of them are variations and improvements of the seminal k-anonymity algorithm introduced in 1998. Recent studies have, however, shown that pseudonymization and standard de-identification are not sufficient to prevent users from being re-identified in mobile phone data. Four data points—approximate places and times where an individual was present—have been shown to be enough to uniquely re-identify them 95% of the time in a mobile phone dataset of 1.5 million people. Furthermore, re-identification estimations using unicity—a metric to evaluate the risk of re-identification in large-scale datasets—and attempts at k-anonymizing mobile phone data ruled out de-identification as sufficient to truly anonymize the data. This was echoed in the recent report of the [US] President’s Council of Advisors on Science and Technology on Big Data Privacy, which considers de-identification to be useful as an “added safeguard, but [emphasized that] it is not robust against near-term future re-identification methods”.
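To make the unicity metric concrete, here is a minimal sketch in Python of how one might estimate it on a toy dataset of coarse (antenna, hour) traces: repeatedly pick a user, sample four of their spatiotemporal points, and check whether any other trace also contains all of them. The data format, function name, and sampling loop are illustrative assumptions, not the procedure used in the studies quoted above.

```python
import random

def estimate_unicity(traces, p=4, trials=200, seed=0):
    """Estimate unicity: the fraction of trials in which p randomly chosen
    (antenna, hour) points from one user's trace match no other user.

    `traces` maps a user id to a set of (antenna_id, hour) tuples.
    Illustrative sketch only; not the procedure used in the cited studies.
    """
    rng = random.Random(seed)
    eligible = [u for u, pts in traces.items() if len(pts) >= p]
    unique_hits = 0
    for _ in range(trials):
        user = rng.choice(eligible)
        sample = set(rng.sample(sorted(traces[user]), p))
        # The sampled points single the user out if no other trace contains them all.
        if not any(sample <= pts for other, pts in traces.items() if other != user):
            unique_hits += 1
    return unique_hits / trials

# Toy traces: three users observed at coarse (antenna, hour) points.
toy_traces = {
    "alice": {("A1", 9), ("A2", 12), ("A3", 18), ("A1", 22), ("A4", 8)},
    "bob":   {("A1", 9), ("A2", 12), ("A5", 18), ("A6", 22), ("A4", 8)},
    "carol": {("A7", 9), ("A2", 13), ("A3", 18), ("A1", 23), ("A8", 8)},
}
print(estimate_unicity(toy_traces))  # 1.0 here: four points always single a user out
```

On this toy data four points always identify a user; the studies cited in the excerpt report comparably high uniqueness at the scale of 1.5 million real traces.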
The limits of the historical de-identification framework to adequately balance risks and benefits in the use of mobile phone data are a major hindrance to their use by researchers, development practitioners, humanitarian workers, and companies. This became particularly clear at the height of the Ebola crisis, when qualified researchers (including some of us) were prevented from accessing relevant mobile phone data on time despite efforts by mobile phone operators, the GSMA, and UN agencies, with privacy being cited as one of the main concerns.
These privacy concerns are, in our opinion, due to the failures of the traditional de-identification model and the lack of a modern, agreed-upon framework for the privacy-conscientious use of mobile phone data by third parties, especially in the context of the EU General Data Protection Regulation (GDPR). Such frameworks have been developed for the anonymous use of other sensitive data such as census, household survey, and tax data. The positive societal impact of making these data accessible and the technical means available to protect people’s identity have been considered, and a trade-off, albeit far from perfect, has been agreed on and implemented. This has allowed the data to be used in aggregate for the benefit of society. Such thinking and an agreed-upon set of models have been missing so far for mobile phone data. This has left data protection authorities, mobile phone operators, and data users with little guidance on technically sound yet reasonable models for the privacy-conscientious use of mobile phone data. The result has often been suboptimal tradeoffs, if any.
In this paper, we propose four models for the privacy-conscientious use of mobile phone data (Fig. 1). All of these models 1) focus on a use of mobile phone data in which only statistical, aggregate information is ultimately needed by a third-party and, while this needs to be confirmed on a per-country basis, 2) are designed to fall under the legal umbrella of “anonymous use of the data”. Examples of cases in which only statistical aggregated information is ultimately needed by the third-party are discussed below. They would include, e.g., disaster management, mobility analysis, or the training of AI algorithms in which only aggregate information on people’s mobility is ultimately needed by agencies, and exclude cases in which individual-level identifiable information is needed such as targeted advertising or loans based on behavioral data.
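As a rough illustration of the “only statistical, aggregate information” setting, the sketch below (a Python example with an assumed record format and threshold, not one of the paper’s four models) turns individual-level (user, antenna, hour) records into presence counts per antenna-hour cell and withholds any cell below a small-count threshold before release to a third party. Small-count suppression is shown only as a familiar safeguard; the paper’s actual models are not reproduced here.

```python
def aggregate_presence(cdrs, min_count=10):
    """Aggregate individual records into per-(antenna, hour) user counts,
    releasing only cells that meet a suppression threshold.

    `cdrs` is an iterable of (user_id, antenna_id, hour) tuples; the record
    format and the small-count threshold are assumptions for this illustration.
    """
    users_per_cell = {}
    for user_id, antenna_id, hour in cdrs:
        users_per_cell.setdefault((antenna_id, hour), set()).add(user_id)
    return {
        cell: len(users)
        for cell, users in users_per_cell.items()
        if len(users) >= min_count
    }

# The third party only ever receives the aggregate dictionary, never raw records.
sample_cdrs = [(f"u{i}", "A1", 9) for i in range(12)] + [("u99", "A2", 9)]
print(aggregate_presence(sample_cdrs))  # {('A1', 9): 12}; the ('A2', 9) cell is suppressed
```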

First, it is important to insist that none of these models is a silver bullet…(More)”.
Distributed, privacy-enhancing technologies in the 2017 Catalan referendum on independence: New tactics and models of participatory democracy
M. Poblet at First Monday: “This paper examines new civic engagement practices unfolding during the 2017 referendum on independence in Catalonia. These practices constitute one of the first signs of some emerging trends in the use of the Internet for civic and political action: the adoption of horizontal, distributed, and privacy-enhancing technologies that rely on P2P networks and advanced cryptographic tools. In this regard, the case of the 2017 Catalan referendum, framed within conflicting political dynamics, can be considered a first of its kind in participatory democracy. The case also offers an opportunity to reflect on an interesting paradox that twenty-first century activism will face: the more it relies on privacy-friendly, secured, and encrypted networks, the more open, inclusive, ethical, and transparent it will need to be….(More)”.