Understanding Smart Cities: Innovation ecosystems, technological advancements, and societal challenges


Introduction to the Special Issue of Technological Forecasting and Social Change by Francesco Paolo Appio, Marcos Lima, and Sotirios Paroutis: “Smart Cities initiatives are spreading all around the globe at a phenomenal pace. Their bold ambition is to increase the competitiveness of local communities through innovation while increasing the quality of life for their citizens through better public services and a cleaner environment. Prior research has shown contrasting views and a multitude of dimensions and approaches to look at this phenomenon. While this can stimulate the debate, it lacks a systematic assessment and an integrative view. The papers in the special issue on “Understanding Smart Cities: Innovation Ecosystems, Technological Advancements, and Societal Challenges” take stock of past work and provide new insights through the lenses of a hybrid framework. Moving from these premises, we offer an overview of the topic by featuring possible linkages and thematic clusters. Then, we sketch a novel research agenda for scholars, practitioners, and policy makers who wish to engage in – and build – a critical, constructive, and conducive discourse on Smart Cities….(More)”.

Too Many Secrets? When Should the Intelligence Community be Allowed to Keep Secrets?


Ross W. Bellaby in Polity: “In recent years, revelations regarding reports of torture by the U.S. Central Intelligence Agency and the quiet growth of the National Security Agency’s pervasive cyber-surveillance system have brought into doubt the level of trust afforded to the intelligence community. The question of its trustworthiness requires determining how much secrecy it should enjoy and what mechanisms should be employed to detect and prevent future abuse. My argument is not a call for complete transparency, however, as secret intelligence does play an important and ethical role in society. Rather, I argue that existing systems built on a prioritization of democratic assumptions are fundamentally ill-equipped for dealing with the particular challenge of intelligence secrecy. As the necessary circle of secrecy is extended, political actors are insulated from the very public gaze that ensures they are working in line with the political community’s best interests. Therefore, a new framework needs to be developed, one that this article argues should be based on the just war tradition, where the principles of just cause, legitimate authority, last resort, proportionality, and discrimination are able to balance the secrecy that the intelligence community needs in order to detect and prevent threats with the harm that too much or incorrect secrecy can cause to people….(More)”.

Creating value through data collaboratives


Paper by Bram Klievink, Haiko van der Voort, and Wijnand Veeneman: “Driven by the technological capabilities that ICTs offer, data enable new ways to generate value for both society and the parties that own or offer the data. This article looks at the idea of data collaboratives as a form of cross-sector partnership to exchange and integrate data and data use to generate public value. The concept thereby bridges data-driven value creation and collaboration, both current themes in the field.

To understand how data collaboratives can add value in a public governance context, we exploratively studied the qualitative longitudinal case of an infomobility platform. We investigated the ability of a data collaborative to produce results while facing significant challenges and tensions between the goals of parties, each having the conflicting objectives of simultaneously retaining control whilst allowing for generativity. Taken together, the literature and case study findings help us to understand the emergence and viability of data collaboratives. Although limited by this study’s explorative nature, we find that conditions such as prior history of collaboration and supportive rules of the game are key to the emergence of collaboration. Positive feedback between trust and the collaboration process can institutionalise the collaborative, which helps it survive if conditions change for the worse….(More)”.

Sludge and Ordeals


Paper by Cass R. Sunstein: “In 2015, the United States government imposed 9.78 billion hours of paperwork burdens on the American people. Many of these hours are best categorized as “sludge,” reducing access to important licenses, programs, and benefits. Because of the sheer costs of sludge, rational people are effectively denied life-changing goods and services; the problem is compounded by the existence of behavioral biases, including inertia, present bias, and unrealistic optimism. In principle, a serious deregulatory effort should be undertaken to reduce sludge, through automatic enrollment, greatly simplified forms, and reminders. At the same time, sludge can promote legitimate goals.

First, it can protect program integrity, which means that policymakers might have to make difficult tradeoffs between (1) granting benefits to people who are not entitled to them and (2) denying benefits to people who are entitled to them. Second, it can overcome impulsivity, recklessness, and self-control problems. Third, it can prevent intrusions on privacy. Fourth, it can serve as a rationing device, ensuring that benefits go to people who most need them. In most cases, these defenses of sludge turn out to be more attractive in principle than in practice.

For sludge, a form of cost-benefit analysis is essential, and it will often argue in favor of a neglected form of deregulation: sludge reduction. For both public and private institutions, “Sludge Audits” should become routine. Various suggestions are offered for new action by the Office of Information and Regulatory Affairs, which oversees the Paperwork Reduction Act; for courts; and for Congress…(More)”.

On the privacy-conscientious use of mobile phone data


Yves-Alexandre de Montjoye et al. in Nature: “The breadcrumbs we leave behind when using our mobile phones—whom somebody calls, for how long, and from where—contain unprecedented insights about us and our societies. Researchers have compared the recent availability of large-scale behavioral datasets, such as the ones generated by mobile phones, to the invention of the microscope, giving rise to the new field of computational social science.

With mobile phone penetration rates reaching 90% and under-resourced national statistical agencies, the data generated by our phones—traditional Call Detail Records (CDR) but also high-frequency x-Detail Record (xDR)—have the potential to become a primary data source to tackle crucial humanitarian questions in low- and middle-income countries. For instance, they have already been used to monitor population displacement after disasters, to provide real-time traffic information, and to improve our understanding of the dynamics of infectious diseases. These data are also used by governmental and industry practitioners in high-income countries.

While there is little doubt on the potential of mobile phone data for good, these data contain intimate details of our lives: rich information about our whereabouts, social life, preferences, and potentially even finances. A BCG study showed, e.g., that 60% of Americans consider location data and phone number history—both available in mobile phone data—as “private”.

Historically and legally, the balance between the societal value of statistical data (in aggregate) and the protection of privacy of individuals has been achieved through data anonymization. While hundreds of different anonymization algorithms exist, most of them are variations and improvements of the seminal k-anonymity algorithm introduced in 1998. Recent studies have, however, shown that pseudonymization and standard de-identification are not sufficient to prevent users from being re-identified in mobile phone data. Four data points—approximate places and times where an individual was present—have been shown to be enough to uniquely re-identify them 95% of the time in a mobile phone dataset of 1.5 million people. Furthermore, re-identification estimations using unicity—a metric to evaluate the risk of re-identification in large-scale datasets—and attempts at k-anonymizing mobile phone data ruled out de-identification as sufficient to truly anonymize the data. This was echoed in the recent report of the [US] President’s Council of Advisors on Science and Technology on Big Data Privacy, which considered de-identification to be useful as an “added safeguard, but [emphasized that] it is not robust against near-term future re-identification methods”.
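The unicity idea referenced above can be illustrated with a small, self-contained sketch (the function name, the toy traces, and the sampling scheme below are our own illustrative assumptions, not the paper's implementation): draw p random (place, time) points from one user's trace and count how often no other trace in the dataset contains all p of them.

```python
import random

def unicity(traces, p=4, trials=200, seed=0):
    """Estimate unicity: the fraction of sampled users uniquely
    identified by p randomly drawn (place, time) points from
    their own trace.

    traces: dict mapping user id -> set of (place, time) tuples.
    """
    rng = random.Random(seed)
    users = [u for u, pts in traces.items() if len(pts) >= p]
    hits = 0
    for _ in range(trials):
        u = rng.choice(users)
        sample = set(rng.sample(sorted(traces[u]), p))
        # The user is re-identified if theirs is the only trace
        # containing all p sampled points.
        matches = [v for v, pts in traces.items() if sample <= pts]
        if matches == [u]:
            hits += 1
    return hits / trials

# Toy data: three users with overlapping but distinct traces.
traces = {
    "a": {(1, 9), (2, 10), (3, 11), (4, 12), (5, 13)},
    "b": {(1, 9), (2, 10), (6, 11), (7, 12), (8, 13)},
    "c": {(9, 9), (2, 10), (3, 11), (7, 12), (5, 13)},
}
print(unicity(traces, p=4))  # → 1.0 on this toy data
```

With realistic datasets the same estimate is run over millions of traces; the paper's 95% figure corresponds to p = 4 in such a dataset.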

The limits of the historical de-identification framework to adequately balance risks and benefits in the use of mobile phone data are a major hindrance to their use by researchers, development practitioners, humanitarian workers, and companies. This became particularly clear at the height of the Ebola crisis, when qualified researchers (including some of us) were prevented from accessing relevant mobile phone data on time despite efforts by mobile phone operators, the GSMA, and UN agencies, with privacy being cited as one of the main concerns.

These privacy concerns are, in our opinion, due to the failures of the traditional de-identification model and the lack of a modern and agreed-upon framework for the privacy-conscientious use of mobile phone data by third parties, especially in the context of the EU General Data Protection Regulation (GDPR). Such frameworks have been developed for the anonymous use of other sensitive data such as census, household survey, and tax data. The positive societal impact of making these data accessible and the technical means available to protect people’s identity have been considered and a trade-off, albeit far from perfect, has been agreed on and implemented. This has allowed the data to be used in aggregate for the benefit of society. Such thinking and an agreed-upon set of models have been missing so far for mobile phone data. This has left data protection authorities, mobile phone operators, and data users with little guidance on technically sound yet reasonable models for the privacy-conscientious use of mobile phone data. This has often resulted in suboptimal tradeoffs, if any.

In this paper, we propose four models for the privacy-conscientious use of mobile phone data (Fig. 1). All of these models 1) focus on a use of mobile phone data in which only statistical, aggregate information is ultimately needed by a third party and, while this needs to be confirmed on a per-country basis, 2) are designed to fall under the legal umbrella of “anonymous use of the data”. Examples of cases in which only statistical aggregated information is ultimately needed by the third party are discussed below. They would include, e.g., disaster management, mobility analysis, or the training of AI algorithms in which only aggregate information on people’s mobility is ultimately needed by agencies, and exclude cases in which individual-level identifiable information is needed such as targeted advertising or loans based on behavioral data.

Figure 1: Matrix of the four models for the privacy-conscientious use of mobile phone data.

First, it is important to insist that none of these models is a silver bullet…(More)”.

Distributed, privacy-enhancing technologies in the 2017 Catalan referendum on independence: New tactics and models of participatory democracy


M. Poblet in First Monday: “This paper examines new civic engagement practices unfolding during the 2017 referendum on independence in Catalonia. These practices constitute one of the first signs of some emerging trends in the use of the Internet for civic and political action: the adoption of horizontal, distributed, and privacy-enhancing technologies that rely on P2P networks and advanced cryptographic tools. In this regard, the case of the 2017 Catalan referendum, framed within conflicting political dynamics, can be considered a first-of-its-kind in participatory democracy. The case also offers an opportunity to reflect on an interesting paradox that twenty-first century activism will face: the more it will rely on privacy-friendly, secured, and encrypted networks, the more open, inclusive, ethical, and transparent it will need to be….(More)”.

Ethical Dilemmas in Cyberspace


Paper by Martha Finnemore: “This essay steps back from the more detailed regulatory discussions in other contributions to this roundtable on “Competing Visions for Cyberspace” and highlights three broad issues that raise ethical concerns about our activity online. First, the commodification of people—their identities, their data, their privacy—that lies at the heart of business models of many of the largest information and communication technologies companies risks instrumentalizing human beings. Second, concentrations of wealth and market power online may be contributing to economic inequalities and other forms of domination. Third, long-standing tensions between the security of states and the human security of people in those states have not been at all resolved online and deserve attention….(More)”.

Towards matching user mobility traces in large-scale datasets


Paper by Daniel Kondor, Behrooz Hashemian, Yves-Alexandre de Montjoye and Carlo Ratti: “The problem of unicity and reidentifiability of records in large-scale databases has been studied in different contexts and approaches, with focus on preserving privacy or matching records from different data sources. With an increasing number of service providers nowadays routinely collecting location traces of their users on unprecedented scales, there is a pronounced interest in the possibility of matching records and datasets based on spatial trajectories. Extending previous work on reidentifiability of spatial data and trajectory matching, we present the first large-scale analysis of user matchability in real mobility datasets on realistic scales, i.e. among two datasets that consist of several million people’s mobility traces, coming from a mobile network operator and transportation smart card usage. We extract the relevant statistical properties which influence the matching process and analyze their impact on the matchability of users. We show that for individuals with typical activity in the transportation system (those making 3-4 trips per day on average), a matching algorithm based on the co-occurrence of their activities is expected to achieve a 16.8% success rate only after a one-week-long observation of their mobility traces, and over 55% after four weeks. We show that the main determinant of matchability is the expected number of co-occurring records in the two datasets. Finally, we discuss different scenarios in terms of data collection frequency and give estimates of matchability over time. We show that with higher frequency data collection becoming more common, we can expect much higher success rates in even shorter intervals….(More)”.
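The co-occurrence idea behind such matching can be sketched in a few lines (a toy illustration under our own simplifying assumptions — the paper's actual estimator models time windows, spatial resolution, and expected co-occurrence counts, none of which appear here): score each cross-dataset candidate pair by the number of (location, time-bin) records present in both traces, and keep the highest-scoring match.

```python
from collections import Counter

def match_users(dataset_a, dataset_b):
    """Greedy co-occurrence matching: for each user in dataset_a,
    pick the dataset_b user whose records co-occur (same location,
    same time bin) most often.

    Each dataset maps user id -> set of (location, time_bin) records.
    """
    matches = {}
    for ua, recs_a in dataset_a.items():
        scores = Counter()
        for ub, recs_b in dataset_b.items():
            scores[ub] = len(recs_a & recs_b)  # co-occurring records
        best, count = scores.most_common(1)[0]
        matches[ua] = best if count > 0 else None
    return matches

# Toy example: phone records (A) vs. smart-card taps (B).
phone = {"p1": {(10, 1), (20, 2), (30, 3)},
         "p2": {(11, 1), (21, 2), (31, 3)}}
card = {"c1": {(10, 1), (20, 2)},           # co-occurs with p1
        "c2": {(11, 1), (31, 3), (40, 4)}}  # co-occurs with p2
print(match_users(phone, card))  # → {'p1': 'c1', 'p2': 'c2'}
```

The paper's finding that matchability is driven by the expected number of co-occurring records corresponds, in this sketch, to how large the intersection counts grow as the observation window lengthens.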

Cybersecurity of the Person


Paper by Jeff Kosseff: “U.S. cybersecurity law is largely an outgrowth of the early-aughts concerns over identity theft and financial fraud. Cybersecurity laws focus on protecting identifiers such as driver’s licenses and social security numbers, and financial data such as credit card numbers. Federal and state laws require companies to protect this data and notify individuals when it is breached, and impose civil and criminal liability on hackers who steal or damage this data. In this paper, I argue that our current cybersecurity laws are too narrowly focused on financial harms. While such concerns remain valid, they are only one part of the cybersecurity challenge that our nation faces.

Too often overlooked by the cybersecurity profession are the harms to individuals, such as revenge pornography and online harassment. Our legal system typically addresses these harms through retrospective criminal prosecution and civil litigation, both of which face significant limits. Accounting for such harms in our conception of cybersecurity will help to better align our laws with these threats and reduce the likelihood of the harms occurring….(More)”.

Using insights from behavioral economics to nudge individuals towards healthier choices when eating out


Paper by Stéphane Bergeron, Maurice Doyon, Laure Saulais and JoAnne Labrecque: “Using a controlled experiment in a restaurant with naturally occurring clients, this study investigates how nudging can be used to design menus that guide consumers to make healthier choices. It examines the use of default options, focusing specifically on two types of defaults that can be found when ordering food in a restaurant: automatic and standard defaults. Both types of defaults significantly affected choices, but did not adversely impact the satisfaction of individual choices. The results suggest that menu design could effectively use non-informational strategies such as nudging to promote healthier individual choices without restricting the offer or reducing satisfaction….(More)”.