Exploring the Smart City Indexes and the Role of Macro Factors for Measuring Cities Smartness


María Verónica Alderete in Social Indicators Research: “The main objective of this paper is to discuss the key factors involved in the definition of smart city indexes. Although recent literature has explored the smart city subject, it remains an open question whether macro ICT factors should also be considered when assessing the technological innovation of a city. To achieve this goal, the paper first provides a literature review of the smart city, including an analysis of the smart city concept together with a theoretical framework based on the knowledge society and the Quintuple Helix innovation model. Secondly, the study analyzes some smart city cases in developed and developing countries. Thirdly, it describes, critiques and compares some well-known smart city indexes. Lastly, the empirical literature is explored to detect whether any studies propose changes to smart city indexes or methodologies to account for macro-level variables. It turns out that cities at the top of the index rankings are from developed countries, while most cities at the bottom of the rankings are from developing or less developed countries. As a result, the paper argues that the ICT development of smart cities depends both on the cities’ own characteristics and features and on macro-technological factors. Moreover, there are few papers on the subject that include macro or country factors, and most of them are literature reviews or case studies. There is a lack of studies discussing the indexes’ methodologies. This paper provides some guidelines to build one….(More)”.
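
The composite-index methodology the paper critiques is, at its core, a weighted sum of normalized indicators. As a rough sketch of how a macro (country-level) ICT factor could enter such an index alongside city-level indicators, consider the following; the indicator names, weights and values here are hypothetical illustrations, not the paper's proposal.

```python
# A sketch of a composite smart city index with a macro (country-level) ICT
# factor added alongside city-level indicators. Indicator names, weights,
# and values are hypothetical, not the paper's proposal.

def min_max(values):
    """Normalize a {city: value} mapping to the [0, 1] range."""
    lo, hi = min(values.values()), max(values.values())
    return {c: (v - lo) / (hi - lo) if hi > lo else 0.0 for c, v in values.items()}

cities = {
    "city_a": {"broadband": 92, "egov_services": 80, "country_ict_dev": 8.2},
    "city_b": {"broadband": 45, "egov_services": 55, "country_ict_dev": 4.1},
    "city_c": {"broadband": 71, "egov_services": 62, "country_ict_dev": 6.0},
}

weights = {"broadband": 0.4, "egov_services": 0.3, "country_ict_dev": 0.3}

def composite_index(cities, weights):
    """Weighted sum of min-max normalized indicators, per city."""
    normalized = {
        ind: min_max({c: vals[ind] for c, vals in cities.items()})
        for ind in weights
    }
    return {
        c: sum(w * normalized[ind][c] for ind, w in weights.items())
        for c in cities
    }

print(composite_index(cities, weights))  # city_a ranks first on these numbers
```

Because a country-level indicator such as country_ict_dev enters every city's score, cities in countries with weak ICT infrastructure are pulled down regardless of their own efforts, which is the macro effect the paper asks index builders to account for.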

Stop the Open Data Bus, We Want to Get Off


Paper by Chris Culnane, Benjamin I. P. Rubinstein, and Vanessa Teague: “The subject of this report is the re-identification of individuals in the Myki public transport dataset released as part of the Melbourne Datathon 2018. We demonstrate the ease with which we were able to re-identify ourselves, our co-travellers, and complete strangers; our analysis raises concerns about the nature and granularity of the data released, in particular the ability to identify vulnerable or sensitive groups…..

This work highlights how a large number of passengers could be re-identified in the 2018 Myki data release, with detailed discussion of specific people. The implications of re-identification are potentially serious: ex-partners, one-time acquaintances, or other parties can determine places of home and work, times of travel, and co-travelling patterns—presenting a risk to vulnerable groups in particular…

In 2018 the Victorian Government released a large passenger-centric transport dataset to a data science competition—the 2018 Melbourne Datathon. Access to the data was unrestricted, with a URL provided on the datathon’s website to download the complete dataset from an Amazon S3 bucket. Over 190 teams analysed the data over the two-month competition period. The data consisted of touch-on and touch-off events for the Myki smart card ticketing system used throughout the state of Victoria, Australia. With such data, contestants could apply retrospective analyses to an entire public transport system, explore the suitability of predictive models, and so on.

The Myki ticketing system is used across Victorian public transport: on trains, buses and trams. The dataset was longitudinal, consisting of touch-on and touch-off events from Week 27 of 2015 through Week 26 of 2018. Each event contained a card identifier (cardId; not the actual card number), the card type, the time of the touch on or off, and various location information, for example a stop ID or route ID, along with other fields we omit here for brevity. Events could be indexed by cardId, so all the events associated with a single card could be retrieved. There are a total of 15,184,336 cards in the dataset—more than twice the 2018 population of Victoria. It appears that all touch-on and touch-off events for metropolitan trains and trams were included, though other forms of transport, such as intercity trains and some buses, are absent. In total there are nearly 2 billion touch-on and touch-off events in the dataset.
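
The re-identification the authors describe is essentially a linkage attack: a handful of known sightings of a person (a stop and an approximate time) is usually enough to isolate their cardId, and with it their entire multi-year travel history. Below is a minimal sketch of the idea, assuming hypothetical field names and synthetic data; it is not the authors' code.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Minimal sketch of the linkage attack, with hypothetical field names and
# synthetic data (this is not the authors' code). Each event in the release
# is modelled as a (card_id, stop_id, touch_time) tuple.
def candidate_cards(events, sightings, window=timedelta(minutes=5)):
    """Return card IDs whose event history matches every known sighting.

    sightings: (stop_id, approximate time) pairs where the target person
    was observed touching on or off. A few sightings usually collapse the
    candidate set to a single card.
    """
    by_card = defaultdict(list)
    for card_id, stop_id, touch_time in events:
        by_card[card_id].append((stop_id, touch_time))

    return [
        card_id
        for card_id, trips in by_card.items()
        if all(
            any(stop == s and abs(t - when) <= window for stop, t in trips)
            for s, when in sightings
        )
    ]

events = [
    ("card_123", "flinders_st", datetime(2018, 3, 5, 8, 13)),
    ("card_123", "southern_cross", datetime(2018, 3, 6, 8, 7)),
    ("card_123", "flinders_st", datetime(2018, 3, 7, 17, 50)),
    ("card_456", "flinders_st", datetime(2018, 3, 5, 8, 14)),
]
sightings = [
    ("flinders_st", datetime(2018, 3, 5, 8, 12)),
    ("southern_cross", datetime(2018, 3, 6, 8, 9)),
    ("flinders_st", datetime(2018, 3, 7, 17, 48)),
]
print(candidate_cards(events, sightings))  # ['card_123']
```

This is also why pseudonymising the card number achieves little on its own: the pattern of events attached to each cardId is itself an identifier.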

No information was provided as to the de-identification that was performed on the dataset. Our analysis indicates that little to no de-identification took place on the bulk of the data, as will become evident in Section 3. The exception is the cardId, which appears to have been mapped in some way from the Myki Card Number. The exact mapping has not been discovered, although concerns remain as to its security effectiveness….(More)”.

Datafication and accountability in public health


Introduction to a special issue of Social Studies of Science by Klaus Hoeyer, Susanne Bauer, and Martyn Pickersgill: “In recent years and across many nations, public health has become subject to forms of governance that are said to be aimed at establishing accountability. In this introduction to a special issue, From Person to Population and Back: Exploring Accountability in Public Health, we suggest opening up accountability assemblages by asking a series of ostensibly simple questions that inevitably yield complicated answers: What is counted? What counts? And to whom, how and why does it count? Addressing such questions involves staying attentive to the technologies and infrastructures through which data come into being and are made available for multiple political agendas. Through a discussion of public health, accountability and datafication we present three key themes that unite the various papers as well as illustrate their diversity….(More)”.

How does Finland use health and social data for the public benefit?


Karolina Mackiewicz at ICT & Health: “…Better innovation opportunities, quicker access to comprehensive ready-combined data, smoother permit procedures needed for research – those are some of the benefits for society, academia or business announced by the Ministry of Social Affairs and Health of Finland when the Act on the Secondary Use of Health and Social Data was introduced.

It came into force on 1 May 2019. According to the Finnish Innovation Fund SITRA, which was involved in the development of the legislation and carried out the pilot projects, it’s a ‘groundbreaking’ piece of legislation. It not only effectively introduces a one-stop shop for data, but is also one of the first, if not the first, implementations of the GDPR (the EU’s General Data Protection Regulation) for the secondary use of data in Europe. 

The aim of the Act is “to facilitate the effective and safe processing and access to the personal social and health data for steering, supervision, research, statistics and development in the health and social sector”. A second objective is to guarantee an individual’s legitimate expectations as well as their rights and freedoms when personal data is processed. In other words, the Ministry of Health promises that the Act will help eliminate the administrative burden researchers and innovative businesses face in accessing the data, while respecting the privacy of individuals and providing the conditions for an ethically sustainable way of using data….(More)”.

Blockchain and the General Data Protection Regulation


Report by the European Parliamentary Research Service (EPRS): “Blockchain is a much-discussed instrument that, according to some, promises to inaugurate a new era of data storage and code execution, which could, in turn, stimulate new business models and markets. The precise impact of the technology is, of course, hard to anticipate with certainty, particularly as many remain sceptical of blockchain’s potential impact. In recent times, there has been much discussion in policy circles, academia and the private sector regarding the tension between blockchain and the European Union’s General Data Protection Regulation (GDPR). Indeed, many of the points of tension between blockchain and the GDPR are due to two overarching factors.

First, the GDPR is based on an underlying assumption that in relation to each personal data point there is at least one natural or legal person – the data controller – whom data subjects can address to enforce their rights under EU data protection law. These data controllers must comply with the GDPR’s obligations. Blockchains, however, are distributed databases that often seek to achieve decentralisation by replacing a unitary actor with many different players. The lack of consensus as to how (joint-)controllership ought to be defined hampers the allocation of responsibility and accountability.

Second, the GDPR is based on the assumption that data can be modified or erased where necessary to comply with legal requirements, such as Articles 16 and 17 GDPR. Blockchains, however, render the unilateral modification of data purposefully onerous in order to ensure data integrity and to increase trust in the network. Furthermore, blockchains underline the challenges of adhering to the requirements of data minimisation and purpose limitation in the current form of the data economy.
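
The mechanics behind this tension are easy to illustrate. In a typical blockchain, each block commits to a hash of its predecessor, so modifying or erasing a record (as Articles 16 and 17 may require) invalidates every later block. The toy hash chain below demonstrates the general principle; it is a sketch, not any production blockchain design.

```python
import hashlib
import json

# A toy hash chain: each block commits to its predecessor's hash, so any
# in-place modification or erasure breaks verification of all later blocks.
def block_hash(block):
    payload = json.dumps(block, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev": prev})

def verify(chain):
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
for record in ["personal data A", "personal data B", "personal data C"]:
    append(chain, record)
assert verify(chain)

# 'Erasing' the first record, as Article 17 might require, breaks the chain:
chain[0]["data"] = "[erased]"
assert not verify(chain)
```

One commonly discussed mitigation is therefore to keep personal data off-chain and store only hashes or commitments on the chain itself.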

This study examines the European data protection framework and applies it to blockchain technologies so as to document these tensions. It also highlights the fact that blockchain may help further some of the GDPR’s objectives. Concrete policy options are developed on the basis of this analysis….(More)”

How technology can enable a more sustainable agriculture industry


Matt High at CSO: “…The sector also faces considerable pressure in terms of its transparency, largely driven by shifting consumer preferences for responsibly sourced and environmentally friendly goods. The UK, for example, has seen shoppers transition away from typical agricultural commodities towards ‘free-from’ or alternative options that combine health, sustainability and quality.

It means that farmers worldwide must work harder and smarter in embedding corporate social responsibility (CSR) practices into their operations. Davis, who through Anthesis delivers financially driven sustainability strategies, strongly believes that sustainability is no longer a choice. “The agricultural sector is intrinsic to a wide range of global systems, societies and economies,” he says, adding that those organisations that do not embed sustainability best practice into their supply chains will face “increasing risk of price volatility, security of supply, commodity shortages, fraud and uncertainty.” To counter this, he urges businesses to develop CSR programmes founded on a core set of principles, enabling sustainable practices to be adopted at a pace and scale that mitigates the risks discussed.

Data is proving a particularly useful tool in this regard. Take the Cool Farm Tool, for example: a global, free-to-access online greenhouse gas (GHG), water and biodiversity footprint calculator, used by farmers in more than 115 countries to manage critical on-farm sustainability challenges. Member organisations such as Pepsi, Tesco and Danone aggregate their supply chain data to report their total agricultural footprint against key sustainability metrics—outputs from which are used to share knowledge and best practice on carbon and water reduction strategies….(More)”.
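
The aggregation described above, rolling per-farm assessments up into supply-chain totals per metric, reduces to a simple grouped sum. The sketch below is an illustration only: the record fields and function are hypothetical, not the Cool Farm Tool's actual API or methodology.

```python
from collections import defaultdict

# A sketch of the supply-chain aggregation described above. The record
# fields and function are hypothetical, not the Cool Farm Tool's actual
# API or methodology.
farm_assessments = [
    {"farm_id": "farm-001", "metric": "ghg_kg_co2e", "value": 12500.0},
    {"farm_id": "farm-001", "metric": "water_m3", "value": 8400.0},
    {"farm_id": "farm-002", "metric": "ghg_kg_co2e", "value": 9800.0},
    {"farm_id": "farm-002", "metric": "water_m3", "value": 6100.0},
]

def supply_chain_footprint(assessments):
    """Roll per-farm results up into a total per sustainability metric."""
    totals = defaultdict(float)
    for record in assessments:
        totals[record["metric"]] += record["value"]
    return dict(totals)

print(supply_chain_footprint(farm_assessments))
# {'ghg_kg_co2e': 22300.0, 'water_m3': 14500.0}
```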

Data versus Democracy


Book by Kris Shaffer: “Human attention is in the highest demand it has ever been. The drastic increase in available information has compelled individuals to find a way to sift through the media that is literally at their fingertips. Content recommendation systems have emerged as the technological solution to this social and informational problem, but they’ve also created a bigger crisis of confirming our biases by showing us only, and exactly, what they predict we want to see. Data versus Democracy investigates and explores how, in the era of social media, human cognition, algorithmic recommendation systems, and human psychology are all working together to reinforce (and exaggerate) human bias. The dangerous confluence of these factors is driving media narratives, influencing opinions, and possibly changing election results. 
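
The feedback loop Shaffer describes can be made concrete with a deliberately naive engagement-maximizing recommender: it learns from clicks and then serves, almost exclusively, whatever it predicts the user already prefers. The sketch below is a toy model, not any real platform's algorithm.

```python
import random

# A deliberately naive engagement-maximizing recommender; a toy model of
# the feedback loop described above, not any real platform's algorithm.
TOPICS = ["topic_a", "topic_b", "topic_c", "topic_d"]

def recommend(click_counts, explore_rate=0.05):
    """Mostly serve the most-clicked topic; occasionally explore."""
    if random.random() < explore_rate or not any(click_counts.values()):
        return random.choice(TOPICS)
    return max(click_counts, key=click_counts.get)

def simulate(user_bias="topic_a", rounds=1000):
    """A user who only clicks content matching their existing preference."""
    click_counts = {t: 0 for t in TOPICS}
    shown = {t: 0 for t in TOPICS}
    for _ in range(rounds):
        item = recommend(click_counts)
        shown[item] += 1
        if item == user_bias:  # clicks confirm the prior preference
            click_counts[item] += 1
    return shown

print(simulate())  # topic_a dominates what the user is shown
```

After the first few clicks, nearly everything shown confirms the user's initial preference; the small exploration rate is all that keeps other topics visible at all.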

In this book, algorithmic recommendations, clickbait, familiarity bias, propaganda, and other pivotal concepts are analyzed and then expanded upon via fascinating and timely case studies: the 2016 US presidential election, Ferguson, GamerGate, international political movements, and other events that affect every one of us. What are the implications of how we engage with information in the digital age? Data versus Democracy explores this topic and an abundance of related crucial questions. We live in a culture vastly different from any that has come before. In a society where engagement is currency, we are the product. Understanding the value of our attention, how organizations operate based on this concept, and how engagement can be used against our best interests is essential to responsibly equipping ourselves against the perils of disinformation….(More)”.

Data Management Law for the 2020s: The Lost Origins and the New Needs


Paper by Przemysław Pałka: “In the data analytics society, each individual’s disclosure of personal information imposes costs on others. This disclosure enables companies, deploying novel forms of data analytics, to infer new knowledge about other people and to use this knowledge to engage in potentially harmful activities. These harms go beyond privacy and include difficult-to-detect price discrimination, preference manipulation, and even social exclusion. Existing, individual-focused data protection regimes leave the law unable to account for these social costs or to manage them. 

This Article suggests a way out by proposing to re-conceptualize the problem of the social costs of data analytics through the new frame of “data management law.” It offers a critical comparison of the two existing models of data governance: the American “notice and choice” approach and the European “personal data protection” regime (currently expressed in the GDPR). Tracing their origin to a single report issued in 1973, the Article demonstrates how they developed differently under the influence of different ideologies (market-centered liberalism and human rights, respectively). It also shows how both ultimately failed to address the challenges outlined forty-five years ago. 

To tackle these challenges, this Article argues for three normative shifts. First, it proposes going beyond “privacy” towards “social costs of data management” as the framework for conceptualizing and mitigating the negative effects of corporate data usage. Second, it argues for going beyond individual interests to account for collective ones, and for replacing contracts with regulation as the means of creating norms governing data management. Third, it argues that the nature of decisions about these norms is political, and so political means, in place of technocratic solutions, need to be employed….(More)”.

The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism


Book by Nick Couldry: “We are told that progress requires human beings to be connected, and that science, medicine and much else that is good demand the kind of massive data collection only possible if every thing and every person are continuously connected.

But connection, and the continuous surveillance that connection makes possible, usher in an era of neocolonial appropriation. In this new era, social life becomes a direct input to capitalist production, and data – the data collected and processed when we are connected – is the means for this transformation. Hence the need to start counting the costs of connection.

Capturing and processing social data is today handled by an emerging social quantification sector. We are familiar with its leading players, from Acxiom to Equifax, from Facebook to Uber. Together, they ensure the regular and seemingly natural conversion of daily life into a stream of data that can be appropriated for value. This stream is extracted from sensors embedded in bodies and objects, and from the traces left by human interaction online. The result is a new social order based on continuous tracking, and offering unprecedented new opportunities for social discrimination and behavioral influence. This order has disturbing consequences for freedom, justice and power — indeed, for the quality of human life.

The true violence of this order is best understood through the history of colonialism. But because we assume that colonialism has been replaced by advanced capitalism, we often miss the connection. The concept of data colonialism can thus be used to trace continuities from colonialism’s historic appropriation of territories and material resources to the datafication of everyday life today. While the modes, intensities, scales and contexts of dispossession have changed, the underlying function remains the same: to acquire resources from which economic value can be extracted.

In data colonialism, data is appropriated through a new type of social relation: data relations. We are living through a time when the organization of capital and the configurations of power are changing dramatically because of this contemporary form of social relation. Data colonialism justifies what it does as an advance in scientific knowledge, personalized marketing, or rational management, just as historic colonialism claimed a civilizing mission. Data colonialism is global, dominated by powerful forces in East and West, in the USA and China. The result is a world where, wherever we are connected, we are colonized by data.

Where is data colonialism heading in the long term? Just as historical colonialism paved the way for industrial capitalism, data colonialism is paving the way for a new stage of capitalism whose outlines we only partly see: the capitalization of life without limit. There will be no part of human life, no layer of experience, that is not extractable for economic value. Human life will be there for mining by corporations without reserve as governments look on appreciatively. This process of capitalization will be the foundation for a highly unequal new social arrangement, a social order that is deeply incompatible with human freedom and autonomy.

But resistance is still possible, drawing on past and present decolonial struggles, as well as on the best of the humanities, philosophy, political economy, and information and social science. The goal is to name what is happening and to imagine better ways of living together, without the exploitation on which today’s models of ‘connection’ are founded….(More)”

This High-Tech Solution to Disaster Response May Be Too Good to Be True


Sheri Fink in The New York Times: “The company called One Concern has all the characteristics of a buzzy and promising Silicon Valley start-up: young founders from Stanford, tens of millions of dollars in venture capital and a board with prominent names.

Its particular niche is disaster response. And it markets a way to use artificial intelligence to address one of the most vexing issues facing emergency responders in disasters: figuring out where people need help in time to save them.

That promise to bring new smarts and resources to an anachronistic field has generated excitement. Arizona, Pennsylvania and the World Bank have entered into contracts with One Concern over the past year. New York City and San Jose, Calif., are in talks with the company. And a Japanese city recently became One Concern’s first overseas client.

But when T.J. McDonald, who works for Seattle’s office of emergency management, reviewed a simulated earthquake on the company’s damage prediction platform, he spotted problems. A popular big-box store was grayed out on the web-based map, meaning there was no analysis of the conditions there, and shoppers and workers who might be in danger would not receive immediate help if rescuers relied on One Concern’s results.

“If that Costco collapses in the middle of the day, there’s going to be a lot of people who are hurt,” he said.

The error? The simulation, the company acknowledged, missed many commercial areas because damage calculations relied largely on residential census data.
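
The failure mode is straightforward to picture: a damage model keyed to residential census geography has no input data for parcels without resident population, so commercial sites produce no estimate at all. The toy illustration below is a guess at the general mechanism, in no way One Concern's actual methodology; all names and numbers are hypothetical.

```python
# Toy illustration of the coverage gap: damage estimates keyed to
# residential census population leave commercial parcels unanalyzed.
# All names and numbers are hypothetical.
census_population = {"block_1": 340, "block_2": 510}  # residential only

def damage_estimate(parcel, shaking_intensity):
    if parcel not in census_population:
        return None  # no census data: the parcel is grayed out on the map
    return census_population[parcel] * shaking_intensity * 0.01

for parcel in ["block_1", "block_2", "big_box_store"]:
    estimate = damage_estimate(parcel, shaking_intensity=7.5)
    print(parcel, "->", "no analysis" if estimate is None else f"{estimate:.0f} est. affected")
```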

One Concern has marketed its products as lifesaving tools for emergency responders after earthquakes, floods and, soon, wildfires. But interviews and documents show the company has often exaggerated its tools’ abilities and has kept outside experts from reviewing its methodology. In addition, some product features are available elsewhere at no charge, and data-hungry insurance companies — whose interests can diverge from those of emergency workers — are among One Concern’s biggest investors and customers.

Some critics even suggest that shortcomings in One Concern’s approach could jeopardize lives….(More)”.