Data-Driven Regulation and Governance in Smart Cities


Chapter by Sofia Ranchordas and Abram Klop in A. Berlee, V. Mak, E. Tjong Tjin Tai (Eds), Research Handbook on Data Science and Law (Edward Elgar, 2018): “This paper discusses the concept of data-driven regulation and governance in the context of smart cities by describing how these urban centres harness data-driven technologies to collect and process information about citizens, traffic, urban planning or waste production. It describes how several smart cities throughout the world currently employ data science, big data, AI, the Internet of Things (‘IoT’), and predictive analytics to improve the efficiency of their services and decision-making.

Furthermore, this paper analyses the legal challenges of employing these technologies to influence or determine the content of local regulation and governance. It explores in particular three specific challenges: the disconnect between traditional administrative law frameworks and data-driven regulation and governance; the privatization of public services and citizen needs that results from the growing outsourcing of smart city technologies to private companies; and the limited transparency and accountability that characterize data-driven administrative processes. This paper draws on a review of interdisciplinary literature on smart cities and offers illustrations of data-driven regulation and governance practices from different jurisdictions….(More)”.

Prediction, Judgment and Complexity


NBER Working Paper by Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb: “We interpret recent developments in the field of artificial intelligence (AI) as improvements in prediction technology. In this paper, we explore the consequences of improved prediction for decision-making. To do so, we adapt existing models of decision-making under uncertainty to account for the process of determining payoffs. We label this process of determining the payoffs ‘judgment.’ There is a risky action, whose payoff depends on the state, and a safe action with the same payoff in every state. Judgment is costly; for each potential state, it requires thought on what the payoff might be. Prediction and judgment are complements as long as judgment is not too difficult. We show that in complex environments with a large number of potential states, the effect of improvements in prediction on the importance of judgment depends a great deal on whether the improvements in prediction enable automated decision-making. We discuss the implications of improved prediction in the face of complexity for automation, contracts, and firm boundaries….(More)”.
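To make the model's setup concrete, here is a minimal sketch of the kind of decision problem the abstract describes; the payoffs, probabilities, and function names are hypothetical choices for illustration, not the authors' formal model:

```python
# Toy sketch of "prediction vs. judgment" (hypothetical numbers, not the
# paper's formal model). A risky action pays differently by state; a safe
# action pays the same in every state. "Judgment" is the costly work of
# determining the payoff numbers; "prediction" sharpens the state probability.

def choose_action(p_good, risky_payoff_good, risky_payoff_bad, safe_payoff):
    """Return the action with the higher expected payoff."""
    risky_ev = p_good * risky_payoff_good + (1 - p_good) * risky_payoff_bad
    return ("risky", risky_ev) if risky_ev > safe_payoff else ("safe", safe_payoff)

# Without prediction: only the base rate of the good state is known.
print(choose_action(p_good=0.5, risky_payoff_good=10, risky_payoff_bad=-6, safe_payoff=3))
# -> ('safe', 3): the risky action's expected payoff is only 2

# With improved prediction: the decision-maker learns the good state is likely.
print(choose_action(p_good=0.9, risky_payoff_good=10, risky_payoff_bad=-6, safe_payoff=3))
# -> ('risky', 8.4)
```

In this toy version, the better prediction only pays off because the payoff values themselves have already been worked out, which is the complementarity between prediction and judgment the paper formalizes.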

No One Owns Data


Paper by Lothar Determann: “Businesses, policy makers, and scholars are calling for property rights in data. They currently focus particularly on the vast amounts of data generated by connected cars, industrial machines, artificial intelligence, toys and other devices on the Internet of Things (IoT). This data is personal to numerous parties who are associated with a connected device, for example, the driver of a connected car, its owner and passengers, as well as other traffic participants. Manufacturers, dealers, independent providers of auto parts and services, insurance companies, law enforcement agencies and many others are also interested in this data. Various parties are actively staking their claims to data on the Internet of Things, as they are mining data, the fuel of the digital economy.

Stakeholders in digital markets often frame claims, negotiations and controversies regarding data access as questions of ownership. Businesses regularly assert and demand that they own data. Individual data subjects also assume that they own data about themselves. Policy makers and scholars focus on how to redistribute ownership rights to data. Yet, upon closer review, it is very questionable whether data is—or should be—subject to any property rights. This article unambiguously answers the question in the negative, both with respect to existing law and future lawmaking, in the United States as in the European Union, jurisdictions with notably divergent attitudes to privacy, property and individual freedoms….

The article begins with a brief review of the current landscape of the Internet of Things, noting the explosive growth of data pools generated by connected devices, artificial intelligence, big data analytics tools and other information technologies. Part 1 lays the foundation for examining concrete current legal and policy challenges in the remainder of the article. Part 2 supplies conceptual differentiation and definitions with respect to “data” and “information” as the subject of rights and interests. Distinctions and definitional clarity serve as the basis for examining the purposes and reach of existing property laws in Part 3, including real property, personal property and intellectual property laws. Part 4 analyzes the effect of data-related laws that do not grant property rights. Part 5 examines how the interests of the various stakeholders are protected or impaired by the current framework of data-related laws to identify potential gaps that could warrant additional property rights. Part 6 examines policy considerations for and against property rights in data. Part 7 concludes that no one owns data and no one should own data….(More)”.

Quality of life, big data and the power of statistics


Paper by Shivam Gupta in Statistics & Probability Letters: “Quality of life (QoL) is tied to the perception of ‘meaning’. The quest for meaning is central to the human condition, and we are brought in touch with a sense of meaning when we reflect on what we have created, loved, believed in or left as a legacy (Barcaccia, 2013). QoL is associated with multi-dimensional issues and features such as environmental pressure, total water management, total waste management, noise and level of air pollution (Eusuf et al., 2014). A significant amount of data is needed to understand all these dimensions. Such knowledge is necessary to realize the vision of a smart city, which involves the use of data-driven approaches to improve the quality of life of the inhabitants and city infrastructures (Degbelo et al., 2016).

Technologies such as Radio-Frequency Identification (RFID) or the Internet of Things (IoT) are producing a large volume of data. Koh et al. (2015) pointed out that approximately 2.5 quintillion bytes of data are generated every day, and that 90 percent of the data in the world has been created in the past two years alone. Managing this large amount of data and analyzing it efficiently can help make more informed decisions while solving many societal challenges (e.g., exposure analysis, disaster preparedness, climate change). As discussed in Goodchild (2016), the attractiveness of big data can be summarized in one phrase, namely spatial prediction – the prediction of both the where and when.

This article focuses on the 5Vs of big data (volume, velocity, variety, value, veracity). The challenges associated with big data in the context of environmental monitoring at a city level are briefly presented in Section 2. Section 3 discusses the use of statistical methods like Land Use Regression (LUR) and Spatial Simulated Annealing (SSA) as two promising ways of addressing the challenges of big data….(More)”.
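To illustrate the first of those methods, the sketch below fits a Land Use Regression on synthetic data: pollutant concentrations measured at monitoring sites are regressed on land-use predictors, and the fitted model then predicts air quality at an unmonitored location. The predictors, coefficients, and pollutant are hypothetical placeholders, not values from the paper:

```python
# Minimal Land Use Regression (LUR) sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical land-use predictors at 50 monitoring sites: nearby road length,
# traffic intensity, population density (all rescaled to [0, 1]).
X = rng.uniform(0, 1, size=(50, 3))

# Synthetic NO2 concentrations (ug/m3) driven by those predictors plus noise.
y = 20 + 15 * X[:, 0] + 10 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 2, size=50)

lur = LinearRegression().fit(X, y)

# Predict the concentration at an unmonitored site from its land-use profile.
new_site = np.array([[0.8, 0.6, 0.3]])
print(f"Predicted NO2: {lur.predict(new_site)[0]:.1f} ug/m3")
```

The appeal for city-scale monitoring is that the land-use predictors are available everywhere, so a model calibrated on a handful of sensor sites can map pollution across the whole city.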

Do Academic Journals Favor Researchers from Their Own Institutions?


Yaniv Reingewertz and Carmela Lutmar at Harvard Business Review: “Are academic journals impartial? While many would suggest that academic journals work for the advancement of knowledge and science, we show this is not always the case. In a recent study, we find that two international relations (IR) journals favor articles written by authors who share the journal’s institutional affiliation. We term this phenomenon “academic in-group bias.”

In-group bias is a well-known phenomenon that is widely documented in the psychological literature. People tend to favor their group, whether it is their close family, their hometown, their ethnic group, or any other group affiliation. Before our study, the evidence regarding academic in-group bias was scarce, with only one study finding academic in-group bias in law journals. Studies from economics found mixed results. Our paper provides evidence of academic in-group bias in IR journals, showing that this phenomenon is not specific to law. We also provide tentative evidence that could help resolve the conflicting results in economics, suggesting that those journals might also exhibit in-group bias. In short, we show that academic in-group bias is general in nature, even if not necessarily large in scope….(More)”.

Online Political Microtargeting: Promises and Threats for Democracy


Frederik Zuiderveen Borgesius et al in Utrecht Law Review: “Online political microtargeting involves monitoring people’s online behaviour and using the collected data, sometimes enriched with other data, to show people targeted political advertisements. Online political microtargeting is widely used in the US; Europe may not be far behind.

This paper maps microtargeting’s promises and threats to democracy. For example, microtargeting promises to optimise the match between the electorate’s concerns and political campaigns, and to boost campaign engagement and political participation. But online microtargeting could also threaten democracy. For instance, a political party could, misleadingly, present itself as a different one-issue party to different individuals. And data collection for microtargeting raises privacy concerns. We sketch possibilities for policymakers who seek to regulate online political microtargeting, and discuss which measures would be possible while complying with the right to freedom of expression under the European Convention on Human Rights….(More)”.

Big data and food retail: Nudging out citizens by creating dependent consumers


Michael Carolan at Geoforum: “The paper takes a critical look at how food retail firms use big data, looking specifically at how these techniques and technologies govern our ability to imagine food worlds. It does this by drawing on two sets of data: (1) interviews with twenty-one individuals who oversaw the use of big data applications in a retail setting and (2) five consumer focus groups composed of individuals who regularly shopped at major food chains along Colorado’s Front Range.

For reasons described below, the “nudge” provides the conceptual entry point for this analysis, as these techniques are typically expressed through big data-driven nudges. The argument begins by describing the nudge concept and how it is used in the context of retail big data. This is followed by a discussion of methods.

The remainder of the paper discusses how big data are used to nudge consumers and the effects of these practices. This analysis is organized around three themes that emerged out of the qualitative data: path dependency, products; path dependency, retail; and path dependency, habitus. The paper concludes by connecting these themes through the concept of governance, particularly by way of their ability to, in Foucault’s (2003: 241) words, have “the power to ‘make’ live and ‘let’ die” worlds….(More)”.

The future of statistics and data science


Paper by Sofia C. Olhede and Patrick J. Wolfe in Statistics & Probability Letters: “The Danish physicist Niels Bohr is said to have remarked: “Prediction is very difficult, especially about the future”. Predicting the future of statistics in the era of big data is not so very different from prediction about anything else. Ever since we started to collect data to predict cycles of the moon, seasons, and hence future agriculture yields, humankind has worked to infer information from indirect observations for the purpose of making predictions.

Even while acknowledging the momentous difficulty in making predictions about the future, a few topics stand out clearly as lying at the current and future intersection of statistics and data science. Not all of these topics are of a strictly technical nature, but all have technical repercussions for our field. How might these repercussions shape the still relatively young field of statistics? And what can sound statistical theory and methods bring to our understanding of the foundations of data science? In this article we discuss these issues and explore how new open questions motivated by data science may in turn necessitate new statistical theory and methods now and in the future.

Together, the ubiquity of sensing devices, the low cost of data storage, and the commoditization of computing have led to a volume and variety of modern data sets that would have been unthinkable even a decade ago. We see four important implications for statistics.

First, many modern data sets are related in some way to human behavior. Data might have been collected by interacting with human beings, or personal or private information traceable back to a given set of individuals might have been handled at some stage. Mathematical or theoretical statistics traditionally does not concern itself with the finer points of human behavior, and indeed many of us have only had limited training in the rules and regulations that pertain to data derived from human subjects. Yet inevitably in a data-rich world, our technical developments cannot be divorced from the types of data sets we can collect and analyze, and how we can handle and store them.

Second, the importance of data to our economies and civil societies means that the future of regulation will look not only to protect our privacy, and how we store information about ourselves, but also to include what we are allowed to do with that data. For example, as we collect high-dimensional vectors about many family units across time and space in a given region or country, privacy will be limited by that high-dimensional space, but our wish to control what we do with data will go beyond that….

Third, the growing complexity of algorithms is matched by an increasing variety and complexity of data. Data sets now come in a variety of forms that can be highly unstructured, including images, text, sound, and various other new forms. These different types of observations have to be understood together, resulting in multimodal data, in which a single phenomenon or event is observed through different types of measurement devices. Rather than having one phenomenon correspond to a single scalar value, a much more complex object is typically recorded. This could be a three-dimensional shape, for example in medical imaging, or multiple types of recordings such as functional magnetic resonance imaging and simultaneous electroencephalography in neuroscience. Data science therefore challenges us to describe these more complex structures, modeling them in terms of their intrinsic patterns.
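As a toy illustration of what such a multimodal record might look like as a data structure, the snippet below bundles two simultaneous measurements of one event into a single typed object; the modality names and array shapes are invented for the example:

```python
# A toy multimodal observation: one event, two measurement devices.
from dataclasses import dataclass
import numpy as np

@dataclass
class MultimodalRecording:
    subject_id: str
    fmri: np.ndarray  # e.g. a time series of 3-D volumes: (x, y, z, time)
    eeg: np.ndarray   # e.g. channels x samples, at much finer time resolution

rec = MultimodalRecording(
    subject_id="s01",
    fmri=np.zeros((64, 64, 30, 200)),  # 200 volumes of 64 x 64 x 30 voxels
    eeg=np.zeros((32, 50_000)),        # 32 channels, 50,000 samples
)
print(rec.fmri.shape, rec.eeg.shape)
```

Modeling the two modalities jointly, rather than reducing each to a scalar summary, is precisely the challenge the authors point to.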

Finally, the types of data sets we now face are far from satisfying the classical statistical assumptions of identically distributed and independent observations. Observations are often “found” or repurposed from other sampling mechanisms, rather than necessarily resulting from designed experiments….

Our field will either meet these challenges and become increasingly ubiquitous, or risk rapidly becoming irrelevant to the future of data science and artificial intelligence….(More)”.

Who Killed Albert Einstein? From Open Data to Murder Mystery Games


Gabriella A. B. Barros et al at arXiv: “This paper presents a framework for generating adventure games from open data. Focusing on the murder mystery type of adventure games, the generator is able to transform open data from Wikipedia articles, OpenStreetMap and images from Wikimedia Commons into WikiMysteries. Every WikiMystery game revolves around the murder of a person with a Wikipedia article, and populates the game with suspects who must be arrested by the player if guilty of the murder or absolved if innocent. Starting from only one person as the victim, an extensive generative pipeline finds suspects, their alibis, and paths connecting them from open data, and transforms open data into cities, buildings, non-player characters, locks and keys, and dialog options. The paper describes in detail each generative step, provides a specific playthrough of one WikiMystery where Albert Einstein is murdered, and evaluates the outcomes of games generated for the 100 most influential people of the 20th century….(More)”.
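To give a feel for the pipeline's shape, here is a self-contained toy version of its core step: starting from a victim, pick a culprit among linked people and give everyone else an alibi. All names, data, and function names below are hard-coded stand-ins; the actual generator mines Wikipedia, OpenStreetMap and Wikimedia Commons:

```python
# Toy sketch of the murder-mystery generation step (illustrative only).
import random

def generate_mystery(victim, linked_people, seed=42):
    rng = random.Random(seed)
    culprit = rng.choice(linked_people)  # exactly one suspect is guilty
    suspects = [
        {
            "name": person,
            "guilty": person == culprit,
            # Innocent suspects receive an alibi; the culprit has none.
            "alibi": None if person == culprit else f"{person} was seen elsewhere",
        }
        for person in linked_people
    ]
    return {"victim": victim, "suspects": suspects}

mystery = generate_mystery(
    victim="Albert Einstein",
    linked_people=["Niels Bohr", "Kurt Goedel", "Mileva Maric"],
)
for s in mystery["suspects"]:
    print(s["name"], "- guilty!" if s["guilty"] else f"- alibi: {s['alibi']}")
```

The paper's actual pipeline layers much more on top of this skeleton, grounding suspects, alibis, locations and dialog in real open data rather than hard-coded lists.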

Small Data for Big Impact


Liz Luckett at the Stanford Social Innovation Review: “As an investor in data-driven companies, I’ve been thinking a lot about my grandfather—a baker, a small business owner, and, I now realize, a pioneering data scientist. Without much more than pencil, paper, and extraordinarily deep knowledge of his customers in Washington Heights, Manhattan, he bought, sold, and managed inventory while also managing risk. His community was poor, but his business prospered. This was not because of what we celebrate today as the power and predictive promise of big data, but rather because of what I call small data: nuanced market insights that come through regular and trusted interactions.

Big data takes into account volumes of information from largely electronic sources—such as credit cards, pay stubs, test scores—and segments people into groups. As a result, people participating in the formalized economy benefit from big data. But people who are paid in cash and have no recognized accolades, such as higher education, are left out. Small data captures the insights needed to address this market failure. My grandfather, for example, had critical customer information he carefully gathered over the years: who could pay now, who needed a few days more, and which tabs to close. If he had access to a big data algorithm, it likely would have told him all his clients were unlikely to repay him, based on the fact that they had low incomes (vs. high incomes) and low education levels (vs. college degrees). Today, I worry that in our enthusiasm for big data and aggregated predictions, we often lose the critical insights we can gain from small data, because we don’t collect it. In the process, we are missing vital opportunities to both make money and create economic empowerment.

We won’t solve this problem of big data by returning to my grandfather’s shop floor. What we need is more and better data—a small data movement to supply vital missing links in marketplaces and supply chains the world over. What are the proxies that allow large companies to discern who among the low income are good customers in the absence of a shopkeeper? At The Social Entrepreneurs’ Fund (TSEF), we are profitably investing in a new breed of data company: enterprises that are intentionally and responsibly serving low-income communities, and generating new and unique insights about the behavior of individuals in the process. The value of the small data they collect is becoming increasingly useful to other partners, including corporations who are willing to pay for it. It is a kind of dual market opportunity that for the first time makes it economically advantageous for these companies to reach the poor. We are betting on small data to transform opportunities and quality of life for the underserved, tap into markets that were once seen as too risky or too costly to reach, and earn significant returns for investors….(More)”.