Introduction to Special Issue by Alina Wernick and Anna Artyushina: “While the GDPR and other EU laws seek to mitigate a range of potential harms associated with smart cities, compliance with and enforceability of these regulations remain open issues. In addition, these proposed regulations do not sufficiently address the collective harms associated with the deployment of biometric technologies and artificial intelligence. Another relevant question is whether the initiatives put forward to secure fundamental human rights in the digital realm account for the issues brought on by the deployment of technologies in city spaces. In this special issue, we employ the smart city notion as a point of connection for interdisciplinary research on the human rights implications of algorithmic, biometric, and smart city technologies and the policy responses to them. The articles included in the special issue analyse the latest European regulations and soft law, as well as the policy frameworks currently at work in regions where the GDPR does not apply…(More)”.
Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models
Paper by Shaolei Ren, Pengfei Li, Jianyi Yang, and Mohammad A. Islam: “The growing carbon footprint of artificial intelligence (AI) models, especially large ones such as GPT-3 and GPT-4, has come under public scrutiny. However, the equally important and enormous water footprint of AI models has remained under the radar. For example, training GPT-3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough for producing 370 BMW cars or 320 Tesla electric vehicles), and the water consumption would have tripled had training been done in Microsoft’s Asian data centers; yet such information has been kept secret. This is extremely concerning, as freshwater scarcity has become one of the most pressing challenges we all share amid rapid population growth, depleting water resources, and aging water infrastructure. To respond to the global water challenges, AI models can, and should, take social responsibility and lead by example by addressing their own water footprint. In this paper, we provide a principled methodology to estimate the fine-grained water footprint of AI models, and we also discuss the unique spatial-temporal diversities of AI models’ runtime water efficiency. Finally, we highlight the necessity of holistically addressing water footprint along with carbon footprint to enable truly sustainable AI…(More)”.
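The full paper details the estimation methodology; as a rough illustration of the kind of accounting involved, the sketch below combines on-site cooling water (via a Water Usage Effectiveness factor, WUE) with the off-site water embedded in electricity generation (via Power Usage Effectiveness, PUE, and an energy-water intensity factor, EWIF). This is a minimal sketch under assumed parameter values, not the paper’s estimates.

```python
# A minimal sketch of an operational water-footprint estimate for model
# training, in the spirit of the accounting the abstract describes.
# All parameter values below are illustrative assumptions, not the paper's.

def training_water_footprint_liters(
    energy_kwh: float,    # server energy used for training
    wue_onsite: float,    # on-site Water Usage Effectiveness (L/kWh), cooling
    pue: float,           # Power Usage Effectiveness of the data center
    ewif_offsite: float,  # off-site water intensity of electricity (L/kWh)
) -> float:
    """Direct (on-site cooling) plus indirect (electricity generation) water."""
    onsite = energy_kwh * wue_onsite
    offsite = energy_kwh * pue * ewif_offsite
    return onsite + offsite

# Hypothetical example: 1.3 GWh of training energy with assumed efficiencies.
liters = training_water_footprint_liters(
    energy_kwh=1_300_000, wue_onsite=0.55, pue=1.1, ewif_offsite=3.1,
)
print(f"Estimated water footprint: {liters:,.0f} liters")
```

Because WUE and EWIF vary by location and season, the same training run can have very different water footprints depending on where and when it is scheduled, which is the spatial-temporal diversity the paper highlights.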
Slow-governance in smart cities: An empirical study of smart intersection implementation in four US college towns
Paper by Madelyn Rose Sanfilippo and Brett Frischmann: “Cities cannot adopt supposedly smart technological systems and protect human rights without developing appropriate data governance, because technologies are not value-neutral. This paper proposes a deliberative, slow-governance approach to smart tech in cities. Inspired by the Governing Knowledge Commons (GKC) framework and past case studies, we empirically analyse the adoption of smart intersection technologies in four US college towns to evaluate and extend knowledge commons governance approaches to address human rights concerns. Our proposal consists of a set of questions that should guide community decision-making, extending the GKC framework by incorporating human-rights impact assessments and capabilities approaches to human rights. We argue that such a deliberative, slow-governance approach enables adaptation to local norms and more appropriate community governance of smart tech in cities. By asking and answering key questions throughout smart city planning, procurement, implementation and management processes, cities can respect human rights, interests and expectations…(More)”.
The Rule of Law
Paper by Cass R. Sunstein: “The concept of the rule of law is invoked for purposes that are both numerous and diverse, and that concept is often said to overlap with, or to require, an assortment of other practices and ideals, including democracy, free elections, free markets, property rights, and freedom of speech. It is best to understand the concept in a more specific way, with a commitment to seven principles: (1) clear, general, publicly accessible rules laid down in advance; (2) prospectivity rather than retroactivity; (3) conformity between law on the books and law in the world; (4) hearing rights; (5) some degree of separation between (a) law-making and law enforcement and (b) interpretation of law; (6) no unduly rapid changes in the law; and (7) no contradictions or palpable inconsistency in the law. This account of the rule of law conflicts with those offered by (among many others) Friedrich Hayek and Morton Horwitz, who conflate the idea with other, quite different ideas and practices. Of course it is true that the seven principles can be specified in different ways, broadly compatible with the goal of describing the rule of law as a distinct concept, and some of the seven principles might be understood to be more fundamental than others…(More)”.
No Ground Truth? No Problem: Improving Administrative Data Linking Using Active Learning and a Little Bit of Guile
Paper by Sarah Tahamont et al.: “While linking records across large administrative datasets (“big data”) has the potential to revolutionize empirical social science research, many administrative data files do not have common identifiers and are thus not designed to be linked to one another. To address this problem, researchers have developed probabilistic record linkage algorithms that use statistical patterns in identifying characteristics to perform linking tasks. Naturally, the accuracy of a candidate linking algorithm can be substantially improved when it has access to “ground-truth” examples: matches that can be validated using institutional knowledge or auxiliary data. Unfortunately, the cost of obtaining these examples is typically high, often requiring a researcher to manually review pairs of records to make an informed judgment about whether they are a match. When a pool of ground-truth information is unavailable, researchers can use “active learning” algorithms for linking, which ask the user to provide ground-truth labels for select candidate pairs. In this paper, we investigate the value of providing ground-truth examples via active learning for linking performance. We confirm the popular intuition that data linking can be dramatically improved by the availability of ground-truth examples. But critically, in many real-world applications, only a relatively small number of tactically selected ground-truth examples are needed to obtain most of the achievable gains. With a modest investment in ground truth, researchers can approximate the performance of a supervised learning algorithm that has access to a large database of ground-truth examples using a readily available off-the-shelf tool…(More)”.
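To make the active-learning loop concrete, here is a minimal sketch using uncertainty sampling: the model is fit on the labeled pairs so far, and the user (the “oracle”) is asked to label the candidate pair the model is least sure about. The feature choices, toy oracle, and all names are illustrative assumptions; the paper’s off-the-shelf tooling and data differ.

```python
# A minimal sketch of active learning for record linkage via uncertainty
# sampling. Not the authors' tool; an illustration of the general technique.
import difflib
import numpy as np
from sklearn.linear_model import LogisticRegression

def similarity_features(rec_a: dict, rec_b: dict) -> list:
    """One fuzzy-similarity score per identifying field."""
    return [difflib.SequenceMatcher(None, rec_a[f], rec_b[f]).ratio()
            for f in ("name", "dob")]

def active_linkage(pairs, oracle, n_queries=10):
    """pairs: list of (rec_a, rec_b); oracle(pair) -> 0/1 ground-truth label."""
    X = np.array([similarity_features(a, b) for a, b in pairs])
    # Seed labels until both classes are present (the classifier needs two).
    y = {}
    for i in range(len(pairs)):
        y[i] = oracle(pairs[i])
        if len(set(y.values())) == 2:
            break
    model = LogisticRegression()
    for _ in range(n_queries):
        labeled = sorted(y)
        model.fit(X[labeled], [y[i] for i in labeled])
        unlabeled = [i for i in range(len(pairs)) if i not in y]
        if not unlabeled:
            break
        probs = model.predict_proba(X)[:, 1]
        # Query the candidate pair the model is least certain about.
        i_star = min(unlabeled, key=lambda i: abs(probs[i] - 0.5))
        y[i_star] = oracle(pairs[i_star])
    return model

# Toy usage with hypothetical records and a trivial stand-in oracle.
a = {"name": "Jon Smith", "dob": "1990-01-01"}
b = {"name": "John Smith", "dob": "1990-01-01"}
c = {"name": "Mary Jones", "dob": "1985-05-05"}
model = active_linkage([(a, b), (a, c), (b, c)],
                       oracle=lambda p: int(p[0]["dob"] == p[1]["dob"]))
```

The paper’s core finding maps onto `n_queries`: a small budget of well-chosen queries captures most of the gain of a large labeled database.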
Valuing the U.S. Data Economy Using Machine Learning and Online Job Postings
Paper by J Bayoán Santiago Calderón and Dylan Rassier: “With the recent proliferation of data collection and use in the digital economy, the understanding and statistical treatment of data stocks and flows are of interest to compilers and users of national economic accounts. In this paper, we measure the value of own-account data stocks and flows for the U.S. business sector by summing the production costs of data-related activities implicit in occupations. Our method augments the traditional sum-of-costs methodology for measuring other own-account intellectual property products in national economic accounts by proxying occupation-level time-use factors with a machine learning model and the text of online job advertisements (Blackburn 2021). In our experimental estimates, we find that annual current-dollar investment in own-account data assets for the U.S. business sector grew from $84 billion in 2002 to $186 billion in 2021, an average annual growth rate of 4.2 percent. Cumulative current-dollar investment for the period 2002–2021 was $2.6 trillion. In addition to the annual current-dollar investment, we present historical-cost net stocks, real growth rates, and effects on value added by industrial sector…(More)”.
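As a toy illustration of the augmented sum-of-costs idea, the sketch below scales occupation wage bills by data-related time-use shares; in the paper, those shares are proxied by a machine learning model applied to job-ad text. All occupations, figures, and the markup factor here are hypothetical, not the paper’s inputs.

```python
# A minimal sketch of the sum-of-costs idea: own-account data investment as
# occupation wage bills scaled by data-related time-use shares. The shares
# would come from an ML model over job-ad text; figures here are hypothetical.

occupations = [
    # (occupation, employment, mean annual wage USD, predicted data-time share)
    ("database administrators", 100_000, 95_000, 0.60),
    ("statisticians",            45_000, 98_000, 0.45),
    ("marketing analysts",      700_000, 74_000, 0.15),
]

NON_WAGE_MARKUP = 1.5  # assumed adjustment for overhead and non-labor costs

wage_costs = sum(emp * wage * share for _, emp, wage, share in occupations)
investment = wage_costs * NON_WAGE_MARKUP
print(f"Illustrative own-account data investment: ${investment / 1e9:.1f} billion")
```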
Using the future wheel methodology to assess the impact of open science in the transport sector
Paper by Anja Fleten Nielsen et al.: “Open Science enhances information sharing and makes scientific results of transport research more transparent and accessible at all levels and to everyone, supporting integrity and reproducibility. However, what future impacts will Open Science have on societal, environmental and economic development within the transport sector? Using the Future Wheel methodology, we conducted a workshop with transport experts from both industry and academia to answer this question. The main findings of this study point in the same direction as previous studies in other fields: increased innovation, increased efficiency, economic savings, more equality, and increased participation of citizens. In addition, we found several potential transport-specific impacts: lower emissions, faster travel times, improved traffic safety, increased awareness of transport policies, and artificial intelligence improving mobility services. The expert group also identified several potential negative outcomes of Open Science: job loss, new types of risks, increased costs, increased conflicts, time delays, increased inequality, and increased energy consumption. Knowing the negative outcomes makes it much easier to put in place strategies that are sustainable for a broader stakeholder group, which also increases the probability of taking advantage of all the positive impacts of Open Science…(More)”
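For readers unfamiliar with the method, a Future Wheel radiates first-order and then second-order consequences outward from a central trend. The sketch below encodes such a wheel using the impacts named in the abstract; which impacts sit at which order is an illustrative assumption, not the workshop’s actual result.

```python
# A minimal sketch of a Future Wheel as nested consequence layers, populated
# with the impacts listed in the abstract. Their placement on the wheel is an
# illustrative assumption.

future_wheel = {
    "trend": "Open Science in transport research",
    "first_order": {
        "increased innovation": ["faster travel times",
                                 "AI-improved mobility services"],
        "increased efficiency": ["economic savings", "lower emissions"],
        "increased participation": ["awareness of transport policies",
                                    "more equality"],
    },
    "negative": ["job loss", "new types of risks", "increased costs",
                 "increased conflicts", "time delays", "increased inequality",
                 "increased energy consumption"],
}

def print_wheel(wheel: dict) -> None:
    """Render the wheel as an indented outline: trend, ring one, ring two."""
    print(wheel["trend"])
    for first, seconds in wheel["first_order"].items():
        print(f"  + {first}")
        for second in seconds:
            print(f"      -> {second}")
    for negative in wheel["negative"]:
        print(f"  - {negative}")

print_wheel(future_wheel)
```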
Data in design: How big data and thick data inform design thinking projects
Paper by Marzia Mortati, Stefano Magistretti, Cabirio Cautela, and Claudio Dell’Era: “Scholars and practitioners have recognized that making innovation happen today requires renewed approaches focused on agility, dynamicity, and other organizational capabilities that enable firms to cope with uncertainty and complexity. In turn, the literature has shown that design thinking is a useful methodology for coping with ill-defined and wicked problems. In this study, we address the little-known role of different types of data in innovation projects characterized by ill-defined problems that require creativity to solve. Considering data rooted in qualitative observation (thick data) and in quantitative analyses (big data), we investigate the role of data in eight design thinking projects dealing with ill-defined and wicked problems. Our findings highlight the practical and theoretical implications of eight practices that make use of big and thick data in different ways, informing academics and practitioners on how different types of data are utilized in design thinking projects and the related principles and practices…(More)”.
Data Cooperatives as Catalysts for Collaboration, Data Sharing, and the (Trans)Formation of the Digital Commons
Paper by Michael Max Bühler et al.: “Network effects, economies of scale, and lock-in effects increasingly lead to a concentration of digital resources and capabilities, hindering the free and equitable development of digital entrepreneurship (SDG9), new skills, and jobs (SDG8), especially in small communities (SDG11) and their small and medium-sized enterprises (“SMEs”). To ensure the affordability and accessibility of technologies, promote digital entrepreneurship and community well-being (SDG3), and protect digital rights, we propose data cooperatives [1,2] as a vehicle for secure, trusted, and sovereign data exchange [3,4]. In post-pandemic times, community/SME-led cooperatives can play a vital role by ensuring that the supply chains supporting digital commons are uninterrupted, resilient, and decentralized [5]. Digital commons and data sovereignty provide communities with affordable and easy access to information and the ability to collectively negotiate data-related decisions. Moreover, cooperative commons (a) provide access to the infrastructure that underpins the modern economy, (b) preserve property rights, and (c) ensure that privatization and monopolization do not further erode self-determination, especially in a world increasingly mediated by AI. Thus, governance plays a significant role in accelerating communities’/SMEs’ digital transformation and addressing their challenges. Cooperatives thrive on digital governance and standards, such as open, trusted Application Programming Interfaces (APIs), that increase the efficiency, technological capabilities, and capacities of participants and, most importantly, integrate, enable, and accelerate the digital transformation of SMEs. This policy paper presents and discusses several transformative use cases for cooperative data governance. The use cases demonstrate how platform/data cooperatives and their novel value creation can be leveraged to take digital commons and value chains to a new level of collaboration while addressing the most pressing community issues. The proposed framework for a digital, federated, and sovereign reference architecture will create a blueprint for sustainable development in both the Global South and North…(More)”
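As a toy illustration of what member-governed data exchange can look like at the code level, the sketch below models members who deposit records and grant or revoke purpose-specific consent, with a collectively set minimum pool size guarding queries. All class and method names are hypothetical; a real cooperative would sit behind authenticated, audited open APIs with legal governance around it.

```python
# A minimal sketch of consent-based access in a data cooperative. Names and
# the policy rule are hypothetical illustrations, not the paper's design.

class DataCooperative:
    def __init__(self, min_consenting_members: int = 3):
        # Collectively set policy: queries need a minimum consenting pool,
        # a crude guard against singling out individual members.
        self.min_consenting_members = min_consenting_members
        self.records = {}  # member_id -> data record
        self.consent = {}  # member_id -> set of approved purposes

    def deposit(self, member_id: str, record: dict) -> None:
        self.records[member_id] = record
        self.consent.setdefault(member_id, set())

    def grant(self, member_id: str, purpose: str) -> None:
        self.consent[member_id].add(purpose)

    def revoke(self, member_id: str, purpose: str) -> None:
        self.consent[member_id].discard(purpose)

    def query(self, purpose: str) -> list:
        """Return records only from members who consented to this purpose."""
        pool = [self.records[m] for m, purposes in self.consent.items()
                if purpose in purposes]
        if len(pool) < self.min_consenting_members:
            raise PermissionError("not enough consenting members for this purpose")
        return pool

# Hypothetical usage: a query succeeds only once enough members consent.
coop = DataCooperative(min_consenting_members=2)
for member in ("sme_a", "sme_b"):
    coop.deposit(member, {"owner": member})
    coop.grant(member, "supply-chain-benchmarking")
print(len(coop.query("supply-chain-benchmarking")), "records shared")
```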
The disarming simplicity of wicked problems: The biography of an idea
Paper by Niraj Verma: “The idea of “wicked problems” indicates the intractability and dilemmatic nature of design and planning. At the same time, it also encourages the development of design methods and information systems. So how do designers, technologists, and administrators reconcile and respond to these competing ideas? Using William James’s “psychology of truth,” the paper answers this question by putting wicked problems in intellectual relief. It also suggests that as long as pluralism, diversity, and interdisciplinary thinking are in good currency, the idea of wicked problems will retain its popularity, appeal, and usefulness…(More)”.