Paper by Annette Boaz and Kathryn Oliver: “Articulating the research priorities of government is one mechanism for promoting the production of relevant research to inform policy. This study focuses on the Areas of Research Interest (ARIs) produced and published by government departments in the UK. Through a qualitative study consisting of interviews with 25 researchers, civil servants, intermediaries and research funders, the authors explored the role of ARIs. Using the concept of boundary objects, the paper considers the ways in which ARIs are used and how they are supported by boundary practices and boundary workers, including through engagement opportunities. The paper addresses the following questions: What boundaries do ARIs cross, intended and otherwise? What characteristics of ARIs enable or hinder this boundary-crossing? And what resources, skills, work or conditions are required for this boundary-crossing to work well? We see the ARIs being used as a boundary object across multiple boundaries, with implications for the ways in which the ARIs are crafted and shared. In the application of ARIs in the UK policy context, we see a constant interplay between boundary objects, practices and people, all operating within the confines of existing systems and processes. For example, understanding what was meant by a particular ARI sometimes involved ‘decoding’ work as part of the academic-policy engagement process. While ARIs have an important role to play, they are no magic bullet. Nor do they tell the whole story of governmental research interests. Optimizing the use of research in policy making requires the galvanisation of a range of mechanisms, including ARIs…(More)”.
A Genealogy of Open
Paper by Betsy Yoon: “The term open has become a familiar part of library and education practice and discourse, with open source software being a common referent. However, the conditions surrounding the emergence of the open source movement are not well understood within librarianship. After identifying capitalism and neoliberalism as structures that shape library and open practice, this article contextualizes the term open by delineating the discursive struggle within the free software movement that led to the emergence of the open source movement. An understanding of the genealogy of open can lend clarity to many of the contradictions that have been grappled with in the literature, such as what open means, whether it supports social justice aims, and its relation to neoliberal and capitalist structures. The article concludes by inquiring into how librarianship and open can reframe practices that are typically oriented toward mitigation and survival to encompass an orientation toward life and flourishing…(More)”.
Future-proofing the city: A human rights-based approach to governing algorithmic, biometric and smart city technologies
Introduction to Special Issue by Alina Wernick and Anna Artyushina: “While the GDPR and other EU laws seek to mitigate a range of potential harms associated with smart cities, compliance with and enforceability of these regulations remain an issue. In addition, these proposed regulations do not sufficiently address the collective harms associated with the deployment of biometric technologies and artificial intelligence. Another relevant question is whether the initiatives put forward to secure fundamental human rights in the digital realm account for the issues brought on by the deployment of technologies in city spaces. In this special issue, we employ the smart city notion as a point of connection for interdisciplinary research on the human rights implications of the algorithmic, biometric and smart city technologies and the policy responses to them. The articles included in the special issue analyse the latest European regulations as well as soft law, and the policy frameworks that are currently at work in the regions where the GDPR does not apply…(More)”.
Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models
Paper by Shaolei Ren, Pengfei Li, Jianyi Yang, and Mohammad A. Islam: “The growing carbon footprint of artificial intelligence (AI) models, especially large ones such as GPT-3 and GPT-4, has been undergoing public scrutiny. Unfortunately, however, the equally important and enormous water footprint of AI models has remained under the radar. For example, training GPT-3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough for producing 370 BMW cars or 320 Tesla electric vehicles), and the water consumption would have been tripled if training were done in Microsoft’s Asian data centers, but such information has been kept secret. This is extremely concerning, as freshwater scarcity has become one of the most pressing challenges shared by all of us in the wake of the rapidly growing population, depleting water resources, and aging water infrastructures. To respond to the global water challenges, AI models can, and also should, take social responsibility and lead by example by addressing their own water footprint. In this paper, we provide a principled methodology to estimate the fine-grained water footprint of AI models, and also discuss the unique spatial-temporal diversities of AI models’ runtime water efficiency. Finally, we highlight the necessity of holistically addressing water footprint along with carbon footprint to enable truly sustainable AI…(More)”.
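For readers who want a feel for the arithmetic involved, a minimal sketch follows. It assumes illustrative values for server energy, on-site Water Usage Effectiveness (WUE), Power Usage Effectiveness (PUE), and the off-site water intensity of electricity generation (EWIF); the split into on-site (cooling) and off-site (electricity) water is a common accounting convention, and none of the numbers or function names below are taken from the paper.

```python
# Illustrative sketch (not the authors' exact methodology): estimating the
# operational water footprint of an AI training run from its energy use.
# Assumed inputs: server energy (kWh), on-site Water Usage Effectiveness (WUE),
# Power Usage Effectiveness (PUE), and the off-site water intensity of the
# electricity grid (EWIF, liters per kWh). All numbers below are placeholders.

def water_footprint_liters(server_energy_kwh: float,
                           wue_onsite: float = 0.55,    # L per kWh of server energy (placeholder)
                           pue: float = 1.2,            # facility energy / server energy (placeholder)
                           ewif_offsite: float = 3.1    # L per kWh of grid electricity (placeholder)
                           ) -> dict:
    """Split the footprint into on-site (cooling) and off-site (generation) water."""
    onsite = server_energy_kwh * wue_onsite
    offsite = server_energy_kwh * pue * ewif_offsite
    return {"onsite_liters": onsite,
            "offsite_liters": offsite,
            "total_liters": onsite + offsite}


if __name__ == "__main__":
    # Hypothetical 1 GWh training run, purely for illustration.
    print(water_footprint_liters(1_000_000))
```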
Slow-governance in smart cities: An empirical study of smart intersection implementation in four US college towns
Paper by Madelyn Rose Sanfilippo and Brett Frischmann: “Cities cannot adopt supposedly smart technological systems and protect human rights without developing appropriate data governance, because technologies are not value-neutral. This paper proposes a deliberative, slow-governance approach to smart tech in cities. Inspired by the Governing Knowledge Commons (GKC) framework and past case studies, we empirically analyse the adoption of smart intersection technologies in four US college towns to evaluate and extend knowledge commons governance approaches to address human rights concerns. Our proposal consists of a set of questions that should guide community decision-making, extending the GKC framework via an incorporation of human-rights impact assessments and a consideration of capabilities approaches to human rights. We argue that such a deliberative, slow-governance approach enables adaptation to local norms and more appropriate community governance of smart tech in cities. By asking and answering key questions throughout smart city planning, procurement, implementation and management processes, cities can respect human rights, interests and expectations…(More)”.
The Rule of Law
Paper by Cass R. Sunstein: “The concept of the rule of law is invoked for purposes that are both numerous and diverse, and that concept is often said to overlap with, or to require, an assortment of other practices and ideals, including democracy, free elections, free markets, property rights, and freedom of speech. It is best to understand the concept in a more specific way, with a commitment to seven principles: (1) clear, general, publicly accessible rules laid down in advance; (2) prospectivity rather than retroactivity; (3) conformity between law on the books and law in the world; (4) hearing rights; (5) some degree of separation between (a) law-making and law enforcement and (b) interpretation of law; (6) no unduly rapid changes in the law; and (7) no contradictions or palpable inconsistency in the law. This account of the rule of law conflicts with those offered by (among many others) Friedrich Hayek and Morton Horwitz, who conflate the idea with other, quite different ideas and practices. Of course it is true that the seven principles can be specified in different ways, broadly compatible with the goal of describing the rule of law as a distinct concept, and some of the seven principles might be understood to be more fundamental than others…(More)”.
No Ground Truth? No Problem: Improving Administrative Data Linking Using Active Learning and a Little Bit of Guile
Paper by Sarah Tahamont et al.: “While linking records across large administrative datasets (“big data”) has the potential to revolutionize empirical social science research, many administrative data files do not have common identifiers and are thus not designed to be linked to others. To address this problem, researchers have developed probabilistic record linkage algorithms which use statistical patterns in identifying characteristics to perform linking tasks. Naturally, the accuracy of a candidate linking algorithm can be substantially improved when an algorithm has access to “ground-truth” examples — matches which can be validated using institutional knowledge or auxiliary data. Unfortunately, the cost of obtaining these examples is typically high, often requiring a researcher to manually review pairs of records in order to make an informed judgement about whether they are a match. When a pool of ground-truth information is unavailable, researchers can use “active learning” algorithms for linking, which ask the user to provide ground-truth information for select candidate pairs. In this paper, we investigate the value of providing ground-truth examples via active learning for linking performance. We confirm popular intuition that data linking can be dramatically improved with the availability of ground-truth examples. But critically, in many real-world applications, only a relatively small number of tactically selected ground-truth examples are needed to obtain most of the achievable gains. With a modest investment in ground truth, researchers can approximate the performance of a supervised learning algorithm that has access to a large database of ground-truth examples using a readily available off-the-shelf tool…(More)”.
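As a rough illustration of how an active-learning linker can choose which candidate pairs to send for manual review, the sketch below uses simple uncertainty sampling over pairwise similarity features. The feature setup, model choice and function names are assumptions for illustration only, not the authors' specific algorithm or the off-the-shelf tool they reference.

```python
# A minimal sketch of uncertainty-sampling active learning for record linkage,
# assuming candidate pairs have already been blocked and converted into numeric
# similarity features (e.g. name, date-of-birth, address similarities).
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_pairs_for_review(features: np.ndarray,
                            labeled_idx: list[int],
                            labels: list[int],
                            budget: int = 10) -> np.ndarray:
    """Fit a linker on the labeled pairs, then return the unlabeled pairs whose
    predicted match probability is closest to 0.5 (most informative to label)."""
    clf = LogisticRegression(max_iter=1000).fit(features[labeled_idx], labels)
    unlabeled = np.setdiff1d(np.arange(len(features)), labeled_idx)
    p_match = clf.predict_proba(features[unlabeled])[:, 1]
    uncertainty = np.abs(p_match - 0.5)
    return unlabeled[np.argsort(uncertainty)[:budget]]

# Usage: the researcher hand-labels the returned pairs, appends them to
# labeled_idx/labels, and repeats until linking quality stops improving.
```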
Valuing the U.S. Data Economy Using Machine Learning and Online Job Postings
Paper by J Bayoán Santiago Calderón and Dylan Rassier: “With the recent proliferation of data collection and uses in the digital economy, the understanding and statistical treatment of data stocks and flows is of interest among compilers and users of national economic accounts. In this paper, we measure the value of own-account data stocks and flows for the U.S. business sector by summing the production costs of data-related activities implicit in occupations. Our method augments the traditional sum-of-costs methodology for measuring other own-account intellectual property products in national economic accounts by proxying occupation-level time-use factors using a machine learning model and the text of online job advertisements (Blackburn 2021). In our experimental estimates, we find that annual current-dollar investment in own-account data assets for the U.S. business sector grew from $84 billion in 2002 to $186 billion in 2021, with an average annual growth rate of 4.2 percent. Cumulative current-dollar investment for the period 2002–2021 was $2.6 trillion. In addition to the annual current-dollar investment, we present historical-cost net stocks, real growth rates, and effects on value-added by the industrial sector…(More)”.
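A hedged sketch of the sum-of-costs logic described above: investment is approximated as occupation-level employment times compensation times the share of work time spent on data-related tasks, the share the paper proxies with a machine learning model applied to online job-posting text. All occupations, figures and the cost markup below are placeholders, not the paper's estimates.

```python
# Placeholder illustration of a sum-of-costs estimate of own-account data investment.
occupations = [
    # (name, employment, mean annual compensation ($), assumed data-task time share)
    ("Database administrators",  100_000,  95_000, 0.60),
    ("Data scientists",          150_000, 110_000, 0.55),
    ("Market research analysts", 700_000,  75_000, 0.20),
]

MARKUP = 1.5  # placeholder adjustment for non-labor costs (overhead, capital services)

investment = sum(emp * comp * share for _, emp, comp, share in occupations) * MARKUP
print(f"Illustrative own-account data investment: ${investment / 1e9:.1f} billion")
```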
Using the future wheel methodology to assess the impact of open science in the transport sector
Paper by Anja Fleten Nielsen et al.: “Open Science enhances information sharing and makes scientific results of transport research more transparent and accessible at all levels and to everyone, allowing integrity and reproducibility. However, what future impacts will Open Science have on societal, environmental and economic development within the transport sector? Using the Future Wheel methodology, we conducted a workshop with transport experts from both industry and academia to answer this question. The main findings of this study point in the direction of previous studies in other fields, in terms of increased innovation, increased efficiency, economic savings, more equality, and increased participation of citizens. In addition, we found several potential transport-specific impacts: lower emissions, faster travel times, improved traffic safety, increased awareness of transport policies, and artificial intelligence improving mobility services. Several potential negative outcomes of Open Science were also identified by the expert group: job loss, new types of risks, increased costs, increased conflicts, time delays, increased inequality and increased energy consumption. If we know the negative outcomes, it is much easier to put in place strategies that are sustainable for a broader stakeholder group, which also increases the probability of taking advantage of all the positive impacts of Open Science…(More)”
Data in design: How big data and thick data inform design thinking projects
Paper by Marzia Mortati, Stefano Magistretti, Cabirio Cautela, and Claudio Dell’Era: “Scholars and practitioners have recognized that making innovation happen today requires renewed approaches focused on agility, dynamicity, and other organizational capabilities that enable firms to cope with uncertainty and complexity. In turn, the literature has shown that design thinking is a useful methodology to cope with ill-defined and wicked problems. In this study, we address the little-known role of different types of data in innovation projects characterized by ill-defined problems requiring creativity to be solved. Rooted in qualitative observation (thick data) and quantitative analyses (big data), we investigate the role of data in eight design thinking projects dealing with ill-defined and wicked problems. Our findings highlight the practical and theoretical implications of eight practices that make different use of big and thick data, informing academics and practitioners on how different types of data are utilized in design thinking projects and the related principles and practices…(More)”.