From the smart city to urban justice in a digital age


Paper by Marit Rosol & Gwendolyn Blue: “The smart city is the most emblematic contemporary expression of the fusion of urbanism and digital technologies. Critical urban scholars are now increasingly likely to highlight the injustices that are created and exacerbated by emerging smart city initiatives and to diagnose the way that these projects remake urban space and urban policy in unjust ways. Despite this, there has not yet been a comprehensive and systematic analysis of the concept of justice in the smart city literature. To fill this gap and strengthen the smart city critique, we draw on the tripartite approach to justice developed by philosopher Nancy Fraser, which is focused on redistribution, recognition, and representation. We use this framework to outline key themes and identify gaps in existing critiques of the smart city, and to emphasize the importance of transformational approaches to justice that take shifts in governance seriously. In reformulating and expanding the existing critiques of the smart city, we argue for shifting the discussion away from the smart city as such. Rather than searching for an alternative smart city, we argue that critical scholars should focus on broader questions of urban justice in a digital age…(More)”.

Forecasting hospital-level COVID-19 admissions using real-time mobility data


Paper by Brennan Klein et al: “For each of the COVID-19 pandemic waves, hospitals have had to plan for deploying surge capacity and resources to manage large but transient increases in COVID-19 admissions. While a lot of effort has gone into predicting regional trends in COVID-19 cases and hospitalizations, there are far fewer successful tools for creating accurate hospital-level forecasts. At the same time, anonymized mobility data collected from mobile phones was shown to correlate well with the number of cases during the first two waves of the pandemic (spring 2020 and fall-winter 2021). In this work, we show how mobility data could bolster hospital-specific COVID-19 admission forecasts for five hospitals in Massachusetts during the initial COVID-19 surge. The high predictive capability of the model was achieved by combining anonymized, aggregated mobile device data about users’ contact patterns, commuting volume, and mobility range with COVID hospitalizations and test-positivity data. We conclude that mobility-informed forecasting models can increase the lead time of accurate predictions for individual hospitals, giving managers valuable time to strategize how best to allocate resources to manage forthcoming surges…(More)”.
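The modeling idea described in the abstract, regressing future admissions on today's mobility signals, can be sketched in a few lines. Everything below is synthetic and illustrative: the feature names, the 14-day horizon, and the simulated data are assumptions made for the sketch, not the authors' actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
days, lead = 120, 14  # 14-day forecast horizon (hypothetical)

# Hypothetical daily signals for one hospital; the variable names mirror
# the feature types named in the abstract, not the authors' data.
contacts = rng.uniform(0.5, 1.5, days)      # contact-pattern index
commuting = rng.uniform(0.4, 1.2, days)     # commuting-volume index
positivity = rng.uniform(0.01, 0.20, days)  # test-positivity rate

# Simulate admissions that respond to the signals with a ~2-week lag.
signal = 5 + 30 * positivity + 8 * contacts
admissions = np.concatenate([np.full(lead, 12.0), signal[:-lead]])
admissions += rng.normal(0, 1, days)

# Align features at day t with admissions at day t + lead, then fit an
# ordinary least-squares model as a stand-in for the paper's forecast.
X = np.column_stack([contacts, commuting, positivity, np.ones(days)])[:-lead]
y = admissions[lead:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Forecast admissions `lead` days ahead from the most recent signals.
latest = np.array([contacts[-1], commuting[-1], positivity[-1], 1.0])
forecast = float(latest @ coef)
print(round(forecast, 1))
```

The lag alignment is the point: because features at day t predict admissions at day t + 14, the fitted model yields exactly the extra lead time the abstract highlights for hospital managers.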

Aligning Artificial Intelligence with Humans through Public Policy


Paper by John Nay and James Daily: “Given that Artificial Intelligence (AI) increasingly permeates our lives, it is critical that we systematically align AI objectives with the goals and values of humans. The human-AI alignment problem stems from the impracticality of explicitly specifying the rewards that AI models should receive for all the actions they could take in all relevant states of the world. One possible solution, then, is to leverage the capabilities of AI models to learn those rewards implicitly from a rich source of data describing human values in a wide range of contexts. The democratic policy-making process produces just such data by developing specific rules, flexible standards, interpretable guidelines, and generalizable precedents that synthesize citizens’ preferences over potential actions taken in many states of the world. Therefore, computationally encoding public policies to make them legible to AI systems should be an important part of a socio-technical approach to the broader human-AI alignment puzzle. Legal scholars are exploring AI, but most research has focused on how AI systems fit within existing law, rather than how AI may understand the law. This Essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks. As a demonstration of the ability of AI to comprehend policy, we provide a case study of an AI system that predicts the relevance of proposed legislation to a given publicly traded company and its likely effect on that company. We believe this represents the “comprehension” phase of AI and policy, but leveraging policy as a key source of human values to align AI requires “understanding” policy. We outline what we believe will be required to move toward that, and two example research projects in that direction. Solving the alignment problem is crucial to ensuring that AI is beneficial both individually (to the person or group deploying the AI) and socially.
As AI systems are given increasing responsibility in high-stakes contexts, integrating democratically-determined policy into those systems could align their behavior with human goals in a way that is responsive to a constantly evolving society…(More)”.
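The “comprehension” case study, scoring how relevant a proposed bill is to a company, can be illustrated with a toy text-similarity baseline. This is a hypothetical stand-in, not the authors' system: the `cosine_sim` function and the example texts are invented, and a real system would use far richer representations than bag-of-words.

```python
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)  # Counter returns 0 for absent words
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Invented example texts: a bill scored against two company descriptions.
bill = "a bill to regulate carbon emissions from electricity generation plants"
company = "utility company operating coal and gas electricity generation plants"
unrelated = "annual report of a retail clothing brand"

# The utility shares vocabulary with the bill; the retailer does not.
print(cosine_sim(bill, company) > cosine_sim(bill, unrelated))
```

Even this crude lexical overlap ranks the affected company above the unrelated one, which is the shape of the prediction task the Essay describes; the hard part the authors point to is moving from such comprehension to genuine understanding of policy.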

The Digital Transformation of Law: Are We Prepared for Artificially Intelligent Legal Practice?


Paper by Larry Bridgesmith and Adel Elmessiry: “We live in an instant access and on-demand world of information sharing. The global pandemic of 2020 accelerated the necessity of remote working and team collaboration. Work teams are exploring and utilizing the remote work platforms required to serve in place of stand-ups common in the agile workplace. Online tools are needed to provide visibility to the status of projects and the accountability necessary to ensure that tasks are completed on time and on budget. Digital transformation of organizational data is now the target of AI projects to provide enterprise transparency and predictive insights into the process of work.

This paper develops the relationship between AI, law, and the digital transformation sweeping every industry sector. There is legitimate concern about the degree to which many nascent issues involving emerging technology threaten human rights and well-being. However, lawyers will play a critical role in both the prosecution and defense of these rights. Equally, if not more so, lawyers will also be a vibrant source of insight and guidance for the development of “ethical” AI in a proactive—not simply reactive—way….(More)”.

Algorithmic monoculture and social welfare


Paper by Jon Kleinberg and Manish Raghavan: “As algorithms are increasingly applied to screen applicants for high-stakes decisions in employment, lending, and other domains, concerns have been raised about the effects of algorithmic monoculture, in which many decision-makers all rely on the same algorithm. This concern invokes analogies to agriculture, where a monocultural system runs the risk of severe harm from unexpected shocks. Here, we show that the dangers of algorithmic monoculture run much deeper, in that monocultural convergence on a single algorithm by a group of decision-making agents, even when the algorithm is more accurate for any one agent in isolation, can reduce the overall quality of the decisions being made by the full collection of agents. Unexpected shocks are therefore not needed to expose the risks of monoculture; it can hurt accuracy even under “normal” operations and even for algorithms that are more accurate when used by only a single decision-maker. Our results rely on minimal assumptions and involve the development of a probabilistic framework for analyzing systems that use multiple noisy estimates of a set of alternatives…(More)”.
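The core finding, that shared reliance on one noisy ranking can lower collective decision quality even without shocks, can be reproduced in a small simulation. This is a minimal sketch in the spirit of the paper's probabilistic framework, not its actual model: two firms hire sequentially from the same candidate pool, either sharing one noisy score (monoculture) or each drawing its own, equally noisy score.

```python
import numpy as np

rng = np.random.default_rng(42)
trials, n, noise_sd = 50000, 10, 1.0

quality = rng.normal(0.0, 1.0, (trials, n))  # true candidate quality
rows = np.arange(trials)

# Monoculture: both firms rank with the SAME noisy score; firm 1 takes
# the top-scored candidate, firm 2 takes the runner-up.
shared = quality + rng.normal(0.0, noise_sd, (trials, n))
top2 = np.argsort(shared, axis=1)[:, -2:]
mono = quality[rows, top2[:, 1]] + quality[rows, top2[:, 0]]

# Independent algorithms: each firm draws its own, equally noisy score;
# firm 2 chooses among the candidates firm 1 did not hire.
s1 = quality + rng.normal(0.0, noise_sd, (trials, n))
s2 = quality + rng.normal(0.0, noise_sd, (trials, n))
pick1 = np.argmax(s1, axis=1)
s2[rows, pick1] = -np.inf
pick2 = np.argmax(s2, axis=1)
indep = quality[rows, pick1] + quality[rows, pick2]

print(round(mono.mean(), 3), round(indep.mean(), 3))
```

With correlated errors, a single misranking misleads both firms at once; independent errors give the second firm a chance to correct the first's mistake, so the total true quality of hires is higher without monoculture, no external shock required.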

Impediment of Infodemic on Disaster Policy Efficacy: Insights from Location Big Data


Paper by Xiaobin Shen, Natasha Zhang Foutz, and Beibei Li: “Infodemics impede the efficacy of business and public policies, particularly in disastrous times when high-quality information is in the greatest demand. This research proposes a multi-faceted conceptual framework to characterize an infodemic and then empirically assesses its impact on the core mitigation policy of the latest prominent disaster, the COVID-19 pandemic. Analyzing half a million records of COVID-related news media and social media, as well as 0.2 billion records of location data, via a multitude of methodologies, including text mining and spatio-temporal analytics, we uncover a number of interesting findings. First, the volume of COVID information exerts an inverted-U-shaped impact on individuals’ compliance with the lockdown policy. That is, a smaller volume encourages policy compliance, whereas an overwhelming volume discourages compliance, revealing negative ramifications of excessive information about a disaster. Second, novel information boosts policy compliance, signifying the value of offering original and distinctive, instead of redundant, information to the public during a disaster. Third, misinformation exhibits a U-shaped influence unexplored by the literature, deterring policy compliance until a larger amount surfaces, diminishing informational value and escalating public uncertainty. Overall, these findings demonstrate the power of information technology, such as media analytics and location sensing, in disaster management. They also illuminate the significance of strategic information management during disasters and the imperative need for cohesive efforts across governments, media, technology platforms, and the general public to curb future infodemics…(More)”.
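The inverted-U finding can be illustrated by the standard way such shapes are detected: fitting a quadratic term and checking its sign. The data below is synthetic, constructed only so that compliance peaks at a moderate information volume; it is not the paper's data or estimation strategy.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic illustration: compliance rises with moderate information
# volume, then falls when the volume becomes overwhelming.
volume = rng.uniform(0, 10, 500)
compliance = 0.2 + 0.12 * volume - 0.012 * volume**2 \
    + rng.normal(0, 0.03, 500)

# An inverted-U is typically diagnosed by a negative quadratic
# coefficient; the fitted peak marks where more information starts
# to discourage compliance.
b2, b1, b0 = np.polyfit(volume, compliance, 2)
peak = -b1 / (2 * b2)  # volume at which compliance is maximized
print(round(b2, 3), round(peak, 2))
```

The same diagnostic, run with the signs flipped, is how a U-shaped misinformation effect like the paper's third finding would show up: a positive quadratic coefficient with a fitted trough rather than a peak.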

Use of science in public policy: Lessons from the COVID-19 pandemic efforts to ‘Follow the Science’


Paper by Barry Bozeman: “The paper asks: ‘What can we learn from COVID-19 pandemic about effective use of scientific and technical information (STI) in policymaking and how might the lessons be put to use?’ The paper employs the political rhetoric of ‘follow the science’ as a lens for examining contemporary concerns in the use of STI, including (1) ‘Breadth of Science Products’, the necessity of a broader concept of STI that includes by-products science, (2) ‘Science Dynamism’, emphasizing the uncertainty and impeachability of science, (3) ‘STI Urgency’ suggesting that STI use during widespread calamities differs from more routine applications, and (4) ‘Hyper-politicization of Science’, arguing that a step-change in the contentiousness of politics affects uses and misuses of STI. The paper concludes with a discussion, STI Curation, as a possible ingredient to improving effective use. With more attention to credibility and trust of STI and to the institutional legitimacy of curators, it should prove possible to improve the effective use of STI in public policy….(More)”.

Citizen power mobilized to fight against mosquito-borne diseases


GigaBlog: “Just out in GigaByte is the latest data release from Mosquito Alert, a citizen science system for investigating and managing disease-carrying mosquitoes, published as part of our WHO-sponsored series on vector-borne human diseases. The release presents 13,700 new database records in the Global Biodiversity Information Facility (GBIF) repository, all linked to photographs submitted by citizen volunteers and validated by entomological experts to determine whether they provide evidence of the presence of any of the mosquito vectors of top concern in Europe. This is the latest paper in a new special issue presenting biodiversity data for research on human disease and health, incentivising data sharing to fill particularly important species and geographic gaps. As big fans of citizen science (and Mosquito Alert), it’s great to see this new data showcased in the series.

Vector-borne diseases account for more than 17% of all infectious diseases in humans. There are large gaps in knowledge related to these vectors, and data mobilization campaigns are required to improve data coverage to help research on vector-borne diseases and human health. As part of these efforts, GigaScience Press has partnered with GBIF, supported by TDR, the Special Programme for Research and Training in Tropical Diseases hosted at the World Health Organization, to launch this “Vectors of human disease” thematic series. To incentivise the sharing of this extremely important data, Article Processing Charges have been waived to assist with the global call for novel data. This effort has already led to the release of newly digitised location data for over 600,000 vector specimens observed across the Americas and Europe.

Beyond crediting such a large number of volunteers, creating this large public collection of validated mosquito images means the dataset can be used to train machine-learning models for vector detection and classification. Sharing the data in this novel manner meant the authors of these papers had to set up a new credit system to evaluate contributions from multiple and diverse collaborators, including university researchers, entomologists, and non-academics such as independent researchers and citizen scientists. In the GigaByte paper these are acknowledged through collaborative authorship for the Mosquito Alert Digital Entomology Network and the Mosquito Alert Community…(More)”.

Addressing the socioeconomic divide in computational modeling for infectious diseases


Paper by Michele Tizzoni et al: “The COVID-19 pandemic has highlighted how structural social inequities fundamentally shape disease dynamics, yet these concepts are often at the margins of the computational modeling community. Building on recent research studies in the area of digital and computational epidemiology, we provide a set of practical and methodological recommendations to address socioeconomic vulnerabilities in epidemic models…(More)”.

Operationalising AI governance through ethics-based auditing: an industry case study


Paper by Jakob Mökander & Luciano Floridi: “Ethics-based auditing (EBA) is a structured process whereby an entity’s past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may help to bridge the gap between principles and practice in AI ethics. However, important aspects of EBA—such as the feasibility and effectiveness of different auditing procedures—have yet to be substantiated by empirical research. In this article, we address this knowledge gap by providing insights from a longitudinal industry case study. Over 12 months, we observed and analysed the internal activities of AstraZeneca, a biopharmaceutical company, as it prepared for and underwent an ethics-based AI audit. While previous literature concerning EBA has focussed on proposing or analysing evaluation metrics or visualisation techniques, our findings suggest that the main difficulties large multinational organisations face when conducting EBA mirror classical governance challenges. These include ensuring harmonised standards across decentralised organisations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes. The case study presented in this article contributes to the existing literature by providing a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective…(More)”.