The Deletion Remedy


Paper by Daniel Wilf-Townsend: “A new remedy has emerged in the world of technology governance. Where someone has wrongfully obtained or used data, this remedy requires them to delete not only that data, but also to delete tools such as machine learning models that they have created using the data. Model deletion, also called algorithmic disgorgement or algorithmic destruction, has been increasingly sought in both private litigation and public enforcement actions. As its proponents note, model deletion can improve the regulation of privacy, intellectual property, and artificial intelligence by providing more effective deterrence and better management of ongoing harms.

But, this article argues, model deletion has a serious flaw: in its current form, it risks being a grossly disproportionate penalty. Model deletion requires the destruction of models whose training included illicit data in any degree, with no consideration of how much (or even whether) that data contributed to any wrongful gains or ongoing harms. Model deletion could thereby cause unjust losses in litigation and chill useful technologies.

This article works toward a well-balanced doctrine of model deletion by building on the remedy’s equitable origins. It identifies how traditional considerations in equity—such as a defendant’s knowledge and culpability, the balance of the hardships, and the availability of more tailored alternatives—can be applied in model deletion cases to mitigate problems of disproportionality. By accounting for proportionality, courts and agencies can develop a doctrine of model deletion that takes advantage of its benefits while limiting its potential excesses…(More)”.

China: Autocracy 2.0


Paper by David Y. Yang: “Autocracy 2.0, exemplified by modern China, is economically robust, technologically advanced, globally engaged, and controlled through subtle and sophisticated methods. What defines China’s political economy, and what drives Autocracy 2.0? What is its future direction? I start by discussing two key challenges autocracies face: incentives and information. I then describe Autocracy 1.0’s reliance on fear and repression to address these issues. It makes no credible promises, using coercion for compliance, resulting in a low-information environment. Next, I introduce Autocracy 2.0, highlighting its significant shift in handling commitment and information challenges. China uses economic incentives to align interests with regime survival, fostering support. It employs advanced bureaucratic structures and technology to manage incentives and information, enabling success in a high-information environment. Finally, I explore Autocracy 3.0’s potential. In China, forces might revert to Autocracy 1.0, using technology for state control as growth slows but aspirations stay high. Globally, modern autocracies, led by China, are becoming major geopolitical forces, challenging the liberal democratic order…(More)”.

Harnessing digital footprint data for population health: a discussion on collaboration, challenges and opportunities in the UK


Paper by Romana Burgess et al: “Digital footprint data are inspiring a new era in population health and well-being research. Linking these novel data with other datasets is critical for future research wishing to use these data for the public good. To succeed, collaboration among industry, academics and policy-makers is vital. Therefore, we discuss the benefits and obstacles for these stakeholder groups in using digital footprint data for research in the UK. We advocate for policy-makers’ inclusion in research efforts, stress the exceptional potential of digital footprint research to impact policy-making and explore the role of industry as data providers, with a focus on shared value, commercial sensitivity, resource requirements and streamlined processes. We underscore the importance of multidisciplinary approaches, consumer trust and ethical considerations in navigating methodological challenges and further call for increased public engagement to enhance societal acceptability. Finally, we discuss how to overcome methodological challenges, such as reproducibility and sharing of learnings, in future collaborations. By adopting a multiperspective approach to outlining the challenges of working with digital footprint data, our contribution helps to ensure that future research can navigate these challenges effectively while remaining reproducible, ethical and impactful…(More)”

What roles can democracy labs play in co-creating democratic innovations for sustainability?


Article by Inês Campos et al: “This perspective essay proposes Democracy Labs as new processes for developing democratic innovations that help tackle complex socio-ecological challenges within an increasingly unequal and polarised society, against the backdrop of democratic backsliding. Next to the current socio-ecological crisis, rapid technological innovations present both opportunities and challenges for democracy and call for democratic innovations. These innovations (e.g., mini-publics, collaborative governance and e-participation) offer alternative mechanisms for democratic participation and new forms of active citizenship, as well as new feedback mechanisms between citizens and traditional institutions of representative democracy. This essay thus introduces Democracy Labs as citizen-centred processes for co-creating democratic innovations to inspire future transdisciplinary research and practice for a more inclusive and sustainable democracy. The approach is illustrated with examples from a Democracy Lab in Lisbon, reflecting on requirements for recruiting participants, the relevance of combining sensitising, reflection and ideation stages, and the importance of careful communication and facilitation processes guiding participants through co-creation activities…(More)”

Synthetic Data and Social Science Research


Paper by Jordan C. Stanley & Evan S. Totty: “Synthetic microdata – data retaining the structure of original microdata while replacing original values with modeled values for the sake of privacy – presents an opportunity to increase access to useful microdata for data users while meeting the privacy and confidentiality requirements for data providers. Synthetic data could be sufficient for many purposes, but lingering accuracy concerns could be addressed with a validation system through which the data providers run the external researcher’s code on the internal data and share cleared output with the researcher. The U.S. Census Bureau has experience running such systems. In this chapter, we first describe the role of synthetic data within a tiered data access system and the importance of synthetic data accuracy in achieving a viable synthetic data product. Next, we review results from a recent set of empirical analyses we conducted to assess accuracy in the Survey of Income & Program Participation (SIPP) Synthetic Beta (SSB), a Census Bureau product that made linked survey-administrative data publicly available. Given this analysis and our experience working on the SSB project, we conclude with thoughts and questions regarding future implementations of synthetic data with validation…(More)”
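The tiered-access workflow the authors describe can be illustrated with a toy sketch. This is purely illustrative, not the Census Bureau's actual methodology: the synthesizer, the normal-model choice, and all function names are assumptions. A synthesizer replaces original values with draws from a model fitted to them, and a validation step mimics the server that reruns a researcher's analysis on internal data to check the synthetic result's accuracy.

```python
import random
import statistics

def synthesize(values, rng=random.Random(42)):
    """Toy synthesizer: replace original values with draws from a
    normal distribution fitted to them, preserving gross structure
    (mean, spread) while breaking the link to any individual record."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [rng.gauss(mu, sigma) for _ in values]

def validate(analysis, internal, synthetic, tolerance):
    """Mimic a validation server: run the researcher's analysis on
    both the confidential internal data and the released synthetic
    data, and report whether the results agree within tolerance."""
    gold = analysis(internal)
    approx = analysis(synthetic)
    return abs(gold - approx) <= tolerance
```

In a real system, of course, the validation step runs inside the data provider's enclave and only cleared output is returned; the sketch only shows why accuracy checks of this kind are feasible once the researcher's code is portable across the two datasets.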

Artificial Intelligence as a Catalyzer for Open Government Data Ecosystems: A Typological Theory Approach


Paper by Anthony Simonofski et al: “Artificial Intelligence (AI) within digital government has witnessed growing interest as it can improve governance processes and stimulate citizen engagement. Despite the rise of Generative AI, discussions on AI fusion with Open Government Data (OGD) remain limited to specific implementations and scattered across disciplines. Drawing from the synthesis of the literature through a systematic review, this study examines and structures how AI can enrich OGD initiatives. Employing a typological approach, ideal profiles of AI application within the OGD lifecycle are formalized, capturing varied roles across the portal and ecosystems perspectives. The resulting conceptual framework identifies eight ideal types of AI applications for OGD: AI as Portal Curator, Explorer, Linker, and Monitor, and AI as Ecosystem Data Retriever, Connecter, Value Developer and Engager. This theoretical foundation shows that some types remain under-investigated and will inform policymakers, practitioners, and researchers in leveraging AI to cultivate OGD ecosystems…(More)”.

Second-Order Agency


Paper by Cass Sunstein: “Many people prize agency; they want to make their own choices. Many people also prize second-order agency, by which they decide whether and when to exercise first-order agency. First-order agency can be an extraordinary benefit or an immense burden. When it is an extraordinary benefit, people might reject any kind of interference, or might welcome a nudge, or might seek some kind of boost, designed to increase their capacities. When first-order agency is an immense burden, people might also welcome a nudge or might make some kind of delegation (say, to an employer, a doctor, an algorithm, or a regulator). These points suggest that the line between active choosing and paternalism can be illusory. When private or public institutions override people’s desire not to exercise first-order agency, and thus reject people’s exercise of second-order agency, they are behaving paternalistically, through a form of choice-requiring paternalism. Choice-requiring paternalism may compromise second-order agency. It might not be very nice to do that…(More)”.

Data Privacy for Record Linkage and Beyond


Paper by Shurong Lin & Eric Kolaczyk: “In a data-driven world, record linkage and data privacy stand out as two prominent research problems. Record linkage is essential for improving decision-making by integrating information of the same entities from different sources. On the other hand, data privacy research seeks to balance the need to extract accurate insights from data with the imperative to protect the privacy of the entities involved. Inevitably, data privacy issues arise in the context of record linkage. This article identifies two complementary aspects at the intersection of these two fields: (1) how to ensure privacy during record linkage and (2) how to mitigate privacy risks when releasing the analysis results after record linkage. We specifically discuss privacy-preserving record linkage, differentially private regression, and related topics…(More)”.
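One building block behind differentially private releases of the kind the authors discuss is the Laplace mechanism. A minimal sketch follows (illustrative only, not the authors' implementation), assuming a numeric attribute with known bounds:

```python
import math
import random

def dp_mean(values, lower, upper, epsilon, rng=random.Random(0)):
    """Release a mean with epsilon-differential privacy via the
    Laplace mechanism. Clamping each value to [lower, upper] bounds
    any single record's influence on the mean by (upper - lower) / n,
    the sensitivity used to calibrate the noise."""
    n = len(values)
    clamped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) noise by inverting its CDF.
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return sum(clamped) / n + noise
```

Smaller epsilon means stronger privacy and noisier output; privacy-preserving record linkage faces an analogous trade-off between linkage accuracy and disclosure risk.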

Hopes over fears: Can democratic deliberation increase positive emotions concerning the future?


Paper by Mikko Leino and Katariina Kulha: “Deliberative mini-publics have often been considered to be a potential way to promote future-oriented thinking. Still, thinking about the future can be hard as it can evoke negative emotions such as stress and anxiety. This article establishes why a more positive outlook towards the future can benefit long-term decision-making. Then, it explores whether and to what extent deliberative mini-publics can facilitate thinking about the future by moderating negative emotions and encouraging positive emotions. We analyzed an online mini-public held in the region of Satakunta, Finland, organized to involve the public in the drafting process of a regional plan extending until the year 2050. In addition to the standard practices related to mini-publics, the Citizens’ Assembly included an imaginary time travel exercise, Future Design, carried out with half of the participants. Our analysis makes use of both survey and qualitative data. We found that democratic deliberation can promote positive emotions, like hopefulness and compassion, and lessen negative emotions, such as fear and confusion, related to the future. There were, however, differences in how emotions developed in the various small groups. Interviews with participants shed further light on how participants felt during the event and how their sentiments concerning the future changed…(More)”

Utilizing big data without domain knowledge impacts public health decision-making


Paper by Miao Zhang, Salman Rahman, Vishwali Mhasawade and Rumi Chunara: “…New data sources and AI methods for extracting information are increasingly abundant and relevant to decision-making across societal applications. A notable example is street view imagery, available in over 100 countries, and purported to inform built environment interventions (e.g., adding sidewalks) for community health outcomes. However, biases can arise when decision-making does not account for data robustness or relies on spurious correlations. To investigate this risk, we analyzed 2.02 million Google Street View (GSV) images alongside health, demographic, and socioeconomic data from New York City. Findings demonstrate robustness challenges; built environment characteristics inferred from GSV labels at the intracity level often do not align with ground truth. Moreover, as individual-level physical inactivity significantly mediates the impact of built environment features by census tract, the effect of intervening on features measured by GSV would be misestimated without proper model specification and consideration of this mediation mechanism. Using a causal framework accounting for these mediators, we determined that intervening by improving 10% of samples in the two lowest tertiles of physical inactivity would lead to a 4.17 (95% CI 3.84–4.55) or 17.2 (95% CI 14.4–21.3) times greater decrease in the prevalence of obesity or diabetes, respectively, compared to the same proportional intervention on the number of crosswalks by census tract. This study highlights critical issues of robustness and model specification in using emergent data sources, showing the data may not measure what is intended, and ignoring mediators can result in biased intervention effect estimates…(More)”
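The mediation logic can be made concrete with a stylized linear structural model; the coefficients below are invented for illustration, and the paper's causal framework is more general. The effect of a built-environment feature on a health outcome splits into a direct path and a path routed through physical inactivity, and a model that omits the mediator attributes the whole total effect to the feature.

```python
# Stylized linear mediation model (all coefficients are illustrative):
#   inactivity = a * crosswalks + noise           (mediator equation)
#   obesity    = b * crosswalks + c * inactivity + noise
a = -0.4   # more crosswalks -> less physical inactivity
b = -0.1   # direct effect of crosswalks on obesity
c = 0.5    # effect of inactivity on obesity

direct_effect = b                # crosswalks -> obesity, mediator held fixed
mediated_effect = a * c          # crosswalks -> inactivity -> obesity
total_effect = direct_effect + mediated_effect   # b + a*c = -0.3

# A regression of obesity on crosswalks alone recovers total_effect,
# so a planner who intervenes on crosswalks expecting the mediated
# pathway to operate unchanged would overstate the direct effect 3x.
```

The product-of-coefficients decomposition above is the simplest case of the mediation analysis the abstract refers to; with nonlinearities or confounding, the decomposition requires the fuller causal machinery the authors use.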