Online consent: how much do we need to know?


Paper by Bartlomiej Chomanski & Lode Lauwaert: “When you visit a website and click a button that says, ‘I agree to these terms’—do you really agree? Many scholars who consider this question (Solove 2013; Barocas & Nissenbaum 2014; Hull 2015; Pascalev 2017; Yeung 2017; Becker 2019; Zuboff 2019; Andreotta et al. 2022; Wolmarans & Vorhoeve 2022) would tend to answer ‘no’—or, at the very least, they would deem your agreement normatively deficient. That conclusion is driven in large part by the claim that most people who click ‘I agree’ on websites and online platforms do not really know what they are agreeing to. Their lack of knowledge about the privacy policy and other terms of the online agreements thus makes their consent problematic in morally salient ways.

We argue that this prevailing view is wrong. Uninformed consent to online terms and conditions (what we will call, for short, ‘online consent’) is less ethically problematic than many scholars suppose. Indeed, we argue that uninformed online consent preceded by the legitimate exercise of the right not to know (RNTK, to be explained below) is prima facie valid and does not appear normatively deficient in other ways, despite being uninformed.

The paper proceeds as follows. In Sect. 2, we make more precise the concept of online consent and summarize the case against it, as presented in the literature. In Sect. 3 we explain the arguments for the RNTK in bioethics and show that analogous reasoning leads to endorsing the RNTK in online contexts. In Sect. 4, we demonstrate that the appeal to the RNTK helps defuse the critics’ arguments against online consent. Section 5 concludes: online consent is valid (with caveats, to be explored in what follows)…(More)”

Data for Better Governance: Building Government Analytics Ecosystems in Latin America and the Caribbean


Report by the World Bank: “Governments in Latin America and the Caribbean face significant development challenges, including insufficient economic growth, inflation, and institutional weaknesses. Overcoming these issues requires identifying systemic obstacles through data-driven diagnostics and equipping public officials with the skills to implement effective solutions.

Although public administrations in the region often have access to valuable data, they frequently fall short in analyzing it to inform decisions. The cost of this shortfall is substantial: inefficiencies in procurement, misdirected transfers, and poorly managed human resources waste an estimated 4% of GDP, equivalent to 17% of all public spending.

The report “Data for Better Governance: Building Government Analytical Ecosystems in Latin America and the Caribbean” outlines a roadmap for developing government analytics, focusing on key enablers such as data infrastructure and analytical capacity, and offers actionable strategies for improvement…(More)”.

An Open Source Python Library for Anonymizing Sensitive Data


Paper by Judith Sáinz-Pardo Díaz & Álvaro López García: “Open science is a fundamental pillar to promote scientific progress and collaboration, based on the principles of open data, open source and open access. However, the requirements for publishing and sharing open data are in many cases difficult to meet in compliance with strict data protection regulations. Consequently, researchers need to rely on proven methods that allow them to anonymize their data without sharing it with third parties. To this end, this paper presents the implementation of a Python library for the anonymization of sensitive tabular data. This framework provides users with a wide range of anonymization methods that can be applied on the given dataset, including the set of identifiers, quasi-identifiers, generalization hierarchies and allowed level of suppression, along with the sensitive attribute and the level of anonymity required. The library has been implemented following best practices for integration and continuous development, as well as the use of workflows to test code coverage based on unit and functional tests…(More)”.
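The library defines its own interfaces, but the core idea it implements — generalizing quasi-identifiers up a hierarchy and suppressing any equivalence class smaller than k, so every record is indistinguishable from at least k−1 others — can be sketched in plain Python. The `generalize` hierarchy, the field layout, and the function names below are illustrative assumptions for exposition, not the paper's actual API:

```python
from collections import Counter

def generalize(row):
    # One step up an illustrative generalization hierarchy:
    # bucket age into decades, truncate ZIP code to its first three digits.
    age, zip_code, diagnosis = row
    decade = (age // 10) * 10
    return (f"{decade}-{decade + 9}", zip_code[:3] + "**", diagnosis)

def k_anonymize(rows, k):
    """Generalize quasi-identifiers, then suppress any equivalence class
    (group sharing the same generalized quasi-identifiers) of size < k."""
    generalized = [generalize(r) for r in rows]
    counts = Counter(g[:2] for g in generalized)  # quasi-identifier tuple only
    return [g for g in generalized if counts[g[:2]] >= k]

records = [
    (34, "50012", "flu"),
    (37, "50014", "asthma"),
    (36, "50013", "flu"),
    (62, "70211", "diabetes"),  # singleton equivalence class
]

anonymized = k_anonymize(records, k=2)
# The three records generalizing to ("30-39", "500**") survive;
# the lone 60-69 record is suppressed to reach 2-anonymity.
```

A production tool additionally iterates over hierarchy levels to find the least generalization that meets the required k while staying within the allowed suppression budget — the sketch above applies a single fixed level.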

Civic Engagement & Policymaking Toolkit


About: “This toolkit serves as a guide for science centers and museums and other science engagement organizations to thoughtfully identify and implement ways to nurture civic experiences like these across their work or deepen ongoing civic initiatives for meaningful change within their communities…

This toolkit outlines a Community Science Approach, Civic Engagement & Policymaking, where science and technology are factors in collective civic action and policy decisions to meet community goals. It includes:

  • Guidance for your team on how to get started with this work,
  • An overview of what Civic Engagement & Policymaking as a Community Science Approach can entail,
  • Descriptions of four roles your organization can play to authentically engage with communities on civic priorities,
  • Examples of real collaborations between science engagement organizations and their partners that advance community priorities,
  • Tools, guides, and other resources to help you prepare for new civic engagement efforts and/or expand or deepen existing civic engagement efforts…(More)”.

Informality in Policymaking


Book edited by Lindsey Garner-Knapp, Joanna Mason, Tamara Mulherin and E. Lianne Visser: “Public policy actors spend considerable time writing policy, advising politicians, eliciting stakeholder views on policy concerns, and implementing initiatives. Yet, they also ‘hang out’ chatting at coffee machines, discuss developments in the hallway walking from one meeting to another, or wander outside to carparks for a quick word and to avoid prying eyes. Rather than interrogating the rules and procedures which govern how policies are made, this volume asks readers to begin with the informal as a concept and extend this to what people do, how they relate to each other, and how this matters.

Emerging from a desire to enquire into the lived experience of policy professionals, and to conceptualise afresh the informal in the making of public policy, Informality in Policymaking explores how informality manifests in different contexts, spaces, places, and policy arenas, and the implications of this. Including nine empirical chapters, this volume presents studies from around the world and across policy domains spanning the rural and urban, and the local to the supranational. The chapters employ interdisciplinary approaches and integrate creative elements, such as drawings of hand gestures and fieldwork photographs, in conjunction with ethnographic ‘thick descriptions’.

In unveiling the realities of how policy is made, this deeply meaningful and thoughtfully constructed collection argues that the formal is only part of the story of policymaking, and thus only part of the solutions it seeks to create. Informality in Policymaking will be of interest to researchers and policymakers alike…(More)”.

Garden city: A synthetic dataset and sandbox environment for analysis of pre-processing algorithms for GPS human mobility data


Paper by Thomas H. Li and Francisco Barreras: “Human mobility datasets have seen increasing adoption in the past decade, enabling diverse applications that leverage the high precision of measured trajectories relative to other human mobility datasets. However, there are concerns about whether the high sparsity in some commercial datasets can introduce errors due to lack of robustness in processing algorithms, which could compromise the validity of downstream results. The scarcity of “ground-truth” data makes it particularly challenging to evaluate and calibrate these algorithms. To overcome these limitations and allow for an intermediate form of validation of common processing algorithms, we propose a synthetic trajectory simulator and sandbox environment meant to replicate the features of commercial datasets that could cause errors in such algorithms, and which can be used to compare algorithm outputs with “ground-truth” synthetic trajectories and mobility diaries. Our code is open-source and is publicly available alongside tutorial notebooks and sample datasets generated with it….(More)”
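The paper's own simulator is open source; independently of it, the underlying validation idea — generate a fully known “ground-truth” trace, then thin it out to mimic the sparsity of commercial pings, so that a pre-processing algorithm's output can be scored against the truth — can be illustrated with a toy model. Everything below, including the function names and the dwell/jump behavior, is an illustrative assumption rather than the paper's actual code:

```python
import random

def ground_truth_trajectory(n_points=200, stay_prob=0.9, seed=7):
    """Toy ground-truth trace: a walker that mostly dwells at its current
    location (a 'stay') and occasionally jumps to a new one."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    traj = []
    for t in range(n_points):
        if rng.random() > stay_prob:  # occasional move between stay points
            x += rng.uniform(-1.0, 1.0)
            y += rng.uniform(-1.0, 1.0)
        traj.append((t, x, y))
    return traj

def sparsify(traj, keep_prob=0.2, seed=7):
    """Simulate a sparse commercial feed by dropping pings at random,
    leaving the irregular gaps that trip up pre-processing algorithms."""
    rng = random.Random(seed)
    return [p for p in traj if rng.random() < keep_prob]

truth = ground_truth_trajectory()
observed = sparsify(truth)
coverage = len(observed) / len(truth)  # fraction of pings that survive
```

A stay-detection or trip-segmentation algorithm run on `observed` can then be compared point-for-point against `truth`, which is exactly the kind of intermediate validation the sandbox enables at scale.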

No Escape: The Weaponization of Gender for the Purposes of Digital Transnational Repression


Report by Citizen Lab: “…we examine the rising trend of gender-based digital transnational repression (DTR), which specifically targets women human rights defenders in exile or in the diaspora, using gender-specific digital tactics aimed at silencing and disabling their voices. Our research draws on the lived experiences of 85 women human rights defenders, originating from 24 home countries and residing in 23 host countries, to help us understand how gender and sexuality play a central role in digital transnational repression…(More)”.

AI, huge hacks leave consumers facing a perfect storm of privacy perils


Article by Joseph Menn: “Hackers are using artificial intelligence to mine unprecedented troves of personal information dumped online in the past year, along with unregulated commercial databases, to trick American consumers and even sophisticated professionals into giving up control of bank and corporate accounts.

Armed with sensitive health information, calling records, and hundreds of millions of Social Security numbers, criminals and operatives of countries hostile to the United States are crafting emails, voice calls and texts that purport to come from government officials, co-workers or relatives needing help, or familiar financial organizations trying to protect accounts instead of draining them.

“There is so much data out there that can be used for phishing and password resets that it has reduced overall security for everyone, and artificial intelligence has made it much easier to weaponize,” said Ashkan Soltani, executive director of the California Privacy Protection Agency, the only such state-level agency.

The losses reported to the FBI’s Internet Crime Complaint Center nearly tripled from 2020 to 2023, to $12.5 billion, and a number of sensitive breaches this year have only increased internet insecurity. The recently discovered Chinese government hacks of U.S. telecommunications companies AT&T, Verizon and others, for instance, were deemed so serious that government officials are being told not to discuss sensitive matters on the phone, some of those officials said in interviews. A Russian ransomware gang’s breach of Change Healthcare in February captured data on millions of Americans’ medical conditions and treatments, and in August, a small data broker, National Public Data, acknowledged that it had lost control of hundreds of millions of Social Security numbers and addresses now being sold by hackers.

Meanwhile, the capabilities of artificial intelligence are expanding at breakneck speed. “The risks of a growing surveillance industry are only heightened by AI and other forms of predictive decision-making, which are fueled by the vast datasets that data brokers compile,” U.S. Consumer Financial Protection Bureau Director Rohit Chopra said in September…(More)”.

Generative Agent Simulations of 1,000 People


Paper by Joon Sung Park: “The promise of human behavioral simulation (general-purpose computational agents that replicate human behavior across domains) could enable broad applications in policymaking and social science. We present a novel agent architecture that simulates the attitudes and behaviors of 1,052 real individuals, applying large language models to qualitative interviews about their lives, then measuring how well these agents replicate the attitudes and behaviors of the individuals that they represent. The generative agents replicate participants’ responses on the General Social Survey 85% as accurately as participants replicate their own answers two weeks later, and perform comparably in predicting personality traits and outcomes in experimental replications. Our architecture reduces accuracy biases across racial and ideological groups compared to agents given demographic descriptions. This work provides a foundation for new tools that can help investigate individual and collective behavior…(More)”.

Why ‘open’ AI systems are actually closed, and why this matters


Paper by David Gray Widder, Meredith Whittaker & Sarah Myers West: “This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector…(More)”.