Paper by Stefaan G. Verhulst: “We live in an era of datafication, one in which life is increasingly quantified and transformed into intelligence for private or public benefit. When used responsibly, datafication offers new opportunities for public good. However, three key forms of asymmetry currently limit this potential, especially for already vulnerable and marginalized groups: data asymmetries, information asymmetries, and agency asymmetries. These asymmetries limit human potential, in both a practical and a psychological sense, leading to feelings of disempowerment and eroding public trust in technology. Existing methods of limiting asymmetries (e.g., consent), as well as some alternatives under consideration (data ownership, collective ownership, personal information management systems), fall short of adequately addressing the challenges at hand. A new principle and practice of digital self-determination (DSD) is therefore required.
DSD is based on existing concepts of self-determination, as articulated in sources as varied as Kantian philosophy and the 1966 International Covenant on Economic, Social and Cultural Rights. Updated for the digital age, DSD has several key characteristics: it has both an individual and a collective dimension; it is designed especially to benefit vulnerable and marginalized groups; and it is context-specific (yet also enforceable). Operationalizing DSD in this context (and others) so as to maximize the potential of data while limiting its harms requires a number of steps. In particular, a responsible operationalization of DSD would consider four key prongs or categories of action: processes, people and organizations, policies, and products and technologies…(More)”.
Reconceptualizing Democratic Innovation
Paper by Cristina Flesher Fominaya: “Democratic innovation is one way the multiple crises of democracy can be addressed. The literature on democratic innovation has yet to adequately interrogate the role of social movements in innovation, and more specifically the movement of democratic imaginaries; nor has it considered the specific mechanisms through which movements translate democratic imaginaries and practices into innovation. This article provides a preliminary roadmap for methodological and conceptual innovation in our understanding of the role of social movements in democratic innovation. It introduces the concept of democratic innovation repertoires and argues that: a) we need to broaden our conceptualization and analysis of democratic innovation to encompass the role of social movements; and b) we need to understand how the relationship between democratic movement imaginaries and the praxis that movements develop in their quest to “save” or strengthen democracy can shape democratic innovation beyond movement arenas after mobilizing “events” have passed…(More)”.
Critical Ignoring as a Core Competence for Digital Citizens
Paper by Anastasia Kozyreva, et al: “Low-quality and misleading information online can hijack people’s attention, often by evoking curiosity, outrage, or anger. Resisting certain types of information and actors online requires people to adopt new mental habits that help them avoid being tempted by attention-grabbing and potentially harmful content. We argue that digital information literacy must include the competence of critical ignoring—choosing what to ignore and where to invest one’s limited attentional capacities. We review three types of cognitive strategies for implementing critical ignoring: self-nudging, in which one ignores temptations by removing them from one’s digital environments; lateral reading, in which one vets information by leaving the source and verifying its credibility elsewhere online; and the do-not-feed-the-trolls heuristic, which advises one not to reward malicious actors with attention. We argue that these strategies for implementing critical ignoring should be part of school curricula on digital information literacy. Teaching the competence of critical ignoring requires a paradigm shift in educators’ thinking, from a sole focus on the power and promise of paying close attention to an additional emphasis on the power of ignoring. Encouraging students and other online users to embrace critical ignoring can empower them to shield themselves from the excesses, traps, and information disorders of today’s attention economy…(More)”.
The Ethics of Automated Warfare and Artificial Intelligence
Essay series introduced by Bessma Momani, Aaron Shull and Jean-François Bélanger: “…begins with a piece written by Alex Wilner titled “AI and the Future of Deterrence: Promises and Pitfalls.” Wilner looks at the issue of deterrence and provides an account of the various ways AI may impact our understanding and framing of deterrence theory and its practice in the coming decades. He discusses how different countries have expressed diverging views over the degree of AI autonomy that should be permitted in a conflict situation — as those more willing to cut humans out of the decision-making loop could gain a strategic advantage. Wilner’s essay emphasizes that differences in states’ technological capability are large, and this will hinder interoperability among allies, while diverging views on regulation and ethical standards make global governance efforts even more challenging.
Looking to the future of non-state drone use as an example, the transfer of weapons technology from nation-states to non-state actors can help us understand how next-generation technologies may also slip into the hands of unsavoury characters such as terrorists, criminal gangs or militant groups. The effectiveness of Ukrainian drone strikes against the much larger Russian army should serve as a warning to Western militaries, suggests James Rogers in his essay “The Third Drone Age: Visions Out to 2040.” This is a technology that can level the field by asymmetrically advantaging conventionally weaker forces. The increased diffusion of drone technology makes it more likely that future wars will also be drone wars, whether the drones involved are autonomous systems or not. In the hands of non-state actors, this technology implies that future Western missions against, say, insurgent or guerilla forces will be more difficult.
Data is the fuel that powers AI and the broader digital transformation of war. In her essay “Civilian Data in Cyber Conflict: Legal and Geostrategic Considerations,” Eleonore Pauwels discusses how offensive cyber operations aim to alter the very data sets of other actors in order to undermine adversaries — whether by targeting centralized biometric facilities or individuals’ DNA sequences in genomic analysis databases, or by injecting fallacious data into satellite imagery used for situational awareness. Drawing on the implications of international humanitarian law (IHL), Pauwels argues that adversarial data manipulation constitutes another form of “grey zone” operation that falls below the threshold of armed conflict. She evaluates the challenges associated with adversarial data manipulation, given that there is no internationally agreed-upon definition of what constitutes cyberattacks or cyber hostilities within IHL.
In “AI and the Actual International Humanitarian Law Accountability Gap,” Rebecca Crootof argues that technologies can complicate legal analysis by introducing geographic, temporal and agency distance between a human’s decision and its effects. This makes it more difficult to hold an individual or state accountable for unlawful harmful acts. But beyond this added complexity surrounding legal accountability, novel military technologies are bringing an existing accountability gap in IHL into sharper focus: the relative lack of legal accountability for unintended civilian harm. These unintentional acts can be catastrophic yet remain technically within the confines of international law, which highlights the need for new accountability mechanisms to better protect civilians.
Some assert that the deployment of autonomous weapon systems can strengthen compliance with IHL by limiting the kinetic devastation of collateral damage, but AI’s fragility and apparent capacity to behave in unexpected ways pose new risks. In “Autonomous Weapons: The False Promise of Civilian Protection,” Branka Marijan opines that AI will likely not surpass human judgment for many decades, if ever, and argues that regulations must mandate a certain level of human control over weapon systems. The export of weapon systems to states willing to deploy them on a looser chain-of-command leash should be monitored…(More)”.
Govtech against corruption: What are the integrity dividends of government digitalization?
Paper by Carlos Santiso: “Does digitalization reduce corruption? What are the integrity benefits of government digitalization? While the correlation between digitalization and corruption is well established, there is less actionable evidence on the integrity dividends of specific digitalization reforms for different types of corruption and the policy channels through which they operate. These linkages are especially relevant in high corruption-risk environments. This article unbundles the integrity dividends of digital reforms undertaken by governments around the world and accelerated by the pandemic. It analyzes the rise of data-driven integrity analytics as promising tools in the anticorruption space, deployed by tech-savvy integrity actors. It also assesses the broader integrity benefits of the digitalization of government services and the automation of bureaucratic processes, which contribute to reducing bribe-solicitation risks by front-office bureaucrats. It analyzes in particular the impact of digitalization on social transfers. It argues that government digitalization can be an implicit yet effective anticorruption strategy, with subtler yet deeper effects, but that greater synergies are needed between digital reforms and anticorruption strategies….(More)”.
OECD Good Practice Principles for Public Service Design and Delivery in the Digital Age
OECD Report: “The digital age provides great opportunities to transform how public services are designed and delivered. The OECD Good Practice Principles for Service Design and Delivery in the Digital Age provide a clear, actionable and comprehensive set of objectives for the high-quality digital transformation of public services. Reflecting insights gathered from across OECD member countries, the nine principles are arranged under three pillars: “Build accessible, ethical and equitable public services that prioritise user needs, rather than government needs”; “Deliver with impact, at scale and with pace”; and “Be accountable and transparent in the design and delivery of public services to reinforce and strengthen public trust”. The principles are advisory rather than prescriptive, allowing for local interpretation and implementation. They should also be considered in conjunction with wider OECD work to equip governments to harness the potential of digital technology and data to improve outcomes for all…(More)”.
People watching: Abstractions and orthodoxies of monitoring
Paper by Victoria Wang and John V. Tucker: “Our society has an insatiable appetite for data. Much of the data is collected to monitor the activities of people — for example, discovering the purchasing behaviour of customers, observing the users of apps, managing the performance of personnel, and conforming to regulations and laws. Although monitoring practices are ubiquitous, monitoring as a general concept has received little analytical attention. We explore: (i) the nature of monitoring facilitated by software; (ii) the structure of monitoring processes; and (iii) the classification of monitoring systems. We propose an abstract definition of monitoring as a theoretical tool to analyse, document, and compare disparate monitoring applications. For us, monitoring is simply the systematic collection of data about the behaviour of people and objects. We then extend this concept with mechanisms for detecting events that require interventions and changes in behaviour, and describe five types of monitoring…(More)”.
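To make the paper’s abstract definition concrete, here is a minimal sketch of monitoring as systematic data collection plus event detection that triggers an intervention (the class and method names are illustrative assumptions, not the authors’ formalism):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Monitor:
    """Minimal monitoring abstraction: systematically collect data about
    behaviour, and flag events that require an intervention."""
    detect: Callable[[dict], bool]       # event predicate over one observation
    intervene: Callable[[dict], None]    # action taken when an event is detected
    log: list = field(default_factory=list)

    def observe(self, observation: dict) -> None:
        self.log.append(observation)     # systematic collection of behavioural data
        if self.detect(observation):     # detection of events of interest
            self.intervene(observation)  # mechanism for prompting a change in behaviour

# Example: flag unusually large purchases for review.
monitor = Monitor(
    detect=lambda obs: obs.get("amount", 0) > 1000,
    intervene=lambda obs: print("Review required:", obs),
)
monitor.observe({"user": "alice", "amount": 1500})
```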
How many yottabytes in a quettabyte? Extreme numbers get new names
Article by Elizabeth Gibney: “By the 2030s, the world will generate around a yottabyte of data per year — that’s 10^24 bytes, or the amount that would fit on DVDs stacked all the way to Mars. Now, the booming growth of the data sphere has prompted the governors of the metric system to agree on new prefixes beyond that magnitude, to describe the outrageously big and small.
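As a quick back-of-envelope check of that claim, the sketch below computes the height of the DVD stack (all figures in it, such as disc capacity, disc thickness and the Earth-Mars distance, are assumptions for illustration, not numbers from the article):

```python
# Back-of-envelope check: does a yottabyte of data, burned to DVDs,
# stack all the way to Mars? All figures below are assumptions.
YOTTABYTE = 10**24        # bytes
DVD_CAPACITY = 4.7e9      # bytes per single-layer DVD (assumed)
DVD_THICKNESS = 1.2e-3    # metres per disc (assumed)
EARTH_MARS_M = 2.25e11    # metres, rough average separation (assumed; it varies widely)

discs = YOTTABYTE / DVD_CAPACITY
stack_m = discs * DVD_THICKNESS
print(f"{discs:.2e} discs -> stack of {stack_m / 1e9:.0f} million km")
print("Reaches Mars at the average separation?", stack_m >= EARTH_MARS_M)
```

Under these assumptions the stack comes out at roughly 255 million kilometres, comfortably beyond the average Earth-Mars separation.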
Representatives from governments worldwide, meeting at the General Conference on Weights and Measures (CGPM) outside Paris on 18 November, voted to introduce four new prefixes to the International System of Units (SI) with immediate effect. The prefixes ronna and quetta represent 10^27 and 10^30, and ronto and quecto signify 10^−27 and 10^−30. Earth weighs around one ronnagram, and an electron’s mass is about one quectogram.
This is the first update to the prefix system since 1991, when the organization added zetta (10^21), zepto (10^−21), yotta (10^24) and yocto (10^−24). In that case, metrologists were adapting to fit the needs of chemists, who wanted a way to express SI units on the scale of Avogadro’s number — the 6 × 10^23 units in a mole, a measure of the quantity of substances. The more familiar prefixes peta and exa were added in 1975 (see ‘Extreme figures’).
Extreme figures
Advances in scientific fields have led to increasing need for prefixes to describe very large and very small numbers.
| Factor | Name | Symbol | Adopted |
| --- | --- | --- | --- |
| 10^30 | quetta | Q | 2022 |
| 10^27 | ronna | R | 2022 |
| 10^24 | yotta | Y | 1991 |
| 10^21 | zetta | Z | 1991 |
| 10^18 | exa | E | 1975 |
| 10^15 | peta | P | 1975 |
| 10^−15 | femto | f | 1964 |
| 10^−18 | atto | a | 1964 |
| 10^−21 | zepto | z | 1991 |
| 10^−24 | yocto | y | 1991 |
| 10^−27 | ronto | r | 2022 |
| 10^−30 | quecto | q | 2022 |
Prefixes are agreed at the General Conference on Weights and Measures.
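Given the table above, converting between prefixes is just an exponent lookup. The short sketch below is illustrative only (the dictionary and function names are assumptions, not anything defined by the SI):

```python
# SI prefixes for large and small factors, per the table above (exponent -> name).
PREFIXES = {
    30: "quetta", 27: "ronna", 24: "yotta", 21: "zetta", 18: "exa", 15: "peta",
    -15: "femto", -18: "atto", -21: "zepto", -24: "yocto", -27: "ronto", -30: "quecto",
}
EXPONENTS = {name: exp for exp, name in PREFIXES.items()}

def convert(value: float, from_prefix: str, to_prefix: str) -> float:
    """Re-express a quantity given with one SI prefix in terms of another."""
    return value * 10 ** (EXPONENTS[from_prefix] - EXPONENTS[to_prefix])

# 1,000 yottabytes now has an official name: 1 ronnabyte.
print(convert(1000, "yotta", "ronna"))  # -> 1.0
```

The example at the end anticipates the point below: with ronna adopted, informal coinages for 10^27 are no longer needed.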
Today, the driver is data science, says Richard Brown, a metrologist at the UK National Physical Laboratory in Teddington. He has been working on plans to introduce the latest prefixes for five years, and presented the proposal to the CGPM on 17 November. With the annual volume of data generated globally having already hit zettabytes, informal suggestions for 10^27 — including ‘hella’ and ‘bronto’ — were starting to take hold, he says. Google’s unit converter, for example, already tells users that 1,000 yottabytes is 1 hellabyte, and at least one UK government website quotes brontobyte as the correct term….(More)”
AI Localism in Practice: Examining How Cities Govern AI
Report by Sara Marcucci, Uma Kalkar, and Stefaan Verhulst: “…serves as a primer for policymakers and practitioners to learn about current governance practices and inspire their own work in the field. In this report, we present the fundamentals of AI governance, the value proposition of such initiatives, and their application in cities worldwide to identify themes among city- and state-led governance actions. We close with ten lessons on AI localism for policymakers, data and AI experts, and the informed public to keep in mind as cities grow increasingly ‘smart’, which include:

- Principles provide a North Star for governance;
- Public engagement provides a social license;
- AI literacy enables meaningful engagement;
- Tap into local expertise;
- Innovate in how transparency is provided;
- Establish new means for accountability and oversight;
- Signal boundaries through binding laws and policies;
- Use procurement to shape responsible AI markets;
- Establish data collaboratives to tackle asymmetries; and
- Make good governance strategic.
Taking these governance practices, local AI governance examples, and the ten overarching lessons together, we sketch an incipient framework for implementing and assessing AI localism initiatives in cities around the world….(More)”
Measuring the environmental impacts of artificial intelligence compute and applications
OECD Paper: “Artificial intelligence (AI) systems can use massive computational resources, raising sustainability concerns. This report aims to improve understanding of the environmental impacts of AI, and help measure and decrease AI’s negative effects while enabling it to accelerate action for the good of the planet. It distinguishes between the direct environmental impacts of developing, using and disposing of AI systems and related equipment, and the indirect costs and benefits of using AI applications. It recommends the establishment of measurement standards, expanding data collection, identifying AI-specific impacts, looking beyond operational energy use and emissions, and improving transparency and equity to help policy makers make AI part of the solution to sustainability challenges…(More)”.
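As one concrete illustration of the “operational energy use and emissions” side of such measurement, a common first-order estimate multiplies hardware power by training time and data-centre overhead, then by grid carbon intensity. The sketch below is illustrative only; every figure in it is an assumption, not a number from the report:

```python
# First-order estimate of operational training emissions (illustrative only).
# energy (kWh) = accelerators x avg power (kW) x hours x PUE overhead
# emissions (kg CO2e) = energy x grid carbon intensity (kg CO2e per kWh)
num_accelerators = 64      # assumed accelerator count
avg_power_kw = 0.3         # assumed average draw per accelerator, kW
training_hours = 500       # assumed wall-clock training time
pue = 1.5                  # assumed data-centre power usage effectiveness
grid_intensity = 0.4       # assumed kg CO2e per kWh

energy_kwh = num_accelerators * avg_power_kw * training_hours * pue
emissions_kg = energy_kwh * grid_intensity
print(f"{energy_kwh:,.0f} kWh -> {emissions_kg:,.0f} kg CO2e")
```

Such operational estimates are only a starting point; as the report stresses, impacts from developing, using and disposing of AI systems and related equipment also need to be measured.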