Rebooting the global consensus: Norm entrepreneurship, data governance and the inalienability of digital bodies


Paper by Siddharth Peter de Souza and Linnet Taylor: “The establishment of norms among states is a common way of governing international actions. This article analyses the potential of norm-building for governing data and artificial intelligence technologies’ collective effects. Rather than focusing on state actors’ ability to establish and enforce norms, however, we identify a contrasting process taking place among civil society organisations in response to the international neoliberal consensus on the commodification of data. The norm we identify – ‘nothing about us without us’ – asserts civil society’s agency, and specifically the right of those represented in datasets to give or refuse permission through structures of democratic representation. We argue that this represents a form of norm-building that should be taken as seriously as that of states, and analyse how it is constructing the political power, relations, and resources to engage in governing technology at scale. We first outline how this counter-norming is anchored in data’s connections to bodies, land, community, and labour. We explore the history of formal international norm-making and the current norm-making work being done by civil society organisations internationally, and argue that these, although very different in their configurations and strategies, are comparable in scale and scope. Based on this, we make two assertions: first, that a norm-making lens is a useful way for both civil society and research to frame challenges to the primacy of market logics in law and governance, and second, that the conceptual exclusion of civil society actors as norm-makers is an obstacle to the recognition of counter-power in those spheres…(More)”.

Technical Tiers: A New Classification Framework for Global AI Workforce Analysis


Report by Siddhi Pal, Catherine Schneider and Ruggero Marino Lazzaroni: “… introduces a novel three-tiered classification system for global AI talent that addresses significant methodological limitations in existing workforce analyses. By distinguishing between non-technical roles (Category 0), technical software development (Category 1), and advanced deep learning specialization (Category 2), our framework enables precise examination of AI workforce dynamics at a pivotal moment in global AI policy.
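The tiering logic described above can be sketched as a simple keyword-based classifier. This is an illustrative assumption, not the report's actual methodology: the skill keywords and the precedence rule (deepest matching tier wins) are hypothetical stand-ins for whatever taxonomy the authors used.

```python
# Hypothetical sketch of the three-tier AI talent classification.
# The skill keywords below are illustrative assumptions, not the
# report's actual taxonomy.

DEEP_LEARNING_SKILLS = {"pytorch", "tensorflow", "transformers",
                        "reinforcement learning"}
SOFTWARE_SKILLS = {"python", "java", "sql", "software engineering"}

def classify(skills):
    """Return the tier (0, 1, or 2) for a worker's listed skills."""
    s = {skill.lower() for skill in skills}
    if s & DEEP_LEARNING_SKILLS:
        return 2  # advanced deep learning specialization
    if s & SOFTWARE_SKILLS:
        return 1  # technical software development
    return 0      # non-technical role in the AI talent pool
```

The key design point the framework implies is that the tiers are nested by depth of technical skill, so a profile is assigned to the most specialized tier it matches.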

Through our analysis of a sample of 1.6 million individuals in the AI talent pool across 31 countries, we’ve uncovered clear patterns in technical talent distribution that significantly impact Europe’s AI ambitions. Asian nations hold an advantage in specialized AI expertise, with South Korea (27%), Israel (23%), and Japan (20%) maintaining the highest proportions of Category 2 talent. Within Europe, Poland and Germany stand out as leaders in specialized AI talent. This may be connected to their initiatives to attract tech companies and investments in elite research institutions, though further research is needed to confirm these relationships.

Our data also reveals a shifting landscape of global talent flows. Research shows that countries employing points-based immigration systems attract 1.5 times more high-skilled migrants than those using demand-led approaches. This finding takes on new significance in light of recent geopolitical developments affecting scientific research globally. As restrictive policies and funding cuts create uncertainty for researchers in the United States, a major destination for European AI talent, the way nations position their regulatory environments, scientific freedoms, and research infrastructure will increasingly determine their ability to attract and retain specialized AI talent.

The gender analysis in our study illuminates another dimension of competitive advantage. Contrary to the overall AI talent pool, EU countries lead in female representation in highly technical roles (Category 2), occupying seven of the top ten global rankings. Finland, Czechia, and Italy have the highest proportion of female representation in Category 2 roles globally (39%, 31%, and 28%, respectively). This gender diversity represents not merely a social achievement but a potential strategic asset in AI innovation, particularly as global coalitions increasingly emphasize the importance of diverse perspectives in AI development…(More)”

Integrating Data Governance and Mental Health Equity: Insights from ‘Towards a Set of Universal Data Principles’


Article by Cindy Hansen: “This recent scholarly work, “Towards a Set of Universal Data Principles” by Steve MacFeely et al. (2025), delves comprehensively into the expansive landscape of data management and governance. It is worth acknowledging the intricate processes through which humans collect, manage, and disseminate vast quantities of data. …To truly democratize digital mental healthcare, it’s crucial to empower individuals in their data journey. By focusing on Digital Self-Determination, people can participate in a transformative shift where control over personal data becomes a fundamental right, aligning with the proposed universal data principles. One can envision a world where mental health data, collected and used responsibly, contributes not only to personal well-being but also to the greater public good, echoing the need for data governance to serve society at large.

This concept of digital self-determination empowers individuals by ensuring they have the autonomy to decide who accesses their mental health data and how it’s utilized. Such empowerment is especially significant in the context of mental health, where data sensitivity is high, and privacy is paramount. Giving people the confidence to manage their data fosters trust and encourages them to engage more openly with digital health services, promoting a culture of trust which is a core element of the proposed data governance frameworks.

Holistic Research Canada’s Outcome Monitoring System honors this ethos, allowing individuals to control how their data is accessed, shared, and used while maintaining engagement with healthcare providers. With this system, people can actively participate in their mental health decisions, supported by data that offers transparency about their progress and prognoses, which is crucial in realizing the potential of data to serve both individual and broader societal interests.

Furthermore, this tool provides actionable insights into mental health journeys, promoting evidence-based practices, enhancing transparency, and ensuring that individuals’ rights are safeguarded throughout. These principles are vital to transforming individuals from passive subjects into active stewards of their data, consistent with the proposed principles of safeguarding data quality, integrity, and security…(More)”.

In Uncertain Times, Get Curious


Chapter (and book) by Elizabeth Weingarten: “Questions flow from curiosity. If we want to live and love the questions of our lives—How to live a life of purpose? Who am I in the aftermath of a big change or transition? What kind of person do I want to become as I grow older?—we must first ask them into conscious existence.

Many people have written entire books defining and redefining curiosity. But for me, the most helpful definition comes from a philosophy professor, Perry Zurn, and a systems neuroscientist, Dani Bassett: “For too long—and still too often—curiosity has been oversimplified,” they write, typically “reduced to the simple act of raising a hand or voicing a question, especially from behind a desk or a podium. . . . Scholars generally boil it down to ‘information-seeking’ behavior or a ‘desire to know.’ But curiosity is more than a feeling and certainly more than an act. And curiosity is always more than a single move or a single question.” Curiosity works, they write, by “linking ideas, facts, perceptions, sensations and data points together.” It is complex, mutating, unpredictable, and transformational. It is, fundamentally, an act of connection, an act of creating relationships between ideas and people. Asking questions then, becoming curious, is not just about wanting to find the answer—it is also about our need to connect, with ourselves, with others, with the world.

And this, perhaps, is why our deeper questions are hardly ever satisfied by Google or by fast, easy answers from the people I refer to as the Charlatans of Certainty—the gurus, influencers, and “experts” peddling simple solutions to all the complex problems you face. This is also the reason there is no one-size-fits-all formula for cultivating curiosity—particularly the kind that allows us to live and love our questions, especially the questions that are hard to love, like “How can I live with chronic pain?” or “How do I extricate myself from a challenging relationship?” This kind of curiosity is a special flavor…(More)”. See also: Inquiry as Infrastructure: Defining Good Questions in the Age of Data and AI.

How to save a billion dollars


Essay by Ann Lewis: “The persistent pattern of billion-dollar technology modernization failures in government stems not from a lack of good intentions, but from structural misalignments in incentives, expertise, and decision-making authority. When large budgets meet urgency, limited in-house technical capacity, and rigid, compliance-driven procurement processes, projects become over-scoped and detached from the needs of users and mission outcomes. This undermines service delivery, wastes taxpayer dollars, and adds unnecessary risk to critical systems supporting national security and public safety.

We know what causes failure, we know what works, and we’ve proven it before. It isn’t easy and shortcuts don’t work — but success is entirely achievable, and that should be the expectation. The solution is not simply to spend more, or cancel contracts, or fire people, but to fundamentally rethink how public institutions build and manage technology, and rethink how public-private partnerships are structured. Government services underpinned by technology should be funded as ongoing capabilities rather than one-time investments, IT procurement processes should embed experienced technical leadership where key decisions are made, and all implementation projects should adopt iterative, outcomes-driven approaches. 

Proven examples—from VA.gov to SSA’s recent CCaaS success—show that when governments align incentives, prioritize real user needs, and invest in internal capacity, they can build services faster, for less money, and with dramatically better results…(More)”.

Mapping local knowledge supports science and stewardship


Paper by Sarah C. Risley, Melissa L. Britsch, Joshua S. Stoll & Heather M. Leslie: “Coastal marine social–ecological systems are experiencing rapid change. Yet, many coastal communities are challenged by incomplete data to inform collaborative research and stewardship. We investigated the role of participatory mapping of local knowledge in addressing these challenges. We used participatory mapping and semi-structured interviews to document local knowledge in two focal social–ecological systems in Maine, USA. By co-producing fine-scale characterizations of coastal marine social–ecological systems, highlighting local questions and needs, and generating locally relevant hypotheses on system change, our research demonstrates how participatory mapping and local knowledge can enhance decision-making capacity in collaborative research and stewardship. The results of this study directly informed a collaborative research project to document changes in multiple shellfish species, shellfish predators, and shellfish harvester behavior and other human activities. This research demonstrates that local knowledge can be a keystone component of collaborative social–ecological systems research and community-led environmental stewardship…(More)”.

Make privacy policies longer and appoint LLM readers


Paper by Przemysław Pałka et al: “In a world of human-only readers, a trade-off persists between comprehensiveness and comprehensibility: only privacy policies too long to be humanly readable can precisely describe the intended data processing. We argue that this trade-off no longer exists where LLMs are able to extract tailored information from clearly drafted, fully comprehensive privacy policies. To substantiate this claim, we provide a methodology for drafting comprehensive, non-ambiguous privacy policies and for querying them using LLM prompts. Our methodology is tested with an experiment aimed at determining to what extent GPT-4 and Llama2 are able to answer questions regarding the content of privacy policies designed in the format we propose. We further support this claim by analyzing real privacy policies in the chosen market sectors through two experiments (one with legal experts, and another by using LLMs). Based on the success of our experiments, we submit that data protection law should change: it must require controllers to provide clearly drafted, fully comprehensive privacy policies from which data subjects and other actors can extract the needed information, with the help of LLMs…(More)”.
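The querying step the authors describe can be sketched as a prompt-construction function that pairs a full policy text with a data subject's question. The prompt wording below is an illustrative assumption, not the paper's actual template, and the function stops short of calling any particular LLM API.

```python
# Minimal sketch of querying a comprehensive privacy policy via an LLM.
# The prompt template is a hypothetical illustration, not the paper's
# exact methodology; the resulting string would be sent to a model such
# as GPT-4 or Llama2.

def build_policy_query(policy_text: str, question: str) -> str:
    """Build an LLM prompt that answers a question from the policy alone."""
    return (
        "You are given a privacy policy. Answer the question using only "
        "the policy text, and cite the relevant clause.\n\n"
        f"POLICY:\n{policy_text}\n\n"
        f"QUESTION: {question}"
    )
```

Constraining the model to the supplied policy text is the design choice that matters here: the paper's claim is that a fully comprehensive, clearly drafted policy contains the answer, so the LLM's job is extraction, not general knowledge.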

Mini-Publics and Party Ideology: Who Commissioned the Deliberative Wave in Europe?


Paper by Rodrigo Ramis-Moyano et al: “The increasing implementation of deliberative mini-publics (DMPs) such as Citizens’ Assemblies and Citizens’ Juries led the OECD to identify a ‘deliberative wave’. The burgeoning scholarship on DMPs has increased understanding of how they operate and their impact, but less attention has been paid to the drivers behind this diffusion. Existing research on democratic innovations has underlined the role of the governing party’s ideology as a relevant variable in the study of the adoption of other procedures such as participatory budgeting, placing left-wing parties as a prominent actor in this process. Unlike this previous literature, we have little understanding of whether mini-publics appeal equally across the ideological spectrum. This paper draws on the large-N OECD database to analyse the impact of governing party affiliation on the commissioning of DMPs in Europe across the last four decades. Our analysis finds the ideological pattern of adoption is less clear-cut compared to other democratic innovations such as participatory budgeting. But stronger ideological differentiation emerges when we pay close attention to the design features of the DMPs implemented…(More)”.

The Weaponization of Expertise


Book by Jacob Hale Russell and Dennis Patterson: “Experts are not infallible. Treating them as such has done us all a grave disservice and, as The Weaponization of Expertise makes painfully clear, given rise to the very populism that all-knowing experts and their elite coterie decry. Jacob Hale Russell and Dennis Patterson use the devastating example of the COVID-19 pandemic to illustrate their case, revealing how the hubris of all-too-human experts undermined—perhaps irreparably—public faith in elite policymaking. Paradoxically, by turning science into dogmatism, the overweening elite response has also proved deeply corrosive to expertise itself—in effect, doing exactly what elite policymakers accuse their critics of doing.

A much-needed corrective to a dangerous blind faith in expertise, The Weaponization of Expertise identifies a cluster of pathologies that have enveloped many institutions meant to help referee expert knowledge, in particular a disavowal of the doubt, uncertainty, and counterarguments that are crucial to the accumulation of knowledge. At a time when trust in expertise and faith in institutions are most needed and most lacking, this work issues a stark reminder that a crisis of misinformation may well begin at the top…(More)”.

Artificial Intelligence: Generative AI’s Environmental and Human Effects


GAO Report: “Generative artificial intelligence (AI) could revolutionize entire industries. In the nearer term, it may dramatically increase productivity and transform daily tasks in many sectors. However, both its benefits and risks, including its environmental and human effects, are unknown or unclear.

Generative AI uses significant energy and water resources, but companies are generally not reporting details of these uses. Most estimates of environmental effects of generative AI technologies have focused on quantifying the energy consumed, and carbon emissions associated with generating that energy, required to train the generative AI model. Estimates of water consumption by generative AI are limited. Generative AI is expected to be a driving force for data center demand, but what portion of data center electricity consumption is related to generative AI is unclear. According to the International Energy Agency, U.S. data center electricity consumption was approximately 4 percent of U.S. electricity demand in 2022 and could be 6 percent of demand in 2026.
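The IEA shares quoted above can be turned into rough absolute figures. The total-demand number below is an assumed round figure of about 4,000 TWh/year for the US, used only to show the scale the percentages imply; it is not from the GAO report.

```python
# Back-of-envelope illustration of the IEA data center shares quoted
# above. US_DEMAND_TWH is an assumed round number (~4,000 TWh/year of
# US electricity demand), not a figure from the GAO report.

US_DEMAND_TWH = 4000

def datacenter_twh(share_pct: float) -> float:
    """Convert a data-center share of demand (percent) into TWh/year."""
    return US_DEMAND_TWH * share_pct / 100

twh_2022 = datacenter_twh(4)  # ~160 TWh at the 2022 share
twh_2026 = datacenter_twh(6)  # ~240 TWh at the projected 2026 share
```

Under that assumption, moving from a 4 percent to a 6 percent share adds on the order of 80 TWh/year, which is why the report flags generative AI as a driving force for data center demand even though its exact portion of that consumption is unclear.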

While generative AI may bring beneficial effects for people, GAO highlights five risks and challenges that could result in negative human effects on society, culture, and people from generative AI (see figure). For example, unsafe systems may produce outputs that compromise safety, such as inaccurate information, undesirable content, or the enabling of malicious behavior. However, definitive statements about these risks and challenges are difficult to make because generative AI is rapidly evolving, and private developers do not disclose some key technical information.

Selected generative artificial intelligence risks and challenges that could result in human effects

GAO identified policy options to consider that could enhance the benefits or address the challenges of environmental and human effects of generative AI. These policy options identify possible actions by policymakers, which include Congress, federal agencies, state and local governments, academic and research institutions, and industry. In addition, policymakers could choose to maintain the status quo, whereby they would not take additional action beyond current efforts. See below for details on the policy options…(More)”.