Technical Tiers: A New Classification Framework for Global AI Workforce Analysis


Report by Siddhi Pal, Catherine Schneider and Ruggero Marino Lazzaroni: “… introduces a novel three-tiered classification system for global AI talent that addresses significant methodological limitations in existing workforce analyses. By distinguishing between non-technical roles (Category 0), technical software development (Category 1), and advanced deep learning specialization (Category 2), our framework enables precise examination of AI workforce dynamics at a pivotal moment in global AI policy.

Through our analysis of a sample of 1.6 million individuals in the AI talent pool across 31 countries, we’ve uncovered clear patterns in technical talent distribution that significantly impact Europe’s AI ambitions. Asian nations hold an advantage in specialized AI expertise, with South Korea (27%), Israel (23%), and Japan (20%) maintaining the highest proportions of Category 2 talent. Within Europe, Poland and Germany stand out as leaders in specialized AI talent. This may be connected to their initiatives to attract tech companies and investments in elite research institutions, though further research is needed to confirm these relationships.
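The three-tier taxonomy lends itself to a simple rule-based sketch. The keyword sets, rules, and function below are illustrative assumptions for exposition only, not the report's actual coding methodology:

```python
# Hypothetical sketch of a three-tier AI-talent classifier.
# The skill keyword sets are invented for illustration and are NOT
# the authors' actual coding scheme.

DEEP_LEARNING_SKILLS = {"pytorch", "tensorflow", "transformers", "deep learning"}
SOFTWARE_SKILLS = {"python", "java", "sql", "git"}

def classify_profile(skills):
    """Return 0 (non-technical), 1 (software dev), or 2 (DL specialist)."""
    normalized = {s.lower() for s in skills}
    if normalized & DEEP_LEARNING_SKILLS:
        return 2  # Category 2: advanced deep learning specialization
    if normalized & SOFTWARE_SKILLS:
        return 1  # Category 1: technical software development
    return 0      # Category 0: non-technical AI-adjacent role

print(classify_profile(["PyTorch", "Python"]))  # → 2
print(classify_profile(["SQL"]))                # → 1
print(classify_profile(["Marketing"]))          # → 0
```

The ordering matters: a profile listing both deep learning and general software skills resolves to the highest tier it qualifies for, which is what makes Category 2 shares (such as South Korea's 27%) comparable across countries.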

Our data also reveals a shifting landscape of global talent flows. Research shows that countries employing points-based immigration systems attract 1.5 times more high-skilled migrants than those using demand-led approaches. This finding takes on new significance in light of recent geopolitical developments affecting scientific research globally. As restrictive policies and funding cuts create uncertainty for researchers in the United States, one of the principal destinations for European AI talent, the way nations position their regulatory environments, scientific freedoms, and research infrastructure will increasingly determine their ability to attract and retain specialized AI talent.
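The distinction between points-based and demand-led admission can be sketched minimally. The criteria, weights, and threshold below are invented for illustration; real systems (Canada's Express Entry, for example) are considerably more elaborate:

```python
# Hypothetical points-based screening sketch. All weights, criteria,
# and the threshold are illustrative assumptions, not any country's
# actual immigration rules.

def points_score(applicant):
    score = 0
    score += {"phd": 30, "masters": 25, "bachelors": 15}.get(
        applicant.get("education", ""), 0)
    score += min(applicant.get("years_experience", 0), 10) * 2
    if applicant.get("language_proficient"):
        score += 20
    if applicant.get("in_demand_field"):  # e.g. an AI/ML specialization
        score += 15
    return score

def admissible(applicant, threshold=60):
    # Points-based: the applicant qualifies on attributes alone, with no
    # prior job offer (a demand-led system would require one instead).
    return points_score(applicant) >= threshold

candidate = {"education": "phd", "years_experience": 4,
             "language_proficient": True, "in_demand_field": True}
print(points_score(candidate))  # 30 + 8 + 20 + 15 = 73
print(admissible(candidate))    # True
```

The design difference is the gate itself: a points system scores the person, while a demand-led system gates on an employer sponsorship, which is one candidate explanation for the differential in high-skilled inflows the research reports.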

The gender analysis in our study illuminates another dimension of competitive advantage. Contrary to the overall AI talent pool, EU countries lead in female representation in highly technical roles (Category 2), occupying seven of the top ten global rankings. Finland, Czechia, and Italy have the highest proportion of female representation in Category 2 roles globally (39%, 31%, and 28%, respectively). This gender diversity represents not merely a social achievement but a potential strategic asset in AI innovation, particularly as global coalitions increasingly emphasize the importance of diverse perspectives in AI development…(More)”

Integrating Data Governance and Mental Health Equity: Insights from ‘Towards a Set of Universal Data Principles’


Article by Cindy Hansen: “This recent scholarly work, “Towards a Set of Universal Data Principles” by Steve MacFeely et al (2025), delves comprehensively into the expansive landscape of data management and governance, acknowledging the intricate processes through which humans collect, manage, and disseminate vast quantities of data. …To truly democratize digital mental healthcare, it’s crucial to empower individuals in their data journey. By focusing on Digital Self-Determination, people can participate in a transformative shift where control over personal data becomes a fundamental right, aligning with the proposed universal data principles. One can envision a world where mental health data, collected and used responsibly, contributes not only to personal well-being but also to the greater public good, echoing the need for data governance to serve society at large.

This concept of digital self-determination empowers individuals by ensuring they have the autonomy to decide who accesses their mental health data and how it’s utilized. Such empowerment is especially significant in the context of mental health, where data sensitivity is high, and privacy is paramount. Giving people the confidence to manage their data fosters trust and encourages them to engage more openly with digital health services, promoting a culture of trust which is a core element of the proposed data governance frameworks.

Holistic Research Canada’s Outcome Monitoring System honors this ethos, allowing individuals to control how their data is accessed, shared, and used while maintaining engagement with healthcare providers. With this system, people can actively participate in their mental health decisions, supported by data that offers transparency about their progress and prognoses, which is crucial in realizing the potential of data to serve both individual and broader societal interests.

Furthermore, this tool provides actionable insights into mental health journeys, promoting evidence-based practices, enhancing transparency, and ensuring that individuals’ rights are safeguarded throughout. These principles are vital to transforming individuals from passive subjects into active stewards of their data, consistent with the proposed principles of safeguarding data quality, integrity, and security…(More)”.

In Uncertain Times, Get Curious


Chapter (and book) by Elizabeth Weingarten: “Questions flow from curiosity. If we want to live and love the questions of our lives—How to live a life of purpose? Who am I in the aftermath of a big change or transition? What kind of person do I want to become as I grow older?—we must first ask them into conscious existence.

Many people have written entire books defining and redefining curiosity. But for me, the most helpful definition comes from a philosophy professor, Perry Zurn, and a systems neuroscientist, Dani Bassett: “For too long—and still too often—curiosity has been oversimplified,” they write, typically “reduced to the simple act of raising a hand or voicing a question, especially from behind a desk or a podium. . . . Scholars generally boil it down to ‘information-seeking’ behavior or a ‘desire to know.’ But curiosity is more than a feeling and certainly more than an act. And curiosity is always more than a single move or a single question.” Curiosity works, they write, by “linking ideas, facts, perceptions, sensations and data points together.” It is complex, mutating, unpredictable, and transformational. It is, fundamentally, an act of connection, an act of creating relationships between ideas and people. Asking questions then, becoming curious, is not just about wanting to find the answer—it is also about our need to connect, with ourselves, with others, with the world.

And this, perhaps, is why our deeper questions are hardly ever satisfied by Google or by fast, easy answers from the people I refer to as the Charlatans of Certainty—the gurus, influencers, and “experts” peddling simple solutions to all the complex problems you face. This is also the reason there is no one-size-fits-all formula for cultivating curiosity—particularly the kind that allows us to live and love our questions, especially the questions that are hard to love, like “How can I live with chronic pain?” or “How do I extricate myself from a challenging relationship?” This kind of curiosity is a special flavor…(More)”. See also: Inquiry as Infrastructure: Defining Good Questions in the Age of Data and AI.

The Overlooked Importance of Data Reuse in AI Infrastructure


Essay by Oxford Insights and The Data Tank: “Employing data stewards and embedding responsible data reuse principles in the programme or ecosystem and within participating organisations is one of the pathways forward. Data stewards are proactive agents responsible for catalysing collaboration, tackling these challenges and embedding data reuse practices in their organisations. 

The role of Chief Data Officer in government agencies has become more common in recent years, and we suggest the same needs to happen with the role of Chief Data Steward. Chief Data Officers are mostly focused on internal data management and have a technical remit. As the data governance landscape changes, the profession needs to be reimagined. Embedded in both the demand and supply sides of data, data stewards are proactive agents empowered to create public value by reusing data and data expertise. They are tasked with identifying opportunities for productive cross-sectoral collaboration, and with proactively requesting or enabling functional access to data, insights, and expertise. 

One exception comes from New Zealand. The UN has released a report on the role of data stewards and National Statistical Offices (NSOs) in the new data ecosystem, which provides many use cases that governments seeking to establish such a role can adopt. New Zealand has an appointed Government Chief Data Steward, who sets the strategic direction for the government’s data management with an explicit focus on data reuse. 

Data stewards can play an important role in organisations leading data reuse programmes, where they would be responsible for responding to the participation challenges introduced above. 

A Data Steward’s role includes attracting participation for data reuse programmes by:

  • Demonstrating and communicating the value proposition of data reuse and collaborations, by engaging in partnerships and steering data reuse and sharing among data commons, cooperatives, or collaborative infrastructures. 
  • Developing responsible data lifecycle governance, and communicating insights to raise awareness and build trust among stakeholders.

A Data Steward’s role includes maintaining and scaling participation for data reuse programmes by:

  • Maintaining trust by engaging with wider stakeholders and establishing clear engagement methodologies. For example, by embedding a social license, data stewards ensure the principle of digital self-determination is embedded in data reuse processes. 
  • Fostering sustainable partnerships and collaborations around data, via developing business cases for data sharing and reuse, and measuring impact to build the societal case for data collaboration; and
  • Innovating in the sector by turning data into decision intelligence, ensuring that insights derived from data are more effectively integrated into decision-making processes…(More)”.

Guiding the provision of quality policy advice: the 5D model


Paper by Christopher Walker and Sally Washington: “… presents a process model to guide the production of quality policy advice. The work draws on engagement with both public sector practitioners and academics to design a process model for the development of policy advice that works in practice (can be used by policy professionals in their day-to-day work) and aligns with theory (can be taught as part of explaining the dynamics of a wider policy advisory system). The 5D Model defines five key domains of inquiry: understanding Demand, being open to Discovery, undertaking Design, identifying critical Decision points, and shaping advice to enable Delivery. Our goal is a ‘repeatable, scalable’ model for supporting policy practitioners to provide quality advice to decision makers. The model was developed and tested through an extensive process of engagement with senior policy practitioners who noted the heuristic gave structure to practices that determine how policy advice is organized and formulated. Academic colleagues confirmed the utility of the model for explaining and teaching how policy is designed and delivered within the context of a wider policy advisory system (PAS). A unique aspect of this work was the collaboration and shared interest amongst academics and practitioners to define a model that is ‘useful for teaching’ and ‘useful for doing’…(More)”.

Brazil’s AI-powered social security app is wrongly rejecting claims


Article by Gabriel Daros: “Brazil’s social security institute, known as INSS, added AI to its app in 2018 in an effort to cut red tape and speed up claims. The office, known for its long lines and wait times, had around 2 million pending requests for everything from doctor’s appointments to sick pay to pensions to retirement benefits at the time. While the AI-powered tool has since helped process thousands of basic claims, it has also rejected requests from hundreds of people like de Brito — who live in remote areas and have little digital literacy — for minor errors.

The government is right to digitize its systems to improve efficiency, but that has come at a cost, Edjane Rodrigues, secretary for social policies at the National Confederation of Workers in Agriculture, told Rest of World.

“If the government adopts this kind of service to speed up benefits for the people, this is good. We are not against it,” she said. But, particularly among farm workers, claims can be complex because of the nature of their work, she said, referring to cases that require additional paperwork, such as when a piece of land is owned by one individual but worked by a group of families. “There are many peculiarities in agriculture, and rural workers are being especially harmed” by the app, according to Rodrigues.

“Each automated decision is based on specified legal criteria, ensuring that the standards set by the social security legislation are respected,” a spokesperson for INSS told Rest of World. “Automation does not work in an arbitrary manner. Instead, it follows clear rules and regulations, mirroring the expected standards applied in conventional analysis.”
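The spokesperson's claim of rule-following automation, and the failure mode Rodrigues describes, can be sketched together. The field names and rules below are invented for illustration; the actual INSS system's logic is not public:

```python
# Illustrative sketch of strict rule-based claim screening, and why
# minor paperwork gaps trigger rejection. Field names and rules are
# hypothetical, not INSS's actual criteria.

REQUIRED_FIELDS = ["name", "tax_id", "land_registration", "work_history"]

def screen_claim(claim):
    """Reject on any missing or empty field -- no human judgment applied."""
    for field in REQUIRED_FIELDS:
        if not claim.get(field, "").strip():
            return ("rejected", f"missing field: {field}")
    return ("approved", None)

# A rural worker whose land is registered under another family's name
# has nothing valid to enter, so the claim fails automatically:
claim = {"name": "Maria", "tax_id": "123", "work_history": "20 years",
         "land_registration": ""}
print(screen_claim(claim))  # ('rejected', 'missing field: land_registration')
```

The rules are indeed "clear" in the spokesperson's sense, yet a case that satisfies the legal criteria in substance but not in form (the shared-land arrangement above) is rejected without the discretionary review a human analyst could apply.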

Governments across Latin America have been introducing AI to improve their processes. Last year, Argentina began using ChatGPT to draft court rulings, a move that officials said helped cut legal costs and reduce processing times. Costa Rica has partnered with Microsoft to launch an AI tool to optimize tax data collection and check for fraud in digital tax receipts. El Salvador recently set up an AI lab to develop tools for government services.

But while some of these efforts have delivered promising results, experts have raised concerns about the risk of officials with little tech know-how applying these tools with no transparency or workarounds…(More)”.

How to Survive the A.I. Revolution


Essay by John Cassidy: “It isn’t clear where the term “Luddite” originated. Some accounts trace it to Ned Ludd, a textile worker who reportedly smashed a knitting frame in 1779. Others suggest that it may derive from folk memories of King Ludeca, a ninth-century Anglo-Saxon monarch who died in battle. Whatever the source, many machine breakers identified “General Ludd” as their leader. A couple of weeks after the Rawfolds attack, William Horsfall, another mill owner, was shot dead. A letter sent after Horsfall’s assassination—which hailed “the avenging of the death of the two brave youths who fell at the siege of Rawfolds”—began “By Order of General Ludd.”

The British government, at war with Napoleon, regarded the Luddites as Jacobin insurrectionists and responded with brutal suppression. But this reaction stemmed from a fundamental misinterpretation. Far from being revolutionary, Luddism was a defensive response to the industrial capitalism that was threatening skilled workers’ livelihoods. The Luddites weren’t mindless opponents of technology but had a clear logic to their actions—an essentially conservative one. Since they had no political representation—until 1867, the British voting franchise excluded the vast majority—they concluded that violent protest was their only option. “The burning of Factorys or setting fire to the property of People we know is not right, but Starvation forces Nature to do that which he would not,” one Yorkshire cropper wrote. “We have tried every effort to live by Pawning our Cloaths and Chattles, so we are now on the brink for the last struggle.”

As alarm about artificial intelligence has gone global, so has a fascination with the Luddites. The British podcast “The Ned Ludd Radio Hour” describes itself as “your weekly dose of tech skepticism, cynicism, and absurdism.” Kindred themes are explored in the podcast “This Machine Kills,” co-hosted by the social theorist Jathan Sadowski, whose new book, “The Mechanic and the Luddite,” argues that the fetishization of A.I. and other digital technologies obscures their role in disciplining labor and reinforcing a profit-driven system. “Luddites want technology—the future—to work for all of us,” he told the Guardian. The technology journalist Brian Merchant makes a similar case in “Blood in the Machine: The Origins of the Rebellion Against Big Tech” (2023). Blending a vivid account of the original Luddites with an indictment of contemporary tech giants like Amazon and Uber, Merchant portrays the current wave of automation as part of a centuries-long struggle over labor and power. “Working people are staring down entrepreneurs, tech monopolies, and venture capital firms that are hunting for new forms of labor-saving tech—be it AI, robotics, or software automation—to replace them,” Merchant writes. “They are again faced with losing their jobs to the machine.”…(More)”.

Mind the (Language) Gap: Mapping the Challenges of LLM Development in Low-Resource Language Contexts


White Paper by the Stanford Institute for Human-Centered AI (HAI), the Asia Foundation and the University of Pretoria: “…maps the LLM development landscape for low-resource languages, highlighting challenges, trade-offs, and strategies to increase investment; prioritize cross-disciplinary, community-driven development; and ensure fair data ownership…

  • Large language model (LLM) development suffers from a digital divide: Most major LLMs underperform for non-English—and especially low-resource—languages; are not attuned to relevant cultural contexts; and are not accessible in parts of the Global South.
  • Low-resource languages (such as Swahili or Burmese) face two crucial limitations: a scarcity of labeled and unlabeled language data and poor quality data that is not sufficiently representative of the languages and their sociocultural contexts.
  • To bridge these gaps, researchers and developers are exploring different technical approaches to developing LLMs that better perform for and represent low-resource languages but come with different trade-offs:
    • Massively multilingual models, developed primarily by large U.S.-based firms, aim to improve performance for more languages by including a wider range of (100-plus) languages in their training datasets.
    • Regional multilingual models, developed by academics, governments, and nonprofits in the Global South, use smaller training datasets made up of 10-20 low-resource languages to better cater to and represent a smaller group of languages and cultures.
    • Monolingual or monocultural models, developed by a variety of public and private actors, are trained on or fine-tuned for a single low-resource language and thus tailored to perform well for that language…(More)”
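The trade-off between the three approaches can be restated as a simple decision heuristic. The function and the language-count boundaries below are illustrative assumptions drawn from the bullet points, not a method from the white paper:

```python
# Hypothetical decision heuristic over the three LLM strategies the
# white paper describes. The count boundaries are invented for
# illustration, loosely echoing the figures in the text.

def choose_strategy(languages):
    """languages: list of target language names to support."""
    if len(languages) > 100:
        return "massively multilingual"   # broad coverage, diluted per-language quality
    if 10 <= len(languages) <= 20:
        return "regional multilingual"    # shared region/culture, better representation
    if len(languages) == 1:
        return "monolingual/monocultural" # tailored to one language and context
    return "mixed: combine approaches"

print(choose_strategy(["Swahili"]))  # → monolingual/monocultural
```

The point the white paper makes is that no branch dominates: breadth trades against cultural fidelity and data quality, so the right choice depends on who is building the model and for whom.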

Open with care: transparency and data sharing in civically engaged research


Paper by Ankushi Mitra: “Research transparency and data access are considered increasingly important for advancing research credibility, cumulative learning, and discovery. However, debates persist about how to define and achieve these goals across diverse forms of inquiry. This article intervenes in these debates, arguing that the participants and communities with whom scholars work are active stakeholders in science, and thus have a range of rights, interests, and researcher obligations to them in the practice of transparency and openness. Drawing on civically engaged research and related approaches that advocate for subjects of inquiry to more actively shape its process and share in its benefits, I outline a broader vision of research openness not only as a matter of peer scrutiny among scholars or a top-down exercise in compliance, but rather as a space for engaging and maximizing opportunities for all stakeholders in research. Accordingly, this article provides an ethical and practical framework for broadening transparency, accessibility, and data-sharing and benefit-sharing in research. It promotes movement beyond open science to a more inclusive and socially responsive science anchored in a larger ethical commitment: that the pursuit of knowledge be accountable and its benefits made accessible to the citizens and communities who make it possible…(More)”.

Artificial Intelligence and Big Data


Book edited by Frans L. Leeuw and Michael Bamberger: “…explores how Artificial Intelligence (AI) and Big Data contribute to the evaluation of the rule of law (covering legal arrangements, empirical legal research, law and technology, and international law), and social and economic development programs in both industrialized and developing countries. Issues of ethics and bias in the use of AI are also addressed and indicators of the growth of knowledge in the field are discussed.

Interdisciplinary and international in scope, and bringing together leading academics and practitioners from across the globe, the book explores the applications of AI and big data in Rule of Law and development evaluation, identifies differences in the approaches used in the two fields and how each could learn from the other, and contrasts the AI-related issues addressed in industrialized nations with those addressed in Africa and Asia.

Artificial Intelligence and Big Data is an essential read for researchers, academics, and students working in the fields of Rule of Law and Development; researchers in institutions working on new applications in AI will also benefit from the book’s practical insights…(More)”.