Diversifying Professional Roles in Data Science


Policy Briefing by Emma Karoune and Malvika Sharan: “The interdisciplinary nature of the data science workforce extends beyond the traditional notion of a “data scientist.” A successful data science team requires a wide range of technical expertise, domain knowledge and leadership capabilities. To strengthen such a team-based approach, this note recommends that institutions, funders and policymakers invest in developing and professionalising diverse roles, fostering a resilient data science ecosystem for the future. 


By recognising the diverse specialist roles that collaborate within interdisciplinary teams, organisations can leverage deep expertise across multiple skill sets, enhancing responsible decision-making and fostering innovation at all levels. Ultimately, this note seeks to shift the perception of data science professionals from the conventional view of individual data scientists to a competency-based model of specialist roles within a team, each essential to the success of data science initiatives…(More)”.

Future of AI Research


Report by the Association for the Advancement of Artificial Intelligence:  “As AI capabilities evolve rapidly, AI research is also undergoing a fast and significant transformation along many dimensions, including its topics, its methods, the research community, and the working environment. Topics such as AI reasoning and agentic AI have been studied for decades but now have an expanded scope in light of current AI capabilities and limitations. AI ethics and safety, AI for social good, and sustainable AI have become central themes in all major AI conferences. Moreover, research on AI algorithms and software systems is becoming increasingly tied to substantial amounts of dedicated AI hardware, notably GPUs, leading to AI architecture co-creation in a way that is more prominent now than over the last three decades.

Related to this shift, more and more AI researchers work in corporate environments, where the necessary hardware and other resources are more easily available than in academia, raising questions about the role of academic AI research, student retention, and faculty recruiting. The pervasive use of AI in our daily lives and its impact on people, society, and the environment makes AI a socio-technical field of study, highlighting the need for AI researchers to work with experts from other disciplines, such as psychologists, sociologists, philosophers, and economists. The growing focus on emergent AI behaviors rather than on designed and validated properties of AI systems renders principled empirical evaluation more important than ever. Hence the need arises for well-designed benchmarks, test methodologies, and sound processes to infer conclusions from the results of computational experiments.

The exponentially increasing quantity of AI research publications and the speed of AI innovation are testing the resilience of the peer-review system, with the immediate release of papers without peer-review evaluation having become widely accepted across many areas of AI research. Legacy and social media increasingly cover AI research advancements, often with contradictory statements that confuse readers and blur the line between the reality and the perception of AI capabilities. All this is happening in a geopolitical environment in which companies and countries compete fiercely and globally to lead the AI race. This rivalry may impact access to research results and infrastructure as well as global governance efforts, underscoring the need for international cooperation in AI research and innovation.

In this overwhelming, multi-dimensional and very dynamic scenario, it is important to be able to clearly identify the trajectory of AI research in a structured way. Such an effort can define the current trends and the research challenges still ahead of us to make AI more capable and reliable, so that we can safely use it in mundane but also, most importantly, in high-stakes scenarios.

This study aims to do this by including 17 topics related to AI research, covering most of the transformations mentioned above. Each chapter of the study is devoted to one of these topics, sketching its history, current trends and open challenges…(More)”.

Legitimacy: Working hypotheses


Report by TIAL: “Today more than ever, legitimacy is a vital resource for institutions seeking to lead and sustain impactful change. Yet, it can be elusive.

What does it truly mean for an institution to be legitimate? This publication delves into legitimacy as both a practical asset and a dynamic process, offering institutional entrepreneurs the tools to understand, build, and sustain it over time.

Legitimacy is not a static quality, nor is it purely theoretical. Instead, it’s grounded in the beliefs of those who interact with or are governed by an institution. These beliefs shape whether people view an institution’s authority as rightful and worth supporting. Drawing from social science research and real-world insights, this publication provides a framework to help institutional entrepreneurs address one of the most important challenges of institutional design: ensuring their legitimacy is sufficient to achieve their goals.

The paper emphasizes that legitimacy is relational and contextual. Institutions gain it through three primary sources: outcomes (delivering results), fairness (ensuring just processes), and correct procedures (following accepted norms). However, the need for legitimacy varies depending on the institution’s size, scope, and mission. For example, a body requiring elite approval may need less legitimacy than one relying on mass public trust.

Legitimacy is also dynamic—it ebbs and flows in response to external factors like competition, crises, and shifting societal narratives. Institutional entrepreneurs must anticipate these changes and actively manage their strategies for maintaining legitimacy. This publication highlights actionable steps for doing so, from framing mandates strategically to fostering public trust through transparency and communication.

By treating legitimacy as a resource that evolves over time, institutional entrepreneurs can ensure their institutions remain relevant, trusted, and effective in addressing pressing societal challenges.

Key takeaways

  • Legitimacy is the belief by an audience that an institution’s authority is rightful.
  • Institutions build legitimacy through outcomes, fairness, and correct procedures.
  • The need for legitimacy depends on an institution’s scope and mission.
  • Legitimacy is dynamic and shaped by external factors like crises and competition.
  • A portfolio approach to legitimacy—balancing outcomes, fairness, and procedure—is more resilient.
  • Institutional entrepreneurs must actively manage perceptions and adapt to changing contexts.
  • This publication offers practical frameworks to help institutional entrepreneurs build and sustain legitimacy…(More)”.

The Data Innovation Toolkit


Toolkit by Maria Claudia Bodino, Nathan da Silva Carvalho, Marcelo Cogo, Arianna Dafne Fini Storchi, and Stefaan Verhulst: “Despite the abundance of data, the excitement around AI, and the potential for transformative insights, many public administrations struggle to translate data into actionable strategies and innovations. 

Public servants working on data-related initiatives need practical, easy-to-use resources designed to enhance the management of data innovation initiatives. 

In order to address these needs, the iLab of DG DIGIT from the European Commission is developing an initial set of practical tools designed to facilitate and enhance the implementation of data-driven initiatives. The main building blocks of the first version of the Data Innovation Toolkit include: 

  1. A repository of educational materials and resources on the latest data innovation approaches from the public sector, academia, NGOs and think tanks. 
  2. An initial set of practical resources, for example: 
     • Workshop Templates to offer structured formats for conducting productive workshops that foster collaboration, ideation, and problem-solving. 
     • Checklists to ensure that all data journey aspects and steps are properly assessed. 
     • Interactive Exercises to engage team members in hands-on activities that build skills and facilitate understanding of key concepts and methodologies. 
     • Canvas Models to provide visual frameworks for planning and brainstorming….(More)”.

How tax data unlocks new insights for industrial policy


OECD article: “Value-added tax (VAT) is a consumption tax applied at each stage of the supply chain whenever value is added to goods or services. Businesses collect and remit VAT. The VAT data that are collected represent a breakthrough in studying production networks because they capture actual transactions between firms at an unprecedented level of detail. Unlike traditional business surveys or administrative data that might tell us about a firm’s size or industry, VAT records show us who does business with whom and for how much.

This data is particularly valuable because of its comprehensive coverage. In Estonia, for example, all VAT-registered businesses must report transactions above €1,000 per month, creating an almost complete picture of significant business relationships in the economy.

At least 15 countries now have such data available, including Belgium, Chile, Costa Rica, Estonia, and Italy. This growing availability creates opportunities for cross-country comparison and broader economic insights…(More)”.
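To make the structure of such firm-to-firm VAT data more concrete, the sketch below shows how monthly seller–buyer declarations of the kind described above could be filtered at a reporting threshold and aggregated into a directed production network. It is an illustration only, not drawn from the OECD article: the column names, the €1,000 threshold, and the toy figures are assumptions for the example.

```python
# Illustrative sketch only: the schema ("seller_id", "buyer_id", "amount"), the
# EUR 1,000 threshold, and the toy figures are assumptions for this example.
import pandas as pd
import networkx as nx

# Hypothetical monthly VAT declarations: one row per reported seller-buyer pair.
vat = pd.DataFrame(
    {
        "seller_id": ["A", "A", "B", "C", "C"],
        "buyer_id":  ["B", "C", "C", "D", "A"],
        "amount":    [5_000, 1_200, 800, 15_000, 2_500],  # EUR per month
    }
)

# Keep only transactions above the (assumed) monthly reporting threshold.
vat = vat[vat["amount"] > 1_000]

# Aggregate to total trade per firm pair and build a directed production network.
edges = vat.groupby(["seller_id", "buyer_id"], as_index=False)["amount"].sum()
network = nx.DiGraph()
network.add_weighted_edges_from(edges.itertuples(index=False, name=None))

# Example query: how many distinct suppliers does each firm buy from?
print(dict(network.in_degree()))
```

Even in this toy form, the point of the data is visible: the unit of analysis becomes the network of actual business relationships rather than firm-level totals by size or industry.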

Governing in the Age of AI: Building Britain’s National Data Library


Report by the Tony Blair Institute for Global Change: “The United Kingdom should lead the world in artificial-intelligence-driven innovation, research and data-enabled public services. It has the data, the institutions and the expertise to set the global standard. But without the right infrastructure, these advantages are being wasted.

The UK’s data infrastructure, like that of every nation, is built around outdated assumptions about how data create value. It is fragmented and unfit for purpose. Public-sector data are locked in silos, access is slow and inconsistent, and there is no system to connect and use these data effectively, or any framework for deciding what additional data would be most valuable to collect given AI’s capabilities.

As a result, research is stalled, AI adoption is held back, and the government struggles to plan services, target support and respond to emerging challenges. This affects everything from developing new treatments to improving transport, tackling crime and ensuring economic policies help those who need them. While some countries are making progress in treating existing data as strategic assets, none have truly reimagined data infrastructure for an AI-enabled future…(More)”.

The Preventative Shift: How can we embed prevention and achieve long-term missions?


Paper by Demos (UK): “Over the past two years Demos has been making the case for a fundamental shift in the purpose of government, away from firefighting in public services and towards preventing problems from arising. First, we set out the case for The Preventative State, to rebuild local, social and civic foundations; then, jointly with The Health Foundation, we made the case to change Treasury rules to ringfence funding for prevention. By differentiating between everyday spending and preventative spending, the government could measure what really matters.

There has been widespread support for this – but also concerns, both about the feasibility of measuring preventative spending accurately and appropriately, and that ring-fencing alone may not lead to the desired improvements in outcomes and value for money.

In response we have developed two practical approaches, covered in two papers:

  • Our first paper, Counting What Matters, explores the challenge of measurement and makes a series of recommendations, including the passage of a “Public Investment Act”, to show how this could be appropriately achieved.
  • This second paper, The Preventative Shift, looks at how to shift the culture of public bodies to think ‘prevention first’ and target spending at activities which promise value for money and improve outcomes…(More)”.

Nonprofits, Stop Doing Needs Assessments.


Design for Social Impact: “Too many non-profits and funders still roll into communities with a clipboard and a mission to document everything “missing.”

Needs assessments have become a default tool for diagnosing deficits, reinforcing a saviour mentality where outsiders decide what’s broken and needs fixing.

I’ve sat in meetings where non-profits present lists of what communities lack:

  • “Youth don’t have leadership skills”
  • “Parents don’t value education”
  • “Grassroots organisations don’t have capacity”

The subtext? “They need us.”

And because funding is tied to these narratives of scarcity, organisations learn to describe themselves in the language of need rather than strength—because that’s what gets funded…Now, I’m not saying that organisations or funders should never ask people what their needs are. The key issue is how needs assessments are framed and used. Too often, they use extractive “data” collection methodologies and reinforce top-down, deficit-based narratives, where communities are defined primarily by what they lack rather than what they bring.

Starting with what’s already working (asset mapping) and then identifying what’s needed to strengthen and expand those assets is different from leading with gaps, which can frame communities as passive recipients rather than active problem-solvers.

Arguably, a balanced synergy between assessing needs and asset mapping can be powerful—so long as the process centres on community agency, self-determination, and long-term sustainability rather than diagnosing problems for external intervention.

Also, to me, asset-based mapping does not mean that you swoop in with the same clipboard and demand that people document their strengths…(More)”.

The Missing Pieces in India’s AI Puzzle: Talent, Data, and R&D


Article by Anirudh Suri: “This paper explores the question of whether India specifically will be able to compete and lead in AI or whether it will remain relegated to a minor role in this global competition. The paper argues that if India is to meet its larger stated ambition of becoming a global leader in AI, it will need to fill significant gaps in at least three areas urgently: talent, data, and research. Putting these three missing pieces in place can help position India extremely well to compete in the global AI race.

India’s national AI mission (NAIM), also known as the IndiaAI Mission, was launched in 2024 and rightly notes that success in the AI race requires multiple pieces of the AI puzzle to be in place. Accordingly, it has laid out a plan across seven elements of the “AI stack”: computing/AI infrastructure, data, talent, research and development (R&D), capital, algorithms, and applications.

However, the focus thus far has been almost entirely on two elements: ensuring the availability of AI-focused hardware/compute and, to some extent, building Indic language models. India has not paid enough attention to, acted on, or put significant resources behind three other key enabling elements of AI competitiveness, namely data, talent, and R&D…(More)”.

How Innovation Ecosystems Foster Citizen Participation Using Emerging Technologies in Portugal, Spain and the Netherlands


OECD Report: “This report examines how actors in Portugal, Spain and the Netherlands interact and work together to contribute to the development of emerging technologies for citizen participation. Through in-depth research and analysis of actors’ motivations, experiences, challenges, and enablers in this nascent but promising field, the report presents a unique cross-national perspective on innovation ecosystems for citizen participation using emerging technology. It includes lessons and concrete proposals for policymakers, innovators, and researchers seeking to develop technology-based citizen participation initiatives…(More)”.