Reimagining Data Governance for AI: Operationalizing Social Licensing for Data Reuse


Report by Stefaan Verhulst, Adam Zable, Andrew J. Zahuranec, and Peter Addo: “…introduces a practical, community-centered framework for governing data reuse in the development and deployment of artificial intelligence systems in low- and middle-income countries (LMICs). As AI increasingly relies on data from LMICs, affected communities are often excluded from decision-making and see little benefit from how their data is used. This report,…reframes data governance through social licensing—a participatory model that empowers communities to collectively define, document, and enforce conditions for how their data is reused. It offers a step-by-step methodology and actionable tools, including a Social Licensing Questionnaire and adaptable contract clauses, alongside real-world scenarios and recommendations for enforcement, policy integration, and future research. This report recasts data governance as a collective, continuous process – shifting the focus from individual consent to community decision-making…(More)”.

Leading, not lagging: Africa’s gen AI opportunity


Article by Mayowa Kuyoro, Umar Bagus: “The rapid rise of gen AI has captured the world’s imagination and accelerated the integration of AI into the global economy and the lives of people across the world. Gen AI heralds a step change in productivity. As institutions apply AI in novel ways, beyond the advanced analytics and machine learning (ML) applications of the past ten years, the global economy could increase significantly, improving the lives and livelihoods of millions.

Nowhere is this truer than in Africa, a continent that has already demonstrated its ability to use technology to leapfrog traditional development pathways; for example, mobile technology overcoming the fixed-line internet gap, mobile payments in Kenya, and numerous African institutions making the leap to cloud faster than their peers in developed markets. Africa has been quick on the uptake with gen AI, too, with many unique and ingenious applications and deployments well underway…(More)”.

Across McKinsey’s client service work in Africa, many institutions have tested and deployed AI solutions. Our research has found that more than 40 percent of institutions have either started to experiment with gen AI or have already implemented significant solutions (see sidebar “About the research inputs”). However, the continent has so far only scratched the surface of what is possible, with both AI and gen AI. If institutions can address barriers and focus on building for scale, our analysis suggests African economies could unlock up to $100 billion in annual economic value across multiple sectors from gen AI alone. That is in addition to the still-untapped potential from traditional AI and ML in many sectors today—the combined traditional AI and gen AI total is more than double what gen AI can unlock on its own, with traditional AI making up at least 60 percent of the value…(More)”.

The Right to AI


Paper by Rashid Mushkani, Hugo Berard, Allison Cohen, Shin Koeski: “This paper proposes a Right to AI, which asserts that individuals and communities should meaningfully participate in the development and governance of the AI systems that shape their lives. Motivated by the increasing deployment of AI in critical domains and inspired by Henri Lefebvre’s concept of the Right to the City, we reconceptualize AI as a societal infrastructure, rather than merely a product of expert design. In this paper, we critically evaluate how generative agents, large-scale data extraction, and diverse cultural values bring new complexities to AI oversight. The paper proposes that grassroots participatory methodologies can mitigate biased outcomes and enhance social responsiveness. It asserts that data is socially produced and should be managed and owned collectively. Drawing on Sherry Arnstein’s Ladder of Citizen Participation and analyzing nine case studies, the paper develops a four-tier model for the Right to AI that situates the current paradigm and envisions an aspirational future. It proposes recommendations for inclusive data ownership, transparent design processes, and stakeholder-driven oversight. We also discuss market-led and state-centric alternatives and argue that participatory approaches offer a better balance between technical efficiency and democratic legitimacy…(More)”.

Societal and technological progress as sewing an ever-growing, ever-changing, patchy, and polychrome quilt


Paper by Joel Z. Leibo et al: “Artificial Intelligence (AI) systems are increasingly placed in positions where their decisions have real consequences, e.g., moderating online spaces, conducting research, and advising on policy. Ensuring they operate in a safe and ethically acceptable fashion is thus critical. However, most solutions have been a form of one-size-fits-all “alignment”. We are worried that such systems, which overlook enduring moral diversity, will spark resistance, erode trust, and destabilize our institutions. This paper traces the underlying problem to an often-unstated Axiom of Rational Convergence: the idea that under ideal conditions, rational agents will converge in the limit of conversation on a single ethics. Treating that premise as both optional and doubtful, we propose what we call the appropriateness framework: an alternative approach grounded in conflict theory, cultural evolution, multi-agent systems, and institutional economics. The appropriateness framework treats persistent disagreement as the normal case and designs for it by applying four principles: (1) contextual grounding, (2) community customization, (3) continual adaptation, and (4) polycentric governance. We argue here that adopting these design principles is a good way to shift the main alignment metaphor from moral unification to a more productive metaphor of conflict management, and that taking this step is both desirable and urgent…(More)”.

The world at our fingertips, just out of reach: the algorithmic age of AI


Article by Soumi Banerjee: “Artificial intelligence (AI) has made global movements, testimonies, and critiques seem just a swipe away. The digital realm, powered by machine learning and algorithmic recommendation systems, offers an abundance of visual, textual, and auditory information. With a few swipes or keystrokes, the unbounded world lies open before us. Yet this ‘openness’ conceals a fundamental paradox: the distinction between availability and accessibility.

What is technically available is not always epistemically accessible. What appears global is often algorithmically curated. And what is served to users under the guise of choice frequently reflects the imperatives of engagement, profit, and emotional resonance over critical understanding or cognitive expansion.

The transformative potential of AI in democratising access to information comes with risks. Algorithmic enclosure and content curation can deepen epistemic inequality, particularly for the youth, whose digital fluency often masks a lack of epistemic literacy. What we need is algorithmic transparency, civic education in media literacy, and inclusive knowledge formats…(More)”.

Building Community-Centered AI Collaborations


Article by Michelle Flores Vryn and Meena Das: “AI can only boost the under-resourced nonprofit world if we design it to serve the communities we care about. But as nonprofits consider how to incorporate AI into their work, many look to expertise from the tech sector, expecting tools and implementation advice as well as ethical guidance. Yet when mission-driven entities—with a strong focus on people, communities, and equity—partner solely with tech companies, they may encounter a variety of obstacles, such as:

  1. Limited understanding of community needs: Sector-specific knowledge is essential for aligning AI with nonprofit missions, something many tech companies lack.
  2. Bias in AI models: Without diverse input, AI models may exacerbate biases or misrepresent the communities that nonprofits serve.
  3. Resource constraints: Tech solutions often presume budgets or capacity beyond what nonprofits can bring to bear, creating a reliance on tools that don’t fit the nonprofit context.

We need creative, diverse collaborations across various fields to ensure that technology is deployed in ways that align with nonprofit values, build trust, and serve the greater good. Seeking partners outside of the tech world helps nonprofits develop AI solutions that are context-aware, equitable, and resource-sensitive. Most importantly, nonprofit practitioners must deeply consider our ideal future state: What does an AI-empowered nonprofit sector look like when it truly centers human well-being, community agency, and ethical technology?

Imagining this future means not just reacting to emerging technology but proactively shaping its trajectory. Instead of simply adapting to AI’s capabilities, nonprofits should ask:

  • What problems do we truly need AI to solve?
  • Whose voices must be centered in AI decision-making?
  • How do we ensure AI remains a tool for empowerment rather than control?…(More)”.

Policy Implications of DeepSeek AI’s Talent Base


Brief by Amy Zegart and Emerson Johnston: “Chinese startup DeepSeek’s highly capable R1 and V3 models challenged prevailing beliefs about the United States’ advantage in AI innovation, but public debate focused more on the company’s training data and computing power than human talent. We analyzed data on the 223 authors listed on DeepSeek’s five foundational technical research papers, including information on their research output, citations, and institutional affiliations, to identify notable talent patterns. Nearly all of DeepSeek’s researchers were educated or trained in China, and more than half never left China for schooling or work. Of the quarter or so that did gain some experience in the United States, most returned to China to work on AI development there. These findings challenge the core assumption that the United States holds a natural AI talent lead. Policymakers need to reinvest in competing to attract and retain the world’s best AI talent while bolstering STEM education to maintain competitiveness…(More)”.

Governing in the Age of AI: Reimagining Local Government


Report by the Tony Blair Institute for Global Change: “…The limits of the existing operating model have been reached. Starved of resources by cuts inflicted by previous governments over the past 15 years, many councils are on the verge of bankruptcy even though local taxes are at their highest level. Residents wait too long for care, too long for planning applications and too long for benefits; many people never receive what they are entitled to. Public satisfaction with local services is sliding.

Today, however, there are new tools – enabled by artificial intelligence – that would allow councils to tackle these challenges. The day-to-day tasks of local government, whether related to the delivery of public services or planning for the local area, can all be performed faster, better and cheaper with the use of AI – a true transformation not unlike the one seen a century ago.

These tools would allow councils to overturn an operating model that is bureaucratic, labour-intensive and unresponsive to need. AI could release staff from repetitive tasks and relieve an overburdened and demotivated workforce. It could help citizens navigate the labyrinth of institutions, webpages and forms with greater ease and convenience. It could support councils to make better long-term decisions to drive economic growth, without which the resource pressure will only continue to build…(More)”.

Nonprofit AI: A Comprehensive Guide to Implementing Artificial Intelligence for Social Good


Book by Nathan Chappell and Scott Rosenkrans: “…an insightful and practical overview of how purpose-driven organizations can use AI to increase their impact and advance their missions. The authors offer an all-encompassing guide to understanding the promise and peril of implementing AI in the nonprofit sector, addressing both the theoretical and hands-on aspects of this necessary transformation.

The book provides you with case studies, practical tools, ethical frameworks and templates you can use to address the challenges of AI adoption – including ethical limitations – head-on. It draws on the authors’ thirty years of combined experience in the nonprofit industry to help you equip your nonprofit stakeholders with the knowledge and tools they need to successfully navigate the AI revolution.

You’ll also find:

  • Innovative and proven approaches to responsible and beneficial AI implementation taken by real-world organizations that will inspire and guide you as you move forward
  • Strategic planning, project management, and data governance templates and resources you can use immediately in your own nonprofit
  • Information on available AI training programs and resources to build AI fluency and capacity within nonprofit organizations
  • Best practices for ensuring AI systems are transparent, accountable, and aligned with the mission and values of nonprofit organizations…(More)”.

The Dangers of AI Nationalism and Beggar-Thy-Neighbour Policies


Paper by Susan Aaronson: “As they attempt to nurture and govern AI, some nations are acting in ways that – with or without direct intent – discriminate among foreign market actors. For example, some governments are excluding foreign firms from access to incentives for high-speed computing, or requiring local content in the AI supply chain, or adopting export controls for the advanced chips that power many types of AI. If policy makers in country X can limit access to the building blocks of AI – whether funds, data or high-speed computing power – it might slow down or limit the AI prowess of its competitors in country Y and/or Z. At the same time, however, such policies could violate international trade norms of non-discrimination. Moreover, if policy makers can shape regulations in ways that benefit local AI competitors, they may also impede the competitiveness of other nations’ AI developers. Such regulatory policies could be discriminatory and breach international trade rules as well as long-standing rules about how nations and firms compete – which, over time, could reduce trust among nations. In this article, the author attempts to illuminate AI nationalism and its consequences by answering four questions:

– What are nations doing to nurture AI capacity within their borders?

– Are some of these actions trade distorting?

– Are some nations adopting twenty-first-century beggar-thy-neighbour policies?

– What are the implications of such trade-distorting actions?

The author finds that AI nationalist policies appear to help countries with the largest and most established technology firms across multiple levels of the AI value chain. Hence, policy makers’ efforts to dominate these sectors – for example, through large investment sums or beggar-thy-neighbour policies – are not a good way to build trust…(More)”.