Report by Stefaan Verhulst, Adam Zable, Andrew J. Zahuranec, and Peter Addo: “…introduces a practical, community-centered framework for governing data reuse in the development and deployment of artificial intelligence systems in low- and middle-income countries (LMICs). As AI increasingly relies on data from LMICs, affected communities are often excluded from decision-making and see little benefit from how their data is used. This report,…reframes data governance through social licensing—a participatory model that empowers communities to collectively define, document, and enforce conditions for how their data is reused. It offers a step-by-step methodology and actionable tools, including a Social Licensing Questionnaire and adaptable contract clauses, alongside real-world scenarios and recommendations for enforcement, policy integration, and future research. This report recasts data governance as a collective, continuous process – shifting the focus from individual consent to community decision-making…(More)”.
Paper by Rashid Mushkani, Hugo Berard, Allison Cohen, Shin Koeski: “This paper proposes a Right to AI, which asserts that individuals and communities should meaningfully participate in the development and governance of the AI systems that shape their lives. Motivated by the increasing deployment of AI in critical domains and inspired by Henri Lefebvre’s concept of the Right to the City, we reconceptualize AI as a societal infrastructure, rather than merely a product of expert design. In this paper, we critically evaluate how generative agents, large-scale data extraction, and diverse cultural values bring new complexities to AI oversight. The paper proposes that grassroots participatory methodologies can mitigate biased outcomes and enhance social responsiveness. It asserts that data is socially produced and should be managed and owned collectively. Drawing on Sherry Arnstein’s Ladder of Citizen Participation and analyzing nine case studies, the paper develops a four-tier model for the Right to AI that situates the current paradigm and envisions an aspirational future. It proposes recommendations for inclusive data ownership, transparent design processes, and stakeholder-driven oversight. We also discuss market-led and state-centric alternatives and argue that participatory approaches offer a better balance between technical efficiency and democratic legitimacy…(More)”.
Paper by Joel Z. Leibo et al: “Artificial Intelligence (AI) systems are increasingly placed in positions where their decisions have real consequences, e.g., moderating online spaces, conducting research, and advising on policy. Ensuring they operate in a safe and ethically acceptable fashion is thus critical. However, most solutions have been a form of one-size-fits-all “alignment”. We are worried that such systems, which overlook enduring moral diversity, will spark resistance, erode trust, and destabilize our institutions. This paper traces the underlying problem to an often-unstated Axiom of Rational Convergence: the idea that under ideal conditions, rational agents will converge in the limit of conversation on a single ethics. Treating that premise as both optional and doubtful, we propose what we call the appropriateness framework: an alternative approach grounded in conflict theory, cultural evolution, multi-agent systems, and institutional economics. The appropriateness framework treats persistent disagreement as the normal case and designs for it by applying four principles: (1) contextual grounding, (2) community customization, (3) continual adaptation, and (4) polycentric governance. We argue here that adopting these design principles is a good way to shift the main alignment metaphor from moral unification to a more productive metaphor of conflict management, and that taking this step is both desirable and urgent…(More)”.
Article by Soumi Banerjee: “Artificial intelligence (AI) has made global movements, testimonies, and critiques seem just a swipe away. The digital realm, powered by machine learning and algorithmic recommendation systems, offers an abundance of visual, textual, and auditory information. With a few swipes or keystrokes, the unbounded world lies open before us. Yet this ‘openness’ conceals a fundamental paradox: the distinction between availability and accessibility.
What is technically available is not always epistemically accessible. What appears global is often algorithmically curated. And what is served to users under the guise of choice frequently reflects the imperatives of engagement, profit, and emotional resonance over critical understanding or cognitive expansion.
The transformative potential of AI in democratising access to information comes with risks. Algorithmic enclosure and content curation can deepen epistemic inequality, particularly for the youth, whose digital fluency often masks a lack of epistemic literacy. What we need is algorithmic transparency, civic education in media literacy, and inclusive knowledge formats…(More)”.
Article by Michelle Flores Vryn and Meena Das: “AI can only boost the under-resourced nonprofit world if we design it to serve the communities we care about. But as nonprofits consider how to incorporate AI into their work, many look to expertise from the tech sector, expecting tools and implementation advice as well as ethical guidance. Yet when mission-driven entities—with a strong focus on people, communities, and equity—partner solely with tech companies, they may encounter a variety of obstacles, such as:
Limited understanding of community needs: Sector-specific knowledge is essential for aligning AI with nonprofit missions, something many tech companies lack.
Bias in AI models: Without diverse input, AI models may exacerbate biases or misrepresent the communities that nonprofits serve.
Resource constraints: Tech solutions often presume budgets or capacity beyond what nonprofits can bring to bear, creating a reliance on tools ill-suited to the nonprofit context.
We need creative, diverse collaborations across various fields to ensure that technology is deployed in ways that align with nonprofit values, build trust, and serve the greater good. Seeking partners outside of the tech world helps nonprofits develop AI solutions that are context-aware, equitable, and resource-sensitive. Most importantly, nonprofit practitioners must deeply consider our ideal future state: What does an AI-empowered nonprofit sector look like when it truly centers human well-being, community agency, and ethical technology?
Imagining this future means not just reacting to emerging technology but proactively shaping its trajectory. Instead of simply adapting to AI’s capabilities, nonprofits should ask:
What problems do we truly need AI to solve?
Whose voices must be centered in AI decision-making?
How do we ensure AI remains a tool for empowerment rather than control?…(More)”.
Brief by Amy Zegart and Emerson Johnston: “Chinese startup DeepSeek’s highly capable R1 and V3 models challenged prevailing beliefs about the United States’ advantage in AI innovation, but public debate focused more on the company’s training data and computing power than human talent. We analyzed data on the 223 authors listed on DeepSeek’s five foundational technical research papers, including information on their research output, citations, and institutional affiliations, to identify notable talent patterns. Nearly all of DeepSeek’s researchers were educated or trained in China, and more than half never left China for schooling or work. Of the quarter or so that did gain some experience in the United States, most returned to China to work on AI development there. These findings challenge the core assumption that the United States holds a natural AI talent lead. Policymakers need to reinvest in competing to attract and retain the world’s best AI talent while bolstering STEM education to maintain competitiveness…(More)”.
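The brief’s headline findings rest on a simple tallying of career trajectories across author records. As a rough illustration of that kind of classification, here is a minimal Python sketch; the `Author` records, country codes, and sample data are hypothetical stand-ins, not the authors’ actual dataset or code:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Author:
    name: str
    countries: list[str]  # chronological countries of education and work

def classify(author: Author) -> str:
    """Bucket a career trajectory along the lines the brief describes."""
    if "US" not in author.countries:
        return "never left China"
    if author.countries[-1] == "CN":
        return "US experience, returned to China"
    return "currently outside China"

# Hypothetical sample records, not DeepSeek's actual author roster.
sample = [
    Author("A", ["CN"]),              # educated and employed only in China
    Author("B", ["CN", "US", "CN"]),  # trained in the US, then returned
    Author("C", ["CN", "US"]),        # still abroad
]

counts = Counter(classify(a) for a in sample)
for bucket, n in counts.items():
    print(f"{bucket}: {n} of {len(sample)} authors")
```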
Article by Rebecca Feng and Jason Douglas: “Not long ago, anyone could comb through a wide range of official data from China. Then it started to disappear.
Land sales measures, foreign investment data and unemployment indicators have gone dark in recent years. Data on cremations and a business confidence index have been cut off. Even official soy sauce production reports are gone.
In all, Chinese officials have stopped publishing hundreds of data points once used by researchers and investors, according to a Wall Street Journal analysis.
In most cases, Chinese authorities haven’t given any reason for ending or withholding data. But the missing numbers have come as the world’s second-biggest economy has stumbled under the weight of excessive debt, a crumbling real-estate market and other troubles—spurring heavy-handed efforts by authorities to control the narrative.

China’s National Bureau of Statistics stopped publishing some numbers related to unemployment in urban areas in recent years. After an anonymous user on the bureau’s website asked why one of those data points had disappeared, the bureau said only that the ministry that provided it stopped sharing the data.
The disappearing data have made it harder for people to know what’s going on in China at a pivotal time, with the trade war between Washington and Beijing expected to hit China hard and weaken global growth. Plunging trade with the U.S. has already led to production shutdowns and job cuts.
Getting a true read on China’s growth has always been tricky. Many economists have long questioned the reliability of China’s headline gross domestic product data, and concerns have intensified recently. Official figures put GDP growth at 5% last year and 5.2% in 2023, but some have estimated that Beijing overstated its numbers by as much as 2 to 3 percentage points.
To get what they consider to be more realistic assessments of China’s growth, economists have turned to alternative sources such as movie box office revenues, satellite data on the intensity of nighttime lights, the operating rates of cement factories and electricity generation by major power companies. Some parse location data from mapping services run by private companies such as Chinese tech giant Baidu to gauge business activity.
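To make that approach concrete: one common pattern is to standardize each alternative series and average them into a composite activity index. Below is a minimal Python sketch of that idea, with made-up quarterly values standing in for the real indicators:

```python
# Illustrative composite activity index built from alternative indicators
# (nighttime lights, cement plant operating rates, electricity output).
# All values are invented for demonstration only.
import statistics

def zscores(series):
    """Standardize a series to mean 0, standard deviation 1."""
    mu = statistics.mean(series)
    sd = statistics.stdev(series)
    return [(x - mu) / sd for x in series]

# One value per quarter for each (hypothetical) indicator.
indicators = {
    "nighttime_lights": [0.98, 1.01, 1.03, 1.02],
    "cement_operating_rate": [0.62, 0.60, 0.58, 0.57],
    "electricity_output": [1.04, 1.05, 1.07, 1.06],
}

standardized = {name: zscores(vals) for name, vals in indicators.items()}

# Equal-weight average of the standardized indicators, quarter by quarter.
n_quarters = 4
composite = [
    statistics.mean(standardized[name][q] for name in indicators)
    for q in range(n_quarters)
]
print([round(c, 2) for c in composite])
```

Equal weights are the simplest possible choice; in practice economists typically weight or regress such indicators against periods of known, benchmark growth.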
One economist said he has been assessing the health of China’s services sector by counting news stories about owners of gyms and beauty salons who abruptly close up and skip town with users’ membership fees…(More)”.
Report by the Tony Blair Institute for Global Change: “…The limits of the existing operating model have been reached. Starved of resources by cuts inflicted by previous governments over the past 15 years, many councils are on the verge of bankruptcy even though local taxes are at their highest level. Residents wait too long for care, too long for planning applications and too long for benefits; many people never receive what they are entitled to. Public satisfaction with local services is sliding.
Today, however, there are new tools – enabled by artificial intelligence – that would allow councils to tackle these challenges. The day-to-day tasks of local government, whether related to the delivery of public services or planning for the local area, can all be performed faster, better and cheaper with the use of AI – a true transformation not unlike the one seen a century ago.
These tools would allow councils to overturn an operating model that is bureaucratic, labour-intensive and unresponsive to need. AI could release staff from repetitive tasks and relieve an overburdened and demotivated workforce. It could help citizens navigate the labyrinth of institutions, webpages and forms with greater ease and convenience. It could support councils to make better long-term decisions to drive economic growth, without which the resource pressure will only continue to build…(More)”.
Book by Nathan Chappell and Scott Rosenkrans: “…an insightful and practical overview of how purpose-driven organizations can use AI to increase their impact and advance their missions. The authors offer an all-encompassing guide to understanding the promise and peril of implementing AI in the nonprofit sector, addressing both the theoretical and hands-on aspects of this necessary transformation.
The book provides you with case studies, practical tools, ethical frameworks and templates you can use to address the challenges of AI adoption – including ethical limitations – head-on. It draws on the authors’ thirty years of combined experience in the nonprofit industry to help you equip your nonprofit stakeholders with the knowledge and tools they need to successfully navigate the AI revolution.
You’ll also find:
Innovative and proven approaches to responsible and beneficial AI implementation taken by real-world organizations that will inspire and guide you as you move forward
Strategic planning, project management, and data governance templates and resources you can use immediately in your own nonprofit
Information on available AI training programs and resources to build AI fluency and capacity within nonprofit organizations
Best practices for ensuring AI systems are transparent, accountable, and aligned with the mission and values of nonprofit organizations…(More)”.
Paper by Susan Aaronson: “As they attempt to nurture and govern AI, some nations are acting in ways that – with or without direct intent – discriminate among foreign market actors. For example, some governments are excluding foreign firms from access to incentives for high-speed computing, or requiring local content in the AI supply chain, or adopting export controls for the advanced chips that power many types of AI. If policy makers in country X can limit access to the building blocks of AI – whether funds, data or high-speed computing power – it might slow down or limit the AI prowess of its competitors in country Y and/or Z. At the same time, however, such policies could violate international trade norms of non-discrimination. Moreover, if policy makers can shape regulations in ways that benefit local AI competitors, they may also impede the competitiveness of other nations’ AI developers. Such regulatory policies could be discriminatory and breach international trade rules as well as long-standing rules about how nations and firms compete – which, over time, could reduce trust among nations. In this article, the author attempts to illuminate AI nationalism and its consequences by answering four questions:
– What are nations doing to nurture AI capacity within their borders?
– Are some of these actions trade distorting?
– Are some nations adopting twenty-first-century beggar-thy-neighbour policies?
– What are the implications of such trade-distorting actions?
The author finds that AI nationalist policies appear to help countries with the largest and most established technology firms across multiple levels of the AI value chain. Hence, policy makers’ efforts to dominate these sectors, for example through large investment sums or beggar-thy-neighbour policies, are not a good way to build trust…(More)”.