The world at our fingertips, just out of reach: the algorithmic age of AI


Article by Soumi Banerjee: “Artificial intelligence (AI) has made global movements, testimonies, and critiques seem just a swipe away. The digital realm, powered by machine learning and algorithmic recommendation systems, offers an abundance of visual, textual, and auditory information. With a few swipes or keystrokes, the unbounded world lies open before us. Yet this ‘openness’ conceals a fundamental paradox: the distinction between availability and accessibility.

What is technically available is not always epistemically accessible. What appears global is often algorithmically curated. And what is served to users under the guise of choice frequently reflects the imperatives of engagement, profit, and emotional resonance over critical understanding or cognitive expansion.

The transformative potential of AI in democratising access to information comes with risks. Algorithmic enclosure and content curation can deepen epistemic inequality, particularly for the youth, whose digital fluency often masks a lack of epistemic literacy. What we need is algorithmic transparency, civic education in media literacy, and inclusive knowledge formats…(More)”.

Policy Implications of DeepSeek AI’s Talent Base


Brief by Amy Zegart and Emerson Johnston: “Chinese startup DeepSeek’s highly capable R1 and V3 models challenged prevailing beliefs about the United States’ advantage in AI innovation, but public debate focused more on the company’s training data and computing power than on its human talent. We analyzed data on the 223 authors listed on DeepSeek’s five foundational technical research papers, including information on their research output, citations, and institutional affiliations, to identify notable talent patterns. Nearly all of DeepSeek’s researchers were educated or trained in China, and more than half never left China for schooling or work. Of the quarter or so who did gain some experience in the United States, most returned to China to work on AI development there. These findings challenge the core assumption that the United States holds a natural AI talent lead. Policymakers need to reinvest in competing to attract and retain the world’s best AI talent while bolstering STEM education to maintain competitiveness…(More)”.

Interoperability and Openness Between Different Governance Models: The Dynamics of Mastodon/Threads and Wikipedia/Google


Article by Aline Blankertz & Svea Windwehr: “Governments, businesses and civil society representatives, among others, call for “alternatives” to compete with and possibly replace big tech platforms. These alternatives are usually characterized by different governance approaches like being not-for-profit, open, free, decentralized and/or community-based. We find that strengthening alternative governance models needs to account for the dynamic effects of operating in a digital ecosystem shaped by ad-driven platforms. Specifically, we explore in this article: 1) how interoperability between the microblogging platforms Threads (by Meta) and Mastodon (a not-for-profit service running on a federated open-source protocol) may foster competition, but also create a risk of converging governance in terms of e.g. content moderation and privacy practices; 2) how openness of the online encyclopedia Wikipedia allows Google Search to appropriate most of the value created by their vertical interaction and how the Wikimedia Foundation seeks to reduce that imbalance; 3) which types of interventions might be suitable to support alternatives without forcing them to emulate big tech governance, including asymmetric interoperability, digital taxes and regulatory restraints on commercial platforms…(More)”.

Governing in the Age of AI: Reimagining Local Government


Report by the Tony Blair Institute for Global Change: “…The limits of the existing operating model have been reached. Starved of resources by cuts inflicted by previous governments over the past 15 years, many councils are on the verge of bankruptcy even though local taxes are at their highest level. Residents wait too long for care, too long for planning applications and too long for benefits; many people never receive what they are entitled to. Public satisfaction with local services is sliding.

Today, however, there are new tools – enabled by artificial intelligence – that would allow councils to tackle these challenges. The day-to-day tasks of local government, whether related to the delivery of public services or planning for the local area, can all be performed faster, better and cheaper with the use of AI – a true transformation not unlike the one seen a century ago.

These tools would allow councils to overturn an operating model that is bureaucratic, labour-intensive and unresponsive to need. AI could release staff from repetitive tasks and relieve an overburdened and demotivated workforce. It could help citizens navigate the labyrinth of institutions, webpages and forms with greater ease and convenience. It could support councils to make better long-term decisions to drive economic growth, without which the resource pressure will only continue to build…(More)”.

The Dangers of AI Nationalism and Beggar-Thy-Neighbour Policies


Paper by Susan Aaronson: “As they attempt to nurture and govern AI, some nations are acting in ways that – with or without direct intent – discriminate among foreign market actors. For example, some governments are excluding foreign firms from access to incentives for high-speed computing, or requiring local content in the AI supply chain, or adopting export controls for the advanced chips that power many types of AI. If policy makers in country X can limit access to the building blocks of AI – whether funds, data or high-speed computing power – it might slow down or limit the AI prowess of its competitors in country Y and/or Z. At the same time, however, such policies could violate international trade norms of non-discrimination. Moreover, if policy makers can shape regulations in ways that benefit local AI competitors, they may also impede the competitiveness of other nations’ AI developers. Such regulatory policies could be discriminatory and breach international trade rules as well as long-standing rules about how nations and firms compete – which, over time, could reduce trust among nations. In this article, the author attempts to illuminate AI nationalism and its consequences by answering four questions:

– What are nations doing to nurture AI capacity within their borders?

– Are some of these actions trade distorting?

– Are some nations adopting twenty-first-century beggar-thy-neighbour policies?

– What are the implications of such trade-distorting actions?

The author finds that AI nationalist policies appear to help countries with the largest and most established technology firms across multiple levels of the AI value chain. Hence, policy makers’ efforts to dominate these sectors, for example through large investment sums or beggar-thy-neighbour policies, are not a good way to build trust…(More)”.

Balancing Data Sharing and Privacy to Enhance Integrity and Trust in Government Programs


Paper by National Academy of Public Administration: “Improper payments and fraud cost the federal government hundreds of billions of dollars each year, wasting taxpayer money and eroding public trust. At the same time, agencies are increasingly expected to do more with less. Finding better ways to share data, without compromising privacy, is critical for ensuring program integrity in a resource-constrained environment.

Key Takeaways

  • Data sharing strengthens program integrity and fraud prevention. Agencies and oversight bodies like GAO and OIGs have uncovered large-scale fraud by using shared data.
  • Opportunities exist to streamline and expedite the compliance processes required by privacy laws and reduce systemic barriers to sharing data across federal agencies.
  • Targeted reforms can address these barriers while protecting privacy:
    1. OMB could issue guidance to authorize fraud prevention as a routine use in System of Records Notices.
    2. Congress could enact special authorities or exemptions for data sharing that supports program integrity and fraud prevention.
    3. A centralized data platform could help to drive cultural change and support secure, responsible data sharing…(More)”

A matter of choice: People and possibilities in the age of AI


UNDP Human Development Report 2025: “Artificial intelligence (AI) has broken into a dizzying gallop. While AI feats grab headlines, they privilege technology in a make-believe vacuum, obscuring what really matters: people’s choices.

The choices that people have and can realize, within ever-expanding freedoms, are essential to human development, whose goal is for people to live lives they value and have reason to value. A world with AI is flush with choices, the exercise of which is both a matter of human development and a means to advance it.

Going forward, development depends less on what AI can do—not on how human-like it is perceived to be—and more on mobilizing people’s imaginations to reshape economies and societies to make the most of it. Instead of trying vainly to predict what will happen, this year’s Human Development Report asks what choices can be made so that new development pathways for all countries dot the horizon, helping everyone have a shot at thriving in a world with AI…(More)”.

Charting the AI for Good Landscape – A New Look


Article by Perry Hewitt and Jake Porway: “More than 50% of nonprofits report that their organization uses generative AI in day-to-day operations. We’ve also seen an explosion of AI tools and investments. 10% of all the AI companies that exist in the US were founded in 2022, and that number has likely grown in subsequent years.  With investors funneling over $300B into AI and machine learning startups, it’s unlikely this trend will reverse any time soon.

Not surprisingly, the conversation about Artificial Intelligence (AI) is now everywhere, spanning from commercial uses such as virtual assistants and consumer AI to public goods, like AI-driven drug discovery and chatbots for education. The dizzying number of new AI programs and initiatives – over 5,000 new tools listed in 2023 on AI directories like TheresAnAI alone – can make the AI landscape challenging to navigate in general, much less for social impact. Luckily, four years ago, we surveyed the Data and AI for Good landscape and mapped out distinct families of initiatives based on their core goals. Today, we are revisiting that landscape to help folks get a handle on the field as it stands and to reflect on how it has expanded, diversified, and matured…(More)”.

Smart Cities: Technologies and Policy Options to Enhance Services and Transparency


GAO Report: “Cities across the nation are using “smart city” technologies like traffic cameras and gunshot detectors to improve public services. In this technology assessment, we looked at their use in transportation and law enforcement.

Experts and city officials reported multiple benefits. For example, Houston uses cameras and Bluetooth sensors to measure traffic flow and adjust signal timing. Other cities use license plate readers to find stolen vehicles.

But the technologies can be costly and the benefits unclear. The data they collect may be sold, raising privacy and civil liberties concerns. We offer three policy options to address such challenges…(More)”.

Understanding and Addressing Misinformation About Science


Report by National Academies of Sciences, Engineering, and Medicine: “Our current information ecosystem makes it easier for misinformation about science to spread and harder for people to figure out what is scientifically accurate. Proactive solutions are needed to address misinformation about science, an issue of public concern given its potential to cause harm at individual, community, and societal levels. Improving access to high-quality scientific information can fill information voids that exist for topics of interest to people, reducing the likelihood of exposure to and uptake of misinformation about science. Misinformation is commonly perceived as a matter of bad actors maliciously misleading the public, but misinformation about science arises both intentionally and inadvertently and from a wide range of sources…(More)”.