G7 Toolkit for Artificial Intelligence in the Public Sector


Toolkit by OECD: “…a comprehensive guide designed to help policymakers and public sector leaders translate principles for safe, secure, and trustworthy Artificial Intelligence (AI) into actionable policies. AI can help improve the efficiency of internal operations, the effectiveness of policymaking, the responsiveness of public services, and overall transparency and accountability. Recognising both the opportunities and risks posed by AI, this toolkit provides practical insights, shares good practices for the use of AI in and by the public sector, integrates ethical considerations, and provides an overview of G7 trends. It further showcases public sector AI use cases, detailing their benefits, as well as the implementation challenges faced by G7 members, together with the emerging policy responses to guide and coordinate the development, deployment, and use of AI in the public sector. The toolkit finally highlights key stages and factors characterising the journey of public sector AI solutions…(More)”

Exploring New Frontiers of Citizen Participation in the Policy Cycle


OECD Discussion Paper: “… starts from the premise that democracies are endowed with valuable assets and that putting citizens at the heart of policy making offers an opportunity to strengthen democratic resilience. It draws on data, evidence and insights generated through a wide range of work underway at the OECD to identify systemic challenges and propose lines of action for the future. It calls for greater attention to, and investments in, citizen participation in policy making as one of the core functions of the state and the ‘life force’ of democratic governance. In keeping with the OECD’s strong commitment to providing a platform for diverse perspectives on challenging policy issues, it also offers a collection of thought-provoking opinion pieces by leading practitioners whose position as elected officials, academics and civil society leaders provides them with a unique vantage point from which to scan the horizon. As a contribution to an evolving field, this Discussion Paper offers neither a prescriptive framework nor a roadmap for governments but represents a step towards reaching a shared understanding of the very real challenges that lie ahead. It is also a timely invitation to all interested actors to join forces and take concerted action to embed meaningful citizen participation in policy making…(More)”.

Cross-border data flows in Africa: Continental ambitions and political realities


Paper by Melody Musoni, Poorva Karkare and Chloe Teevan: “Africa must prioritise data usage and cross-border data sharing to realise the goals of the African Continental Free Trade Area and to drive innovation and AI development. Accessible and shareable data is essential for the growth and success of the digital economy, enabling innovations and economic opportunities, especially in a rapidly evolving landscape.

African countries, through the African Union (AU), have a common vision of sharing data across borders to boost economic growth. However, the adopted continental digital policies are often inconsistently applied at the national level, where some member states implement restrictive measures like data localisation that limit the free flow of data.

The paper looks at national policies that often prioritise domestic interests and how those conflict with continental goals. This is due to differences in political ideologies, socio-economic conditions, security concerns and economic priorities. This misalignment between national agendas and the broader AU strategy is shaped by each country’s unique context, as seen in the examples of Senegal, Nigeria and Mozambique, which face distinct challenges in implementing the continental vision.

The paper concludes with actionable recommendations for the AU, member states and the partnership with the European Union. It suggests that the AU enhances support for data-sharing initiatives and urges member states to focus on policy alignment, address data deficiencies, build data infrastructure and find new ways to use data. It also highlights how the EU can strengthen its support for Africa’s data-sharing goals…(More)”.

Understanding local government responsible AI strategy: An international municipal policy document analysis


Paper by Anne David et al: “The burgeoning capabilities of artificial intelligence (AI) have prompted numerous local governments worldwide to consider its integration into their operations. Nevertheless, instances of notable AI failures have heightened ethical concerns, emphasising the imperative for local governments to approach the adoption of AI technologies in a responsible manner. While local government AI guidelines endeavour to incorporate characteristics of responsible innovation and technology (RIT), it remains essential to assess the extent to which these characteristics have been integrated into policy guidelines to facilitate more effective AI governance in the future. This study closely examines local government policy documents (n = 26) through the lens of RIT, employing directed content analysis with thematic data analysis software. The results reveal that: (a) Not all RIT characteristics have been given equal consideration in these policy documents; (b) Participatory and deliberative considerations were the most frequently mentioned responsible AI characteristics in policy documents; (c) Adaptable, explainable, sustainable, and accountable considerations were the least present responsible AI characteristics in policy documents; (d) Many of the considerations overlapped with each other as local governments were at the early stages of identifying them. Furthermore, the paper summarised strategies aimed at assisting local authorities in identifying their strengths and weaknesses in responsible AI characteristics, thereby facilitating their transformation into governing entities with responsible AI practices. The study informs local government policymakers, practitioners, and researchers on the critical aspects of responsible AI policymaking…(More)” See also: AI Localism

Statistical Significance—and Why It Matters for Parenting


Blog by Emily Oster: “…When we say an effect is “statistically significant at the 5% level,” what this means is that there is less than a 5% chance that we’d see an effect of this size if the true effect were zero. (The “5% level” is a common cutoff, but things can be significant at the 1% or 10% level also.) 

The natural follow-up question is: Why would any effect we see occur by chance? The answer lies in the fact that data is “noisy”: it comes with error. To see this a bit more, we can think about what would happen if we studied a setting where we know our true effect is zero. 

My fake study 

Imagine the following (fake) study. Participants are randomly assigned to eat a package of either blue or green M&Ms, and then they flip a (fair) coin and you see if it is heads. Your analysis will compare the number of heads that people flip after eating blue versus green M&Ms and report whether this is “statistically significant at the 5% level.”…(More)”.
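Oster’s fake study can be checked with a quick simulation (a minimal sketch, not from the post; the sample size and trial count are illustrative assumptions): since M&M color cannot influence a fair coin, roughly 5% of repeated experiments should still come out “significant” at the 5% level purely by chance.

```python
import math
import random

def two_proportion_p_value(heads_a, heads_b, n):
    """Two-sided z-test for a difference in heads rates between two groups of size n."""
    p_a, p_b = heads_a / n, heads_b / n
    p_pool = (heads_a + heads_b) / (2 * n)
    se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
    if se == 0:
        return 1.0
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the normal tail

def run_fake_study(n=200, rng=random):
    """Blue vs. green M&M groups, each flipping n fair coins; color has zero true effect."""
    heads_blue = sum(rng.random() < 0.5 for _ in range(n))
    heads_green = sum(rng.random() < 0.5 for _ in range(n))
    return two_proportion_p_value(heads_blue, heads_green, n)

random.seed(0)
trials = 2000
false_positives = sum(run_fake_study() < 0.05 for _ in range(trials))
print(f"'Significant' results with zero true effect: {false_positives / trials:.1%}")
```

Running this yields a false-positive rate hovering around the nominal 5%, which is exactly what “significant at the 5% level” promises: even with no real effect, one in twenty comparisons will look like one.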

Tech Agnostic


Book by Greg Epstein: “…Today’s technology has overtaken religion as the chief influence on twenty-first century life and community. In Tech Agnostic, Harvard and MIT’s influential humanist chaplain Greg Epstein explores what it means to be a critical thinker with respect to this new faith. Encouraging readers to reassert their common humanity beyond the seductive sheen of “tech,” this book argues for tech agnosticism—not worship—as a way of life. Without suggesting we return to a mythical pre-tech past, Epstein shows why we must maintain a freethinking critical perspective toward innovation until it proves itself worthy of our faith or not.

Epstein asks probing questions that center humanity at the heart of engineering: Who profits from an uncritical faith in technology? How can we remedy technology’s problems while retaining its benefits? Showing how unbelief has always served humanity, Epstein revisits the historical apostates, skeptics, mystics, Cassandras, heretics, and whistleblowers who embody the tech reformation we desperately need. He argues that we must learn how to collectively demand that technology serve our pursuit of human lives that are deeply worth living…(More)”.

Key lesson of this year’s Nobel Prize: The importance of unlocking data responsibly to advance science and improve people’s lives


Article by Stefaan Verhulst, Anna Colom, and Marta Poblet: “This year’s Nobel Prize for Chemistry owes a lot to available, standardised, high quality data that can be reused to improve people’s lives. The winners, Prof David Baker from the University of Washington, and Demis Hassabis and John M. Jumper from Google DeepMind, were awarded respectively for the development and prediction of new proteins that can have important medical applications. These developments build on AI models that can predict protein structures in unprecedented ways. However, key to these models and their potential to unlock health discoveries is an open curated dataset with high quality and standardised data, something still rare despite the pace and scale of AI-driven development.

We live in a paradoxical time of both data abundance and data scarcity: a lot of data is being created and stored, but it tends to be inaccessible due to private interests and weak regulations. The challenge, then, is to prevent the misuse of data whilst avoiding its missed use.

The reuse of data remains limited in Europe, but a new set of regulations seeks to increase the possibilities of responsible data reuse. When the European Commission made the case for its European Data Strategy in 2020, it envisaged the European Union as “a role model for a society empowered by data to make better decisions — in business and the public sector,” and acknowledged the need to improve “governance structures for handling data and to increase its pools of quality data available for use and reuse”…(More)”.

Discounting the Future: The Ascendency of a Political Technology


Book by Liliana Doganova: “Forest fires, droughts, and rising sea levels beg a nagging question: have we lost our capacity to act on the future? Liliana Doganova’s book sheds new light on this anxious query. It argues that our relationship to the future has been trapped in the gears of a device called discounting. While its incidence remains little known, discounting has long been entrenched in market and policy practices, shaping the ways firms and governments look to the future and make decisions accordingly. Thus, a sociological account of discounting formulas has become urgent.

Discounting means valuing things through the flows of costs and benefits that they are likely to generate in the future, with these future flows being literally dis-counted as they are translated in the present. How have we come to think of the future, and of valuation, in such terms? Building on original empirical research in the historical sociology of discounting, Doganova takes us to some of the sites and moments in which discounting took shape and gained momentum: valuation of European forests in the eighteenth and nineteenth centuries; economic theories devised in the early 1900s; debates over business strategies in the postwar era; investor-state disputes over the nationalization of natural resources; and drug development in the biopharmaceutical industry today. Weaving these threads together, the book pleads for an understanding of discounting as a political technology, and of the future as a contested domain…(More)”

How Artificial Intelligence Can Support Peace


Essay by Adam Zable, Marine Ragnet, Roshni Singh, Hannah Chafetz, Andrew J. Zahuranec, and Stefaan G. Verhulst: “In what follows we provide a series of case studies of how AI can be used to promote peace, leveraging what we learned at the Kluz Prize for PeaceTech and NYU PREP and Becera events. These case studies and applications of AI are limited to what was included in these initiatives and are not fully comprehensive. With these examples of the role of technology before, during, and after a conflict, we hope to broaden the discussion around the potential positive uses of AI in the context of today’s global challenges.

The table above summarizes how AI may be harnessed throughout the conflict cycle, with supporting examples from the Kluz Prize for PeaceTech and NYU PREP and Becera events.

(1) The Use of AI Before a Conflict

AI can support conflict prevention by predicting emerging tensions and supporting mediation efforts. In recent years, AI-driven early warning systems have been used to identify patterns that precede violence, allowing for timely interventions. 

For instance, The Violence & Impacts Early-Warning System (VIEWS), developed by a research consortium at Uppsala University in Sweden and the Peace Research Institute Oslo (PRIO) in Norway, employs AI and machine learning algorithms to analyze large datasets, including conflict history, political events, and socio-economic indicators—supporting negative peace and peacebuilding efforts. These algorithms are trained to recognize patterns that precede violent conflict, using both supervised and unsupervised learning methods to make predictions about the likelihood and severity of conflicts up to three years in advance. The system also uses predictive analytics to identify potential hotspots, where specific factors—such as spikes in political unrest or economic instability—suggest a higher risk of conflict…(More)”.
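The supervised side of such an early-warning system can be sketched in miniature (an illustrative toy, not VIEWS’s actual model; the indicator names, synthetic data, and threshold are all assumptions): a risk classifier is fit on historical indicators and then scores new scenarios.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_logistic(samples, labels, lr=0.1, epochs=500):
    """Fit a logistic-regression risk model by stochastic gradient descent."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Synthetic training data: [unrest_index, economic_instability], both scaled to [0, 1].
random.seed(1)
X, y = [], []
for _ in range(400):
    unrest, instability = random.random(), random.random()
    # Toy ground truth: conflict becomes likely when both indicators run high.
    conflict = 1 if unrest + instability + random.gauss(0, 0.2) > 1.2 else 0
    X.append([unrest, instability])
    y.append(conflict)

w, b = train_logistic(X, y)
risk = sigmoid(w[0] * 0.9 + w[1] * 0.8 + b)  # high-unrest, unstable scenario
calm = sigmoid(w[0] * 0.1 + w[1] * 0.1 + b)  # low-risk scenario
print(f"risk score (hotspot): {risk:.2f}, risk score (calm): {calm:.2f}")
```

The model learns to assign a much higher risk score to the hotspot scenario than to the calm one; real systems like VIEWS do this at country-month resolution with far richer features (conflict history, political events, socio-economic indicators) and validated forecasting horizons.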

WikiProject AI Cleanup


Article by Emanuel Maiberg: “A group of Wikipedia editors have formed WikiProject AI Cleanup, “a collaboration to combat the increasing problem of unsourced, poorly-written AI-generated content on Wikipedia.”

The group’s goal is to protect one of the world’s largest repositories of information from the same kind of misleading AI-generated information that has plagued Google search results, books sold on Amazon, and academic journals.

“A few of us had noticed the prevalence of unnatural writing that showed clear signs of being AI-generated, and we managed to replicate similar ‘styles’ using ChatGPT,” Ilyas Lebleu, a founding member of WikiProject AI Cleanup, told me in an email. “Discovering some common AI catchphrases allowed us to quickly spot some of the most egregious examples of generated articles, which we quickly wanted to formalize into an organized project to compile our findings and techniques.”…(More)”.