G7 Toolkit for Artificial Intelligence in the Public Sector


OECD Toolkit: “…a comprehensive guide designed to help policymakers and public sector leaders translate principles for safe, secure, and trustworthy Artificial Intelligence (AI) into actionable policies. AI can help improve the efficiency of internal operations, the effectiveness of policymaking, the responsiveness of public services, and overall transparency and accountability. Recognising both the opportunities and risks posed by AI, this toolkit provides practical insights, shares good practices for the use of AI in and by the public sector, integrates ethical considerations, and provides an overview of G7 trends. It further showcases public sector AI use cases, detailing their benefits, as well as the implementation challenges faced by G7 members, together with the emerging policy responses to guide and coordinate the development, deployment, and use of AI in the public sector. The toolkit finally highlights key stages and factors characterising the journey of public sector AI solutions…(More)”.

The Age of AI Nationalism and Its Effects


Paper by Susan Ariel Aaronson: “Policy makers in many countries are determined to develop artificial intelligence (AI) within their borders because they view AI as essential to both national security and economic growth. Some countries have proposed adopting AI sovereignty, where the nation develops AI for its people, by its people and within its borders. In this paper, the author makes a distinction between policies designed to advance domestic AI and policies that, with or without direct intent, hamper the production or trade of foreign-produced AI (known as “AI nationalism”). AI nationalist policies in one country can make it harder for firms in another country to develop AI. If officials in country X can limit access to key components of the AI supply chain, such as data, capital, expertise or computing power, they may be able to limit the AI prowess of competitors in countries Y and Z. Moreover, if policy makers can shape regulations in ways that benefit local AI competitors, they may also impede the competitiveness of other nations’ AI developers. AI nationalism may seem appropriate given the import of AI, but this paper aims to illuminate how AI nationalistic policies may backfire and could divide the world into AI haves and have-nots…(More)”.

Social Systems Evidence


About: “…a continuously updated repository of syntheses of research evidence about the programs, services and products available in a broad range of government sectors and program areas (e.g., climate action, community and social services, economic development and growth, education, environmental conservation, housing and transportation) as well as the governance, financial and delivery arrangements within which these programs, services and products are provided, and the implementation strategies that can help to ensure that these programs, services and products get to those who need them. 

The content covers the Sustainable Development Goals, with the exception of the health part of goal 3 (which is already well covered by existing databases).

The types of syntheses include evidence briefs for policy, overviews of evidence syntheses, evidence syntheses addressing questions about effectiveness, evidence syntheses addressing other types of questions, evidence syntheses in progress (i.e., protocols for evidence syntheses), and evidence syntheses being planned (i.e., registered titles for evidence syntheses). Social Systems Evidence also contains a continuously updated repository of economic evaluations in these same domains…(More)”

We are Developing AI at the Detriment of the Global South — How a Focus on Responsible Data Re-use Can Make a Difference


Article by Stefaan Verhulst and Peter Addo: “…At the root of this debate runs a frequent concern with how data is collected, stored, used — and responsibly reused for purposes other than those for which it was initially collected…

In this article, we propose that promoting responsible reuse of data requires addressing the power imbalances inherent in the data ecology. These imbalances disempower key stakeholders, thereby undermining trust in data management practices. As we recently argued in a report on “responsible data reuse in developing countries,” prepared for Agence Française de Développement (AFD), power imbalances may be particularly pernicious when considering the use of data in the Global South. Addressing these requires broadening notions of consent, beyond current highly individualized approaches, in favor of what we instead term a social license for reuse.

In what follows, we explain what a social license means, and propose three steps to help achieve that goal. We conclude by calling for a new research agenda — one that would stretch existing disciplinary and conceptual boundaries — to reimagine what social licenses might mean, and how they could be operationalized…(More)”.

Science Diplomacy and the Rise of Technopoles


Article by Vaughan Turekian and Peter Gluckman: “…Science diplomacy has an important, even existential imperative to help the world reconsider the necessity of working together toward big global goals. Climate change may be the most obvious example of where global action is needed, but many other issues have similar characteristics—deep ocean resources, space, and other ungoverned areas, to name a few.

However, taking up this mantle requires acknowledging why past efforts have failed to meet their goals. The global commitment to Sustainable Development Goals (SDGs) is an example. Weaknesses in the UN system, compounded by varied commitments from member states, will prevent the achievement of the SDGs by 2030. This year’s UN Summit of the Future is intended to reboot the global commitment to the sustainability agenda. Regardless of what type of agreement is signed at the summit, its impact may be limited.  


The science community must play an active part in ensuring progress is in fact made, but that will require an expansion of the community’s current role. To understand what this might mean, consider that the Pact for the Future agreed in New York City in September 2024 places “science, technology, and innovation” as one of its five themes. But that becomes actionable either in the narrow sense that technology will provide “answers” to global problems or in the platitudinous sense that science provides advice that is not acted upon. This dichotomy of unacceptable approaches has long bedeviled science’s influence.

For the world to make better use of science, science must take on an expanded responsibility in solving problems at both global and local scales. And science itself must become part of a toolkit—both at the practical and the diplomatic level—to address the sorts of challenges the world will face in the future. To make this happen, more countries must make science diplomacy a core part of their agenda by embedding science advisors within foreign ministries, connecting diplomats to science communities.

As the pace of technological change generates both existential risk and economic, environmental, and social opportunities, science diplomacy has a vital task in balancing outcomes for the benefit of more people. It can also bring the science community (including the social sciences and humanities) to play a critical role alongside nation states. And, as new technological developments enable nonstate actors, and especially the private sector, science diplomacy has an important role to play in helping nation states develop policy that can identify common solutions and engage key partners…(More)”.

How The New York Times incorporates editorial judgment in algorithms to curate its home page


Article by Zhen Yang: “Whether on the web or the app, the home page of The New York Times is a crucial gateway, setting the stage for readers’ experiences and guiding them to the most important news of the day. The Times publishes over 250 stories daily, far more than the 50 to 60 stories that can be featured on the home page at a given time. Traditionally, editors have manually selected and programmed which stories appear, when and where, multiple times daily. This manual process presents challenges:

  • How can we provide readers a relevant, useful, and fresh experience each time they visit the home page?
  • How can we make our editorial curation process more efficient and scalable?
  • How do we maximize the reach of each story and expose more stories to our readers?

To address these challenges, the Times has been actively developing and testing editorially driven algorithms to assist in curating home page content. These algorithms are editorially driven in that a human editor’s judgment or input is incorporated into every aspect of the algorithm — including deciding where on the home page the stories are placed, informing the rankings, and potentially influencing and overriding algorithmic outputs when necessary. From the get-go, we’ve designed algorithmic programming to elevate human curation, not to replace it…
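The hybrid approach the article describes — algorithmic ranking with editorial input shaping scores and, when necessary, overriding outputs — can be sketched roughly as follows. This is a hypothetical illustration only; the Times has not published its actual scoring model, field names, or weighting, so every name and parameter here is an assumption:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Story:
    slug: str
    model_score: float              # hypothetical relevance score from a ranking model
    editorial_boost: float = 0.0    # editor-supplied adjustment that informs the ranking
    pinned_rank: Optional[int] = None  # editor override: fix the story's slot outright

def curate(stories, slots):
    """Rank stories by blended score, then apply editorial pins as overrides."""
    ranked = sorted(stories,
                    key=lambda s: s.model_score + s.editorial_boost,
                    reverse=True)
    result = ranked[:slots]
    # Editors can override algorithmic output by pinning a story to a slot.
    for story in stories:
        if story.pinned_rank is not None and story.pinned_rank < slots:
            if story in result:
                result.remove(story)
            else:
                result = result[:-1]  # make room for the pinned story
            result.insert(story.pinned_rank, story)
    return [s.slug for s in result]
```

The key design point mirrors the article: human judgment enters at two layers — as an input that informs the ranking (`editorial_boost`) and as a hard override of the output (`pinned_rank`) — so the algorithm elevates curation rather than replacing it.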

The Times began using algorithms for content recommendations in 2011 but only recently started applying them to home page modules. For years, we only had one algorithmically-powered module, “Smarter Living,” on the home page, and later, “Popular in The Times.” Both were positioned relatively low on the page.

Three years ago, the formation of a cross-functional team — including newsroom editors, product managers, data scientists, data analysts, and engineers — brought the momentum needed to advance our responsible use of algorithms. Today, nearly half of the home page is programmed with assistance from algorithms that help promote news, features, and sub-brand content, such as The Athletic and Wirecutter. Some of these modules, such as the features module located at the top right of the home page on the web version, are in highly visible locations. During major news moments, editors can also deploy algorithmic modules to display additional coverage to complement a main module of stories near the top of the page. (The topmost news package of Figure 1 is an example of this in action.)…(More)”


Data-driven decisions: the case for randomised policy trials


Speech by Andrew Leigh: “…In 1747, 31-year-old Scottish naval surgeon James Lind set about determining the most effective treatment for scurvy, a disease that was killing thousands of sailors around the world. Selecting 12 sailors suffering from scurvy, Lind divided them into six pairs. Each pair received a different treatment: cider; sulphuric acid; vinegar; seawater; a concoction of nutmeg, garlic and mustard; and two oranges and a lemon. In less than a week, the pair who had received oranges and lemons were back on active duty, while the others languished. Given that sulphuric acid was the British Navy’s main treatment for scurvy, this was a crucial finding.

The trial provided robust evidence for the powers of citrus because it created a credible counterfactual. The sailors didn’t choose their treatments, nor were they assigned based on the severity of their ailment. Instead, they were randomly allocated, making it likely that differences in their recovery were due to the treatment rather than other characteristics.
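The allocation logic behind Lind's design is simple enough to state in a few lines of code. The sketch below is a toy illustration (not part of the speech): it randomly shuffles participants and deals them evenly across treatment arms, which is what makes each arm a credible counterfactual for the others:

```python
import random

def randomise(participants, treatments, seed=None):
    """Randomly allocate participants evenly across treatment arms."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    per_arm = len(shuffled) // len(treatments)
    # Because assignment is random rather than chosen by participants or
    # based on severity, outcome differences can be attributed to treatment.
    return {t: shuffled[i * per_arm:(i + 1) * per_arm]
            for i, t in enumerate(treatments)}

# Lind's trial: 12 sailors, six arms of two.
sailors = [f"sailor_{n}" for n in range(1, 13)]
treatments = ["cider", "sulphuric acid", "vinegar", "seawater",
              "nutmeg mixture", "citrus"]
arms = randomise(sailors, treatments, seed=1747)
```

The `seed` parameter is only there to make the toy reproducible; in a real trial the allocation sequence would be concealed from those enrolling participants.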

Lind’s randomised trial, one of the first in history, has attained legendary status. Yet because 1747 was so long ago, it is easy to imagine that the methods he used are no longer applicable. After all, Lind’s research was conducted at a time before electricity, cars and trains, an era when slavery was rampant and education was reserved for the elite. Surely, some argue, ideas from such an age have been superseded today.

In place of randomised trials, some put their faith in ‘big data’. Between large-scale surveys and extensive administrative datasets, the world is awash in data as never before. Each day, hundreds of exabytes of data are produced. Big data has improved the accuracy of weather forecasts, permitted researchers to study social interactions across racial and ethnic lines, enabled the analysis of income mobility at a fine geographic scale and much more…(More)”

Citizen scientists will be needed to meet global water quality goals


University College London: “Sustainable development goals for water quality will not be met without the involvement of citizen scientists, argues an international team led by a UCL researcher, in a new policy brief.

The policy brief and attached technical brief are published by Earthwatch Europe on behalf of the United Nations Environment Programme (UNEP)-coordinated World Water Quality Alliance that has supported citizen science projects in Kenya, Tanzania and Sierra Leone. The reports detail how policymakers can learn from examples where citizen scientists (non-professionals engaged in the scientific process, such as by collecting data) are already making valuable contributions.

The report authors focus on how to meet one of the UN’s Sustainable Development Goals around improving water quality, which the UN states is necessary for the health and prosperity of people and the planet…

“Locals who know the water and use the water are both a motivated and knowledgeable resource, so citizen science networks can enable them to provide large amounts of data and act as stewards of their local water bodies and sources. Citizen science has the potential to revolutionize the way we manage water resources to improve water quality.”…

The report authors argue that improving water quality data will require governments and organizations to work collaboratively with locals who collect their own data, particularly where government monitoring is scarce, but also where there is government support for citizen science schemes. Water quality improvement has a particularly high potential for citizen scientists to make an impact, as professionally collected data is often limited by a shortage of funding and infrastructure, while there are effective citizen science monitoring methods that can provide reliable data.

The authors write that the value of citizen science goes beyond the data collected, as there are other benefits pertaining to education of volunteers, increased community involvement, and greater potential for rapid response to water quality issues…(More)”.

Scientists around the world call to protect research on one of humanity’s greatest short-term threats – Disinformation


Forum on Democracy and Information: “At a critical time for understanding digital communications’ impact on societies, research on disinformation is endangered. 

In August, researchers around the world bid farewell to CrowdTangle – the Meta-owned social media monitoring tool. The decision by Meta to close the number one platform used to track mis- and disinformation, in what is a major election year, only to present its alternative tool Meta Content Library and API, has been met with a barrage of criticism.

If, as suggested by the World Economic Forum’s 2024 global risk report, disinformation is one of the biggest short-term threats to humanity, our collective ability to understand how it spreads and impacts our society is crucial. Just as we would not impede scientific research into the spread of viruses and disease, nor into natural ecosystems or other historical and social sciences, disinformation research must be permitted to be carried out unimpeded and with access to information needed to understand its complexity. Understanding the political economy of disinformation as well as its technological dimensions is also a matter of public health, democratic resilience, and national security.

By directly affecting the research community’s ability to open social media black boxes, this radical decision will also, in turn, hamper public understanding of how technology affects democracy. Public interest scrutiny is also essential for the next era of technology, notably for the world’s largest AI systems, which are similarly proprietary and opaque. The research community is already calling on AI companies to learn from the mistakes of social media and guarantee protections for good faith research. The solution falls on multiple shoulders and the global scientific community, civil society, public institutions and philanthropies must come together to meaningfully foster and protect public interest research on information and democracy…(More)”.

Leveraging AI for Democracy: Civic Innovation on the New Digital Playing Field


Report by Beth Kerley, Carl Miller, and Fernanda Campagnucci: “Like social media before them, new AI tools promise to change the game when it comes to civic engagement. These technologies offer bold new possibilities for investigative journalists, anticorruption advocates, and others working with limited resources to advance democratic norms.

Yet the transformation wrought by AI advances is far from guaranteed to work in democracy’s favor. Potential threats to democracy from AI have drawn wide attention. To better the odds for prodemocratic actors in a fluid technological environment, systematic thinking about how to make AI work for democracy is needed.

The essays in this report outline possible paths toward a prodemocratic vision for AI. An overview essay by Beth Kerley based on insights from an International Forum for Democratic Studies expert workshop reflects on the critical questions that confront organizations seeking to deploy AI tools. Fernanda Campagnucci, spotlighting the work of Open Knowledge Brasil to open up government data, explores how AI advances are creating new opportunities for citizens to scrutinize public information. Finally, Demos’s Carl Miller sheds light on how AI technologies that enable new forms of civic deliberation might change the way we think about democratic participation itself…(More)”.