The case for adaptive and end-to-end policy management


Article by Pia Andrews: “Why should we reform how we do policy? Simple. Because the gap between policy design and delivery has become the biggest barrier to delivering good public services and policy outcomes and is a challenge most public servants experience daily, directly or indirectly.

This gap hasn’t always existed: policy design and delivery were separated as part of the New Public Management reforms in the ’90s. When you also consider the accelerating rate of change, the increasing cadence of emergencies, and the massive speed and scale of new technologies, you could argue that end-to-end policy reform is our most urgent problem to solve.

Policy teams globally have been exploring new design methods like human-centred design, test-driven iteration (agile), and multi-disciplinary teams that get policy end users in the room (eg, NSW Policy Lab). There has also been an increased focus on improving policy evaluation across the world (eg, the Australian Centre for Evaluation). In both cases, I’m delighted to see innovative approaches being normalised across the policy profession, but it has become obvious that improving design and/or evaluation is still far from sufficient to drive better (or more humane) policy outcomes in an ever-changing world. The problem is not only the current systemic inability to detect and respond to unintended consequences as they emerge, but also the lack of policy agility that perpetuates issues long after they are identified.

Below I outline four current challenges for policy management and a couple of potential solutions, as something of a discussion starter.

Problem 1) The separation of (and mutual incomprehension between) policy design, delivery and the public

The lack of multi-disciplinary policy design; a set-and-forget approach to policy; delivery teams left to interpret policy instructions without support; gaps and inconsistent interpretation between policy modelling systems and policy delivery systems; and the absence of feedback loops to improve policy over time have together created a series of black holes throughout the process. Tweaking the process as it currently stands will not fix the black holes. We need a more holistic model for policy design, delivery and management…(More)”.

After USTR’s Move, Global Governance of Digital Trade Is Fraught with Unknowns


Article by Patrick Leblond: “On October 25, the United States announced at the World Trade Organization (WTO) that it was dropping its support, in the negotiations on international e-commerce (the so-called Joint Statement Initiative process), for provisions meant to promote the free flow of data across borders and to protect the source code in applications and algorithms.

According to the Office of the US Trade Representative (USTR): “In order to provide enough policy space for those debates to unfold, the United States has removed its support for proposals that might prejudice or hinder those domestic policy considerations.” In other words, the domestic regulation of data, privacy, artificial intelligence, online content and the like seems to have taken precedence over unhindered international digital trade, which the United States previously strongly defended in trade agreements such as the Trans-Pacific Partnership (TPP) and the Canada-United States-Mexico Agreement (CUSMA)…

One pathway for the future sees the digital governance noodle bowl getting bigger and messier. In this scenario, international digital trade suffers. Agreements continue proliferating but remain ineffective at fostering cross-border digital trade: either they remain hortatory with attempts at cooperation on non-strategic issues, or no one pays attention to the binding provisions because business can’t keep up and governments want to retain their “policy space.” After all, why has there not yet been any dispute launched based on binding provisions in a digital trade agreement (either on its own or as part of a larger trade deal) when there has been increasing digital fragmentation?

The other pathway leads to the creation of a new international standards-setting and governance body (call it an International Digital Standards Board), such as exists for banking and finance. Countries that are members of such an international organization and effectively apply the commonly agreed standards become part of a single digital area where they can conduct cross-border digital trade without impediments. This is the only way to realize the G7’s “data free flow with trust” vision, originally proposed by Japan…(More)”.

Steering Responsible AI: A Case for Algorithmic Pluralism


Paper by Stefaan G. Verhulst: “In this paper, I examine questions surrounding AI neutrality through the prism of existing literature and scholarship about mediation and media pluralism. Such traditions, I argue, provide a valuable theoretical framework for how we should approach the (likely) impending era of AI mediation. In particular, I suggest examining further the notion of algorithmic pluralism. Contrasting this notion with the dominant idea of algorithmic transparency, I seek to describe what algorithmic pluralism may be, and present both its opportunities and challenges. Implemented thoughtfully and responsibly, I argue, algorithmic or AI pluralism has the potential to sustain the diversity, multiplicity, and inclusiveness that are so vital to democracy…(More)”.

Governing the economics of the common good


Paper by Mariana Mazzucato: “To meet today’s grand challenges, economics requires an understanding of how common objectives may be collaboratively set and met. Tied to the assumption that the state can, at best, fix market failures and is always at risk of ‘capture’, economic theory has been unable to offer such a framework. To move beyond such limiting assumptions, the paper provides a renewed conception of the common good, going beyond the classic public good and commons approach, as a way of steering and shaping (rather than just fixing) the economy towards collective goals…(More)”.

A Manifesto on Enforcing Law in the Age of ‘Artificial Intelligence’


Manifesto by the Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of ‘Artificial Intelligence’: “… calls for the effective and legitimate enforcement of laws concerning AI systems. In doing so, we recognise the important and complementary role of standards and compliance practices. Whereas the first manifesto focused on the relationship between democratic law-making and technology, this second manifesto shifts focus from the design of law in the age of AI to the enforcement of law. Concretely, we offer 10 recommendations for addressing the key enforcement challenges shared across transatlantic stakeholders. We call on those who support these recommendations to sign this manifesto…(More)”.

Using AI to support people with disability in the labour market


OECD Report: “People with disability face persisting difficulties in the labour market. There are concerns that AI, if managed poorly, could further exacerbate these challenges. Yet, AI also has the potential to create more inclusive and accommodating environments and might help remove some of the barriers faced by people with disability in the labour market. Building on interviews with more than 70 stakeholders, this report explores the potential of AI to foster employment for people with disability, accounting for both the transformative possibilities of AI-powered solutions and the risks attached to the increased use of AI for people with disability. It also identifies obstacles hindering the use of AI and discusses what governments could do to avoid the risks and seize the opportunities of using AI to support people with disability in the labour market…(More)”.

Can AI solve medical mysteries? It’s worth finding out


Article by Bina Venkataraman: “Since finding a primary care doctor these days takes longer than finding a decent used car, it’s little wonder that people turn to Google to probe what ails them. Be skeptical of anyone who claims to be above it. Though I was raised by scientists and routinely read medical journals out of curiosity, in recent months I’ve gone online to investigate causes of a lingering cough, ask how to get rid of wrist pain and look for ways to treat a bad jellyfish sting. (No, you don’t ask someone to urinate on it.)

Dabbling in self-diagnosis is becoming more robust now that people can go to chatbots powered by large language models scouring mountains of medical literature to yield answers in plain language — in multiple languages. What might an elevated inflammation marker in a blood test combined with pain in your left heel mean? The AI chatbots have some ideas. And researchers are finding that, when fed the right information, they’re often not wrong. Recently, one frustrated mother, whose son had seen 17 doctors for chronic pain, put his medical information into ChatGPT, which accurately suggested tethered cord syndrome, leading a Michigan neurosurgeon to confirm an underlying diagnosis of spina bifida that could be helped by an operation.

The promise of this trend is that patients might be able to get to the bottom of mysterious ailments and undiagnosed illnesses by generating possible causes for their doctors to consider. The peril is that people may come to rely too much on these tools, trusting them more than medical professionals, and that our AI friends will fabricate medical evidence that misleads people about, say, the safety of vaccines or the benefits of bogus treatments. A question looming over the future of medicine is how to get the best of what artificial intelligence can offer us without the worst.

It’s in the diagnosis of rare diseases — which afflict an estimated 30 million Americans and hundreds of millions of people worldwide — that AI could almost certainly make things better. “Doctors are very good at dealing with the common things,” says Isaac Kohane, chair of the department of biomedical informatics at Harvard Medical School. “But there are literally thousands of diseases that most clinicians will have never seen or even have ever heard of.”…(More)”.
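The workflow the article describes (feeding structured case information to a chatbot and asking for candidate conditions a clinician might consider) is simple to sketch. Below is a minimal, hypothetical illustration using the OpenAI Python client; the model name, prompt wording, and case details are all assumptions for illustration, not a validated clinical protocol, and any output is a hypothesis for a doctor to review, never a diagnosis.

```python
# Hypothetical sketch only: asking an LLM for differential-diagnosis hypotheses.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

case_summary = """
Patient-reported information (illustrative only):
- Chronic musculoskeletal pain, worse with activity
- Elevated inflammation marker (CRP) on a recent blood test
- Intermittent numbness in the lower extremities
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system",
         "content": ("Given a case summary, list conditions a clinician "
                     "might consider, each with brief reasoning. Do not "
                     "present any item as a confirmed diagnosis.")},
        {"role": "user", "content": case_summary},
    ],
)

# Candidate conditions to discuss with a doctor, not a diagnosis.
print(response.choices[0].message.content)
```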

Speak Youth To Power


Blog by The National Democratic Institute: “Under the Speak Youth To Power campaign, NDI has emphasized the importance of young people translating their power to sustained action and influence over political decision-making and democratic processes….

In Turkey, Sosyal Iklim aims to develop a culture of dialogue among young people and to ensure their active participation in social and political life. Its board chair, Gaye Tuğrulöz, shared that her organization is “… trying to create spaces for young people to see themselves as leaders. We are trying to say that we don’t have to be older to become decision-makers. We are not the leaders of the future. We are not living for the future. We are the leaders and decision-makers of today. Any decisions that are relevant to young people, we want to get involved. We want to establish these spaces where we have a voice.”…

In Libya, members of the Dialogue and Debate Association (DDA), a youth-led partner organization, are working to promote democracy, civic engagement and peaceful societies. DDA works to empower young people to participate in the political process, make their voices heard, and build a better future for Libya through civic education and building skills for dialogue and debate….

The New Generation Girls and Women Development Initiative (NIGAWD), a youth- and young women-led organization in Nigeria, is working on youth advocacy and policy development, good governance and anti-corruption, elections and human rights. NIGAWD described how youth political participation means the government making spaces to listen to the desires and concerns of young people and allowing them to be part of the policy-making process….(More)”.

Updates to the OECD’s definition of an AI system explained


Article by Stuart Russell: “Obtaining consensus on a definition for an AI system in any sector or group of experts has proven to be a complicated task. However, if governments are to legislate and regulate AI, they need a definition to act as a foundation. Given the global nature of AI, if all governments can agree on the same definition, it allows for interoperability across jurisdictions.

Recently, OECD member countries approved a revised version of the Organisation’s definition of an AI system. We published the definition on LinkedIn, which, to our surprise, received an unprecedented number of comments.

To respond to the interest our community has shown in the definition, we offer a short explanation of the rationale behind the update and of the definition itself. Later this year, we can share even more details once they are finalised.

How OECD countries updated the definition

Here are the revisions to the current text of the definition of “AI system” in detail, showing the previous text and the revised text in full:

Previous definition: “An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”

Revised definition: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”…(More)”
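One way to read the revised definition is to map its key elements (inputs, explicit or implicit objectives, inferred outputs, and influence on physical or virtual environments) onto code. The toy Python sketch below is my own illustration under those assumptions, not OECD material; the names and structure are hypothetical.

```python
# Illustrative only: a toy mapping of the revised OECD definition's elements
# onto a minimal interface. Not an OECD artifact.
from abc import ABC, abstractmethod
from typing import Any, Sequence


class AISystem(ABC):
    """A machine-based system that, for explicit or implicit objectives,
    infers from the input it receives how to generate outputs (predictions,
    content, recommendations, or decisions) that can influence physical or
    virtual environments."""

    @abstractmethod
    def infer(self, inputs: Sequence[Any]) -> Any:
        """Infer how to generate an output from the input received."""

    @abstractmethod
    def act(self, output: Any) -> None:
        """Apply the output to a physical or virtual environment."""


class SpamFilter(AISystem):
    """Toy example: the objective (block spam) is explicit, and the
    'environment' it influences is virtual (the user's inbox)."""

    def infer(self, inputs):
        message = inputs[0]
        # Trivial stand-in for a learned model.
        return "spam" if "win a prize" in message.lower() else "ham"

    def act(self, output):
        print(f"Routing message to the {output} folder")


# Usage: f = SpamFilter(); f.act(f.infer(["Win a prize now!"]))
```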

Elon Musk is now taking applications for data to study X — but only EU risk researchers need apply…


Article by Natasha Lomas: “Lawmakers take note: Elon Musk-owned X appears to have quietly complied with a hard legal requirement in the European Union that requires larger platforms (aka VLOPs) to provide researchers with data access in order to study systemic risks arising from use of their services — risks such as disinformation, child safety issues, gender-based violence and mental health concerns.

X (or Twitter, as it was still called at the time) was designated a VLOP under the EU’s Digital Services Act (DSA) back in April, after the bloc’s regulators confirmed it meets the criteria for an extra layer of rules to kick in, rules intended to drive algorithmic accountability by applying transparency measures to larger platforms.

Researchers intending to study systemic risks in the EU now appear to at least be able to apply for access to study X’s data by accessing a web form through a button which appears at the bottom of this page on its developer platform. (Note researchers can be based in the EU but don’t have to be to meet the criteria; they just need to intend to study systemic risks in the EU.)…(More)”.