Artificial Intelligence for the Internal Democracy of Political Parties


Paper by Claudio Novelli et al.: “The article argues that AI can enhance the measurement and implementation of democratic processes within political parties, known as Intra-Party Democracy (IPD). It identifies the limitations of traditional methods for measuring IPD, which often rely on formal parameters, self-reported data, and tools like surveys. Such limitations lead to partial data collection, rare updates, and significant resource demands. To address these issues, the article suggests that specific data management and Machine Learning techniques, such as natural language processing and sentiment analysis, can improve the measurement and practice of IPD…(More)”.
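
As a purely illustrative sketch of the kind of technique the paper points to (not the authors' method): the toy lexicon-based scorer below rates hypothetical party-forum posts and averages them into one crude IPD sentiment signal. The lexicon, posts, and scoring rule are all invented; a real pipeline would use a trained sentiment model over much larger corpora.

```python
# Illustrative only: a toy lexicon-based sentiment scorer applied to
# hypothetical party-forum posts, aggregated into one crude indicator of
# how members feel about an internal selection process.

POSITIVE = {"fair", "open", "transparent", "inclusive", "heard"}
NEGATIVE = {"rigged", "opaque", "ignored", "top-down", "excluded"}

def sentiment(post: str) -> float:
    """Score one post in [-1, 1] by counting lexicon hits."""
    words = [w.strip(".,!?:;") for w in post.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    hits = pos + neg
    return 0.0 if hits == 0 else (pos - neg) / hits

posts = [  # invented member comments on a leadership ballot
    "The process felt open and members were actually heard.",
    "Completely top-down: the shortlist was rigged and we were ignored.",
    "Transparent rules this time, and an inclusive debate.",
]

scores = [sentiment(p) for p in posts]
print(f"Mean member sentiment: {sum(scores) / len(scores):+.2f}")
```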

Toward a citizen science framework for public policy evaluation


Paper by Giovanni Esposito et al.: “This study pioneers the use of citizen science in evaluating Freedom of Information laws, with a focus on Belgium, where, since its 1994 enactment, Freedom of Information’s effectiveness has remained largely unexamined. Utilizing participatory methods, it engages citizens in assessing transparency policies, significantly contributing to public policy evaluation methodology. The research identifies regional differences in Freedom of Information implementation across Belgian municipalities, highlighting that larger municipalities handle requests more effectively, while administrations generally show reluctance to respond to requests from perceived knowledgeable individuals. This phenomenon reflects a broader European caution toward well-informed requesters. By integrating citizen science, this study not only advances our understanding of Freedom of Information law effectiveness in Belgium but also advocates for a more inclusive, collaborative approach to policy evaluation. It addresses the gap in researchers’ experience with citizen science, showcasing its vast potential to enhance participatory governance and policy evaluation…(More)”.

Definitions, digital, and distance: on AI and policymaking


Article by Gavin Freeguard: “Our first question is not so much ‘to what extent can AI improve public policymaking?’ as ‘what is currently wrong with policymaking?’, and then, ‘is AI able to help?’.

Ask those in and around policymaking about the problems and you’ll get a list likely to include:

  • the practice not having changed in decades (or centuries)
  • it being an opaque ‘dark art’ with little transparency
  • defaulting to easily accessible stakeholders and evidence
  • a separation between policy and delivery (and digital and other disciplines), and failure to recognise the need for agility and feedback as opposed to distinct stages
  • the challenges in measuring or evaluating the impact of policy interventions and understanding what works, with a lack of awareness, let alone sharing, of case studies elsewhere
  • difficulties in sharing data
  • the siloed nature of government complicating cross-departmental working
  • policy asks often being dictated by politics, with electoral cycles leading to short-termism, ministerial churn changing priorities and personal style, events prompting rushed reactions, or political priorities dictating ‘policy-based evidence making’
  • a rush to answers before understanding the problem
  • definitional issues about what policy actually is, which make it hard to get hold of or to develop professional expertise.

If we’re defining ‘policy’ and the problem, we also need to define ‘AI’, or at least acknowledge that we are not only talking about new, shiny generative AI, but a world of other techniques for automating processes and analysing data that have been used in government for years.

So is ‘AI’ able to help? It could support us to make better use of a wider range of data more quickly; but it could privilege that which is easier to measure, strip data of vital context, and embed biases and historical assumptions. It could ‘make decisions more transparent (perhaps through capturing digital records of the process behind them, or by visualising the data that underpins a decision)’; or make them more opaque with ‘black-box’ algorithms, and distract from overcoming the very human cultural problems around greater openness. It could help synthesise submissions or generate ideas to brainstorm; or fail to compensate for deficiencies in underlying government knowledge infrastructure, and generate gibberish. It could be a tempting silver bullet for better policy; or it could paper over the cracks, while underlying technical, organisational and cultural plumbing goes unfixed. It could have real value in some areas, or cause harms in others…(More)”.

Nexus: A Brief History of Information Networks from the Stone Age to AI


Book by Yuval Noah Harari: “For the last 100,000 years, we Sapiens have accumulated enormous power. But despite all our discoveries, inventions, and conquests, we now find ourselves in an existential crisis. The world is on the verge of ecological collapse. Misinformation abounds. And we are rushing headlong into the age of AI—a new information network that threatens to annihilate us. For all that we have accomplished, why are we so self-destructive?

Nexus looks through the long lens of human history to consider how the flow of information has shaped us, and our world. Taking us from the Stone Age, through the canonization of the Bible, early modern witch-hunts, Stalinism, Nazism, and the resurgence of populism today, Yuval Noah Harari asks us to consider the complex relationship between information and truth, bureaucracy and mythology, wisdom and power. He explores how different societies and political systems throughout history have wielded information to achieve their goals, for good and ill. And he addresses the urgent choices we face as non-human intelligence threatens our very existence.
 
Information is not the raw material of truth; neither is it a mere weapon. Nexus explores the hopeful middle ground between these extremes, and in doing so, rediscovers our shared humanity…(More)”.

Advocating an International Decade for Data under G20 Sponsorship


G20 Policy Brief by Lorrayne Porciuncula, David Passarelli, Muznah Siddiqui, and Stefaan Verhulst: “This brief draws attention to the important role of data in social and economic development. It advocates the establishment of an International Decade for Data (IDD) from 2025 to 2035 under G20 sponsorship. The IDD can be used to bridge existing data governance initiatives and deliver global ambitions to use data for social impact, innovation, economic growth, research, and social development. Despite the critical importance of data governance to achieving the SDGs and to emerging topics such as artificial intelligence, there is no unified space that brings together stakeholders to coordinate and shape the data dimension of digital societies.

While various data governance processes exist, they often operate in silos, without effective coordination and interoperability. This fragmented landscape inhibits progress toward a more inclusive and sustainable digital future. The envisaged IDD would foster an integrated approach to data governance that supports all stakeholders in navigating complex data landscapes. Central to this proposal are new institutional frameworks (e.g. data collaboratives), mechanisms (e.g. digital social licenses and sandboxes), and professional domains (e.g. data stewards) that can respond to the multifaceted issue of data governance and the multiplicity of actors involved.

The G20 can capitalize on the Global Digital Compact’s momentum and create a task force to position itself as a data champion through the launch of the IDD, enabling collective progress and steering global efforts towards a more informed and responsible data-centric society…(More)”.

Frontier AI: double-edged sword for public sector


Article by Zeynep Engin: “The power of the latest AI technologies, often referred to as ‘frontier AI’, lies in their ability to automate decision-making by harnessing complex statistical insights from vast amounts of unstructured data, using models that surpass human understanding. The introduction of ChatGPT in late 2022 marked a new era for these technologies, making advanced AI models accessible to a wide range of users, a development poised to permanently reshape how our societies function.

From a public policy perspective, this capacity offers the optimistic potential to enable personalised services at scale, potentially revolutionising healthcare, education, local services, democratic processes, and justice, tailoring them to everyone’s unique needs in a digitally connected society. The ambition is to achieve better outcomes than humanity has managed so far without AI assistance. There is certainly a vast opportunity for improvement, given the current state of global inequity, environmental degradation, polarised societies, and other chronic challenges facing humanity.

However, it is crucial to temper this optimism by recognising the significant risks. On their current trajectories, these technologies are already starting to undermine hard-won democratic gains and civil rights. Integrating AI into public policy and decision-making processes risks exacerbating existing inequalities and unfairness, potentially leading to new, uncontrollable forms of discrimination at unprecedented speed and scale. The environmental impacts, both direct and indirect, could be catastrophic, while the rise of AI-powered personalised misinformation and behavioural manipulation is contributing to increasingly polarised societies.

Steering the direction of AI to be in the public interest requires a deeper understanding of its characteristics and behaviour. To imagine and design new approaches to public policy and decision-making, we first need a comprehensive understanding of what this remarkable technology offers and its potential implications…(More)”.

Policies must be justified by their wellbeing-to-cost ratio


Article by Richard Layard: “…What is its value for money — that is, how much wellbeing does it deliver per (net) pound it costs the government? This benefit/cost ratio (or BCR) should be central to every discussion.

The science exists to produce these numbers and, if the British government were to require them of the spending departments, it would be setting an example of rational government to the whole world.

Such a move would, of course, lead to major changes in priorities. At the London School of Economics we have been calculating the benefits and costs of policies across a whole range of government departments.

In our latest report on value for money, the best policies are those that save the government more money than they cost — for example by getting people back to work. Classic examples of this are treatments for mental health. The NHS Talking Therapies programme now treats 750,000 people a year for anxiety disorders and depression. Half of them recover and the service demonstrably pays for itself. It needs to expand.

But we also need a parallel service for those addicted to alcohol, drugs and gambling. These individuals are more difficult to treat — but the savings if they recover are greater. Again, it will pay for itself. And so will the improved therapy service for children and young people that Labour has promised.

However, most spending policies do cost more than they save. For these it is crucial to measure the benefit/cost ratio, converting the wellbeing benefit into its monetary equivalent. For example, we can evaluate the wellbeing gain to a community of having more police and, consequently, less crime. Once this is converted into money, we calculate that the benefit/cost ratio is 12:1 — very high…(More)”.
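
As a back-of-the-envelope illustration of the arithmetic behind such a ratio (all figures below are invented, not taken from the LSE report): the wellbeing gain is converted into a monetary equivalent and divided by the policy's net cost to government.

```python
# Hypothetical figures, for illustration only: how a wellbeing
# benefit/cost ratio (BCR) of roughly 12:1 could arise.

wellbeing_gain_units = 6_000   # e.g. wellbeing-years gained across a community
value_per_unit_gbp = 13_000    # assumed monetary value of one wellbeing unit
gross_cost_gbp = 8_000_000     # cost of the extra policing
savings_gbp = 1_500_000        # e.g. reduced crime-related public spending

net_cost_gbp = gross_cost_gbp - savings_gbp
bcr = (wellbeing_gain_units * value_per_unit_gbp) / net_cost_gbp
print(f"BCR = {bcr:.0f}:1")    # -> BCR = 12:1
```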

Breaking the Wall of Digital Heteronomy


Interview with Julia Janssen: “The walls of algorithms increasingly shape your life, telling you what to buy, where to go, what news to believe or songs to listen to. Data helps to navigate the world’s complexity and its endless possibilities. Artificial intelligence promises frictionless experiences, tailored and targeted, seamless and optimized to serve you best. But at what cost? Frictionlessness comes with obedience: to the machine, the market and your own prophecy.

Mapping the Oblivion researches the influence of data and AI on human autonomy. The installation visualized Netflix’s percentage-based prediction models to provoke questions about the extent to which we want to quantify choices. Will you only watch movies that are over 64% to your liking? Dine at restaurants that match your appetite above 76%? Date people with a compatibility rate of 89%? Will you never choose the career you want when there is only a 12% chance you’ll succeed? Do you want to outsmart your intuition with systems you do not understand and follow the map of probabilities and statistics?

Digital heteronomy is a condition in which one is guided by data, governed by AI and ordained by the industry. Homo sapiens, the knowing being, becomes Homo stultus, the controllable being.

Living a quantified life in a numeric world. Not having to choose, doubt or wonder. Kept safe, risk-free and predictable within algorithmic walls. Exhausted of autonomy, creativity and randomness. Imprisoned in bubbles, profiles and behavioural tribes. Controllable, observable and monetizable.

Breaking the wall of digital heteronomy means taking back control over our data, identity, choices and chances in life. Honouring the unexpected, risk, doubt and having an unknown future. Shattering the power structures created by Big Tech to harvest information and capitalize on unfairness, vulnerabilities and fears. Breaking the wall of digital heteronomy means breaking down a system where profit is more important than people…(More)”.

The Imperial Origins of Big Data


Blog and book by Asheesh Kapur Siddique: “We live in a moment of massive transformation in the nature of information. In 2020, according to one report, users of the Internet created 64.2 zettabytes of data, a quantity greater than the “number of detectable stars in the cosmos,” a colossal increase whose origins can be traced to the emergence of the World Wide Web in 1993.1 Facilitated by technologies like satellites, smartphones, and artificial intelligence, the scale and speed of data creation seems like it may only balloon over the rest of our lifetimes—and with it, the problem of how to govern ourselves in relation to the inequalities and opportunities that the explosion of data creates.

But while much about our era of big data is indeed revolutionary, the political questions that it raises—How should information be used? Who should control it? And how should it be preserved?—are ones with which societies have long grappled. These questions attained a particular importance in Europe from the eleventh century due to a technological change no less significant than the ones we are witnessing today: the introduction of paper into Europe. Initially invented in China, paper travelled to Europe via the conduit of Islam around the eleventh century after the Moors conquered Spain. Over the twelfth, thirteenth, and fourteenth centuries, paper emerged as the fundamental substrate which politicians, merchants, and scholars relied on to record and circulate information in governance, commerce, and learning. At the same time, governing institutions sought to preserve and control the spread of written information through the creation of archives: repositories where they collected, organized, and stored documents.

The expansion of European polities overseas from the late fifteenth century onward saw governments massively scale up their use of paper—and confront the challenge of controlling its dissemination across thousands of miles of ocean and land. These pressures were felt particularly acutely in what eventually became the largest empire in world history, the British empire. As people from the British Isles from the early seventeenth century fought, traded, and settled their way to power in the Atlantic world and South Asia, administrators faced the problem of how to govern both their emigrating subjects and the non-British peoples with whom they interacted. This meant collecting information about their behavior through the technology of paper. Just as we struggle to organize, search, and control our email inboxes, text messages, and app notifications, so too did these early moderns confront the attendant challenges of developing practices of collection and storage to manage the resulting information overload. And despite the best efforts of states and companies to control information, it constantly escaped their grasp, falling into the hands of their opponents and rivals who deployed it to challenge and contest ruling powers.

The history of the early modern information state offers no simple or straightforward answers to the questions that data raises for us today. But it does remind us of a crucial truth, all too readily obscured by the deluge of popular narratives glorifying technological innovation: that questions of data are inherently questions about politics—about who gets to collect, control, and use information, and the ends to which information should be put. We should resist any effort to insulate data governance from democratic processes—and having an informed perspective on the politics of data requires that we attend not just to its present, but also to its past…(More)”.

The Power of Supercitizens


Blog by Brian Klaas: “Lurking among us, there are a group of hidden heroes, people who routinely devote significant amounts of their time, energy, and talent to making our communities better. These are the devoted, do-gooding, elite one percent. Most, but not all, are volunteers.1 All are selfless altruists. They, the supercitizens, provide some of the stickiness in the social glue that holds us together.2

What if I told you that there’s this little trick you can do that makes your community stronger, helps other people, and makes you happier and live longer? Well, it exists, there’s ample evidence it works, and best of all, it’s free.

Recently published research showcases a convincing causal link between these supercitizens—devoted, regular volunteers—and social cohesion. While such an umbrella term means a million different things, these researchers focused on two UK-based surveys that analyzed three facets of social cohesion, measured through eight questions (respondents answered on a five-point scale, ranging from strongly disagree to strongly agree). The facets and their items were:


Neighboring

  • ‘If I needed advice about something I could go to someone in my neighborhood’;
  • ‘I borrow things and exchange favors with my neighbors’; and
  • ‘I regularly stop and talk with people in my neighborhood’

Psychological sense of community

  • ‘I feel like I belong to this neighborhood’;
  • ‘The friendships and associations I have with other people in my neighborhood mean a lot to me’;
  • ‘I would be willing to work together with others on something to improve my neighborhood’; and
  • ‘I think of myself as similar to the people that live in this neighborhood’

Attraction to the neighborhood

  • ‘I plan to remain a resident of this neighborhood for a number of years’

While these questions only tap into some specific components of social cohesion, high levels of these ingredients are likely to produce a reliable recipe for a healthy local community. (Social cohesion differs from social capital, popularized by Robert Putnam and his book, Bowling Alone. Social capital tends to focus on links between individuals and groups—are you a joiner or more of a loner?—whereas cohesion refers to a more diffuse sense of community, belonging, and neighborliness)…(More)”.
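
To make the measurement concrete, here is a generic scoring sketch (not the cited study's actual method): each item is coded 1 to 5, items are averaged within their facet, and the facet means can be combined into an overall cohesion score.

```python
# Generic illustration (not the researchers' code): turning the eight
# five-point Likert items into facet scores and an overall cohesion score.

responses = {  # hypothetical answers from one respondent, coded 1-5
    "neighboring": [4, 3, 5],            # three neighboring items
    "sense_of_community": [5, 4, 4, 3],  # four sense-of-community items
    "attraction": [5],                   # one attraction item
}

facet_scores = {facet: sum(vals) / len(vals) for facet, vals in responses.items()}
overall = sum(facet_scores.values()) / len(facet_scores)

for facet, score in facet_scores.items():
    print(f"{facet}: {score:.2f}")
print(f"overall cohesion (unweighted mean of facets): {overall:.2f}")
```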