Who rules the deliberative party? Examining the Agora case in Belgium


Paper by Nino Junius and Joke Matthieu: “In recent years, pessimism about plebiscitary intra-party democracy has been challenged by assembly-based models of intra-party democracy. However, research has yet to explore the emergence of new power dynamics in parties originating from the implementation of deliberative practices in their intra-party democracy. We investigate how deliberative democratization reshuffles power relations within political parties through a case study of Agora, an internally deliberative movement party in Belgium. Employing a process-tracing approach using original interview and participant observation data, we argue that while plebiscitary intra-party democracy shifts power towards passive members prone to elite domination, our case suggests that deliberative intra-party democracy shifts power towards active members that are more likely to be critical of elites…(More)”

Cutting through complexity using collective intelligence


Blog by the UK Policy Lab: “In November 2021 we established a Collective Intelligence Lab (CILab), with the aim of improving policy outcomes by tapping into collective intelligence (CI). We define CI as the diversity of thought and experience that is distributed across groups of people, from public servants and domain experts to members of the public. We have been experimenting with a digital tool, Pol.is, to capture diverse perspectives and new ideas on key government priority areas. To date we have run eight debates on issues as diverse as Civil Service modernisation, fisheries management and national security. Across these debates over 2400 civil servants, subject matter experts and members of the public have participated…

From our experience using CILab on live policy issues, we have identified a series of policy use cases that echo findings from the government of Taiwan and organisations such as Nesta. These use cases include: 1) stress-testing existing policies and current thinking, 2) drawing out consensus and divergence on complex, contentious issues, and 3) identifying novel policy ideas.

1) Stress-testing existing policy and current thinking

CI could be used to gauge expert and public sentiment towards existing policy ideas by asking participants to discuss existing policies and current thinking on Pol.is. This is well suited to testing public and expert opinions on current policy proposals, especially where their success depends on securing buy-in and action from stakeholders. It can also help collate views and identify barriers to effective implementation of existing policy.

From the initial set of eight CILab policy debates, we have learnt that it is sometimes useful to design a ‘crossover point’ into the process. This is where, part way through a debate, statements submitted by policymakers, subject matter experts and members of the public are shown across those groups, in a bid to break down groupthink. We used this approach in a Pol.is debate on a topic relating to UK foreign policy, and think it could help test how existing policies on complex areas such as climate change or social care are perceived within and outside government…(More)”
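
As an illustration of use case 2 above, the following sketch shows one way to surface consensus and divergence from a participant-by-statement vote matrix of the kind Pol.is collects. It is a minimal, hypothetical example on synthetic votes, not Pol.is’s or CILab’s actual pipeline: participants are grouped into opinion clusters, and statements are flagged as consensus-like or divisive depending on how the clusters lean.

    # Minimal sketch (synthetic data; not Pol.is's or CILab's actual pipeline):
    # cluster a participant-by-statement vote matrix to surface consensus and divergence.
    # Votes: +1 agree, -1 disagree, 0 pass.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    votes = rng.choice([-1, 0, 1], size=(200, 30))  # 200 participants, 30 statements

    # Reduce dimensionality, then group participants into opinion clusters.
    coords = PCA(n_components=2).fit_transform(votes)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

    # A statement is consensus-like if every cluster leans the same way,
    # and divisive if cluster means point in opposite directions.
    group_means = np.vstack([votes[labels == g].mean(axis=0) for g in range(3)])
    consensus = np.where((group_means > 0.2).all(axis=0) | (group_means < -0.2).all(axis=0))[0]
    divisive = np.where(group_means.max(axis=0) - group_means.min(axis=0) > 0.8)[0]
    print("consensus-like statements:", consensus)
    print("divisive statements:", divisive)

With real debate exports, the same grouping logic could also feed a ‘crossover point’ of the kind described above, by showing each opinion group the statements that other groups coalesced around.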

“Can AI bring deliberative democracy to the masses?”


Paper by Hélène Landemore: “A core problem in deliberative democracy is the tension between two seemingly equally important conditions of democratic legitimacy: deliberation on the one hand and mass participation on the other. Might artificial intelligence help bring quality deliberation to the masses? The paper first examines the conundrum in deliberative democracy around the tradeoff between deliberation and mass participation by returning to the seminal debate between Joshua Cohen and Jürgen Habermas about the proper model of deliberative democracy. It then turns to an analysis of the 2019 French Great National Debate, a low-tech attempt to involve millions of French citizens in a structured exercise of collective deliberation over a two-month period. Building on the shortcomings of this empirical attempt, the paper then considers two different visions for an algorithm-powered scaled-up form of mass deliberation—Mass Online Deliberation on the one hand and a multiplicity of rotating randomly selected mini-publics on the other—theorizing various ways Artificial Intelligence could play a role in either of them…(More)”.

The European Union-U.S. Data Privacy Framework


White House Fact Sheet: “Today, President Biden signed an Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities (E.O.) directing the steps that the United States will take to implement the U.S. commitments under the European Union-U.S. Data Privacy Framework (EU-U.S. DPF) announced by President Biden and European Commission President von der Leyen in March of 2022. 

Transatlantic data flows are critical to enabling the $7.1 trillion EU-U.S. economic relationship.  The EU-U.S. DPF will restore an important legal basis for transatlantic data flows by addressing concerns that the Court of Justice of the European Union raised in striking down the prior EU-U.S. Privacy Shield framework as a valid data transfer mechanism under EU law. 

The Executive Order bolsters an already rigorous array of privacy and civil liberties safeguards for U.S. signals intelligence activities. It also creates an independent and binding mechanism enabling individuals in qualifying states and regional economic integration organizations, as designated under the E.O., to seek redress if they believe their personal data was collected through U.S. signals intelligence in a manner that violated applicable U.S. law.

U.S. and EU companies large and small across all sectors of the economy rely upon cross-border data flows to participate in the digital economy and expand economic opportunities. The EU-U.S. DPF represents the culmination of a joint effort by the United States and the European Commission to restore trust and stability to transatlantic data flows and reflects the strength of the enduring EU-U.S. relationship based on our shared values…(More)”.

Can critical policy studies outsmart AI? Research agenda on artificial intelligence technologies and public policy


Paper by Regine Paul: “The insertion of artificial intelligence technologies (AITs) and data-driven automation in public policymaking should be a metaphorical wake-up call for critical policy analysts. Both its wide representation as techno-solutionist remedy in otherwise slow, inefficient, and biased public decision-making and its regulation as a matter of rational risk analysis are conceptually flawed and democratically problematic. To ‘outsmart’ AI, this article stimulates the articulation of a critical research agenda on AITs and public policy, outlining three interconnected lines of inquiry for future research: (1) interpretivist disclosure of the norms and values that shape perceptions and uses of AITs in public policy, (2) exploration of AITs in public policy as a contingent practice of complex human-machine interactions, and (3) emancipatory critique of how ‘smart’ governance projects and AIT regulation interact with (global) inequalities and power relations…(More)”.

How one group of ‘fellas’ is winning the meme war in support of Ukraine


Article by Suzanne Smalley: “The North Atlantic Fella Organization, or NAFO, has arrived.

Ukraine’s Defense Ministry celebrated the group on Twitter for waging a “fierce fight” against Kremlin trolls. And Rep. Adam Kinzinger, R-Ill., tweeted that he was “self-declaring as a proud member of #NAFO” and “the #fellas shall prevail.”

The brainchild of former Marine Matt Moores, NAFO launched in May and quickly blew up on Twitter. It’s become something of a movement, drawing support in military and cybersecurity circles, whose members circulate its memes backing Ukraine in its war against Russia.

“The power of what we’re doing is that instead of trying to come in and point-by-point refute, and argue about what’s true and what isn’t, it’s coming and saying, ‘Hey, that’s dumb,’” Moores said during a panel on Wednesday at the Center for Strategic and International Studies in Washington. “And the moment somebody’s replying to a cartoon dog online, you’ve lost if you work for the government of Russia.”

Memes have figured heavily in the information war following the Russian invasion. The Ukrainian government has proven eager to highlight memes on agency websites and officials have been known to personally thank online communities that spread anti-Russian memes. The NAFO meme shared by the defense ministry in August showed a Shiba Inu dog in a military uniform appearing to celebrate a missile launch.

The Shiba Inu has long been a motif in internet culture. According to Vice’s Motherboard, the use of Shiba Inu to represent a “fella” waging online war against the Russians dates to at least May when an artist started rewarding fellas who donated money to the Georgian Legion by creating customized fella art for online use…(More)”.

The EU wants to put companies on the hook for harmful AI


Article by Melissa Heikkilä: “The EU is creating new rules to make it easier to sue AI companies for harm. A bill unveiled this week, which is likely to become law in a couple of years, is part of Europe’s push to prevent AI developers from releasing dangerous systems. And while tech companies complain it could have a chilling effect on innovation, consumer activists say it doesn’t go far enough. 

Powerful AI technologies are increasingly shaping our lives, relationships, and societies, and their harms are well documented. Social media algorithms boost misinformation, facial recognition systems are often highly discriminatory, and predictive AI systems that are used to approve or reject loans can be less accurate for minorities.  

The new bill, called the AI Liability Directive, will add teeth to the EU’s AI Act, which is set to become EU law around the same time. The AI Act would require extra checks for “high risk” uses of AI that have the most potential to harm people, including systems for policing, recruitment, or health care. 

The new liability bill would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.

For example, job seekers who can prove that an AI system for screening résumés discriminated against them can ask a court to force the AI company to grant them access to information about the system so they can identify those responsible and find out what went wrong. Armed with this information, they can sue. 

The proposal still needs to snake its way through the EU’s legislative process, which will take a couple of years at least. It will be amended by members of the European Parliament and EU governments and will likely face intense lobbying from tech companies, which claim that such rules could have a “chilling” effect on innovation…(More)”.

Data Spaces: Design, Deployment and Future Directions


Open access book edited by Edward Curry, Simon Scerri, and Tuomo Tuikka: “…aims to educate data space designers to understand what is required to create a successful data space. It explores cutting-edge theory, technologies, methodologies, and best practices for data spaces for both industrial and personal data and provides the reader with a basis for understanding the design, deployment, and future directions of data spaces.

The book captures the early lessons and experience in creating data spaces. It arranges these contributions into three parts covering design, deployment, and future directions respectively.

  • The first part explores the design space of data spaces. The individual chapters detail the organisational design of data spaces, data platforms, data governance, federated learning, personal data sharing, data marketplaces, and hybrid artificial intelligence for data spaces.
  • The second part describes the use of data spaces within real-world deployments. Its chapters are co-authored with industry experts and include case studies of data spaces in sectors including Industry 4.0, food safety, FinTech, health care, and energy.
  • The third and final part details future directions for data spaces, including challenges and opportunities for common European data spaces and privacy-preserving techniques for trustworthy data sharing…(More)”.

Towards a permanent citizens’ participatory mechanism in the EU


Report by Alberto Alemanno: “This study, commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the AFCO Committee, examines the EU participatory system and its existing participatory channels against mounting citizens’ expectations for greater participation in EU decision making in the aftermath of the Conference on the Future of Europe. It proposes the creation of a permanent deliberative mechanism in which randomly selected citizens are tasked with advising on some of the proposals originating from either existing participation channels or the EU institutions, in an attempt to make the EU more democratically responsive…(More)”

Using real-time indicators for economic decision-making in government: Lessons from the Covid-19 crisis in the UK


Paper by David Rosenfeld: “When the UK went into lockdown in mid-March 2020, government was faced with the dual challenge of managing the impact of closing down large parts of the economy and responding effectively to the pandemic. Policy-makers needed to make rapid decisions regarding, on the one hand, the extent of restrictions on movement and economic activity to limit the spread of the virus, and on the other, the amount of support that would be provided to individuals and businesses affected by the crisis. Traditional official statistics, such as gross domestic product (GDP) or unemployment, which are released monthly and with a lag, could not be relied upon to monitor the situation and guide policy decisions.

In response, teams of data scientists and statisticians pivoted to develop alternative indicators, leading to an unprecedented amount of innovation in how statistics and data were used in government. This ranged from monitoring sewage water for signs of Covid-19 infection to the Office for National Statistics (ONS) developing a new range of ‘faster indicators’ of economic activity using online job vacancies and data on debit and credit card expenditure from the Clearing House Automated Payment System (CHAPS).
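
To make the idea of a ‘faster indicator’ concrete, the sketch below shows (on synthetic data, and not the ONS or CHAPS methodology) how daily card-spend figures can be turned into a smoothed activity index and compared against the same period a year earlier.

    # Illustrative only: a simple real-time activity index from synthetic daily card spending.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    days = pd.date_range("2019-01-01", "2020-12-31", freq="D")
    spend = pd.Series(100 + rng.normal(0, 5, len(days)), index=days)   # synthetic daily spend

    smoothed = spend.rolling(7, center=True).mean()             # 7-day average smooths weekday effects
    index_vs_year_ago = 100 * smoothed / smoothed.shift(364)    # compare with the same weekday a year earlier

    print(index_vs_year_ago.dropna().tail())

The appeal for policy-makers is latency: an index like this can be refreshed daily or weekly, whereas headline statistics such as GDP or unemployment arrive monthly and are revised later.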

The ONS received generally positive reviews for its performance during the crisis (The Economist, 2022), in contrast to the 2008 financial crisis when policy-makers did not realise the extent of the recession until subsequent revisions to GDP estimates were made. Partly in response to this, the Independent Review of UK Economic Statistics (HM Treasury, 2016) recommended improvements to the use of administrative data and alternative indicators as well as to data science capability to exploit both the extra granularity and the timeliness of new data sources.

This paper reviews the elements that contributed to successes in using real-time data during the pandemic as well as the challenges faced during this period, with a view to distilling some lessons for future use in government. Section 2 provides an overview of real-time indicators (RTIs) and how they were used in the UK during the Covid-19 crisis. The next sections analyse the factors that underpinned the successes (or lack thereof) in using such indicators: section 3 addresses skills, section 4 infrastructure, and section 5 legal frameworks and processes. Section 6 concludes with a summary of the main lessons for governments that hope to make greater use of RTIs…(More)”.