We Need To Rewild The Internet


Article by Maria Farrell and Robin Berjon: “In the late 18th century, officials in Prussia and Saxony began to rearrange their complex, diverse forests into straight rows of single-species trees. Forests had been sources of food, grazing, shelter, medicine, bedding and more for the people who lived in and around them, but to the early modern state, they were simply a source of timber.

So-called “scientific forestry” was that century’s growth hacking. It made timber yields easier to count, predict and harvest, and meant owners no longer relied on skilled local foresters to manage forests. They were replaced with lower-skilled laborers following basic algorithmic instructions to keep the monocrop tidy, the understory bare.

Information and decision-making power now flowed straight to the top. Decades later, when the first crop was felled, vast fortunes were made, tree by standardized tree. The clear-felled forests were replanted, with hopes of extending the boom. Readers of the American political anthropologist of anarchy and order, James C. Scott, know what happened next.

It was a disaster so bad that a new word, Waldsterben, or “forest death,” was minted to describe the result. All the same species and age, the trees were flattened in storms, ravaged by insects and disease — even the survivors were spindly and weak. Forests were now so tidy and bare, they were all but dead. The first magnificent bounty had not been the beginning of endless riches, but a one-off harvesting of millennia of soil wealth built up by biodiversity and symbiosis. Complexity was the goose that laid golden eggs, and she had been slaughtered…(More)”.

On the Manipulation of Information by Governments


Paper by Ariel Karlinsky and Moses Shayo: “Governmental information manipulation has been hard to measure and study systematically. We hand-collect data from official and unofficial sources in 134 countries to estimate misreporting of Covid mortality during 2020–21. We find that between 45% and 55% of governments misreported the number of deaths. The lion’s share of misreporting cannot be attributed to a country’s capacity to accurately diagnose and report deaths. Contrary to some theoretical expectations, there is little evidence of governments exaggerating the severity of the pandemic. Misreporting is higher where governments face few social and institutional constraints, in countries holding elections, and in countries with a communist legacy…(More)”
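To make the style of measurement concrete, the sketch below flags a country as a likely misreporter when its officially reported Covid deaths fall far below an independent excess-mortality estimate. The countries, figures, and the 50% threshold are hypothetical illustrations of this kind of comparison, not the paper’s actual data or decision rule.

```python
# Stylized sketch: flag likely misreporting by comparing officially
# reported Covid deaths against independent excess-mortality estimates.
# All data and the 50% threshold are hypothetical illustrations,
# not the paper's actual data or methodology.

countries = [
    # (country, reported_covid_deaths, estimated_excess_deaths)
    ("A", 12_000, 13_500),
    ("B", 4_000, 21_000),
    ("C", 90_000, 95_000),
    ("D", 1_500, 18_000),
]

UNDERREPORTING_THRESHOLD = 0.5  # reported < 50% of excess -> suspicious

for name, reported, excess in countries:
    ratio = reported / excess
    flagged = ratio < UNDERREPORTING_THRESHOLD
    verdict = "likely misreported" if flagged else "plausible"
    print(f"Country {name}: reported/excess = {ratio:.2f} -> {verdict}")
```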

Democracy and Artificial Intelligence: old problems, new solutions?


Discussion between Nardine Alnemr and Rob Weymouth: “…I see three big perspectives relevant to AI and democracy. You have the most conservative, mirroring the 80s and the 90s, still talking about the digital public sphere as if it’s distant from our lives. As if it’s something novel and inaccessible, which is not quite accurate anymore.

Then there’s the more optimistic and cautionary side of the spectrum. People who are excited about the technologies, but they’re not quite sure. They’re intrigued to see the potential and I think they’re optimistic because they overlook how these technologies connect to a broader context. How a lot of these technologies are driven by surveying and surveillance of the data and the communication that we produce. Exploitation of workers who do the filtering and cleaning work. The companies that profit out of this and make engineered election campaigns. So they’re cautious because of that, but still optimistic, because at the same time, they try to isolate it from that bigger context.

And finally, the most radical is something like Cesar Hidalgo’s proposal of augmented democracy…(More)”.

Global Contract-level Public Procurement Dataset


Paper by Mihály Fazekas et al: “One-third of total government spending across the globe goes to public procurement, amounting to about 10 trillion dollars a year. Despite its vast size and crucial importance for economic and political development, there is a lack of globally comparable data on contract awards and tenders. To fill this gap, this article introduces the Global Public Procurement Dataset (GPPD). Using web scraping methods, we collected official public procurement data on over 72 million contracts from 42 countries between 2006 and 2021 (the time period covered varies by country due to data availability constraints). To overcome the inconsistency of data publishing formats across countries, we standardized the published information to fit a common data standard. For each country, key information is collected on the buyer(s) and supplier(s), geolocation, product classification, price, and details of the contracting process such as the contract award date or the procedure type followed. GPPD is a contract-level dataset with precomputed filters that allow it to be reduced to successfully awarded contracts if needed. We also add several corruption risk indicators and a composite corruption risk index for each contract, allowing an objective assessment of risks and comparison across time, organizations, or countries. The data can be reused to answer research questions dealing with, among other topics, the efficiency of public procurement spending. Unique organizational identification numbers or organization names allow the data to be connected to company registries to study broader topics such as ownership networks…(More)”.
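As a rough sketch of how such a contract-level dataset might be used, the snippet below filters hypothetical records to awarded contracts and compares corruption risk across countries and years. The column names (awarded, corruption_risk_index, and so on) and the pandas workflow are illustrative assumptions, not GPPD’s documented schema.

```python
import pandas as pd

# Hypothetical contract-level records in the spirit of GPPD; the column
# names here are illustrative assumptions, not the published schema.
contracts = pd.DataFrame({
    "country": ["HU", "HU", "UK", "UK", "BR"],
    "buyer_id": ["b1", "b2", "b3", "b3", "b4"],
    "supplier_id": ["s1", "s2", "s3", "s4", "s5"],
    "awarded": [True, True, False, True, True],   # award-success filter flag
    "award_date": pd.to_datetime(
        ["2019-03-01", "2020-06-15", "2020-01-10",
         "2021-09-30", "2018-11-05"]),
    "price_usd": [1.2e6, 3.4e5, 5.0e5, 2.1e6, 8.7e5],
    "corruption_risk_index": [0.1, 0.6, 0.3, 0.8, 0.2],  # composite, 0-1
})

# Reduce the dataset to successfully awarded contracts, as the
# paper says the precomputed filters allow.
awarded = contracts[contracts["awarded"]]

# Compare contract volume, spend, and average risk across countries/years.
summary = (awarded
           .assign(year=awarded["award_date"].dt.year)
           .groupby(["country", "year"])
           .agg(n_contracts=("price_usd", "size"),
                total_spend_usd=("price_usd", "sum"),
                mean_risk=("corruption_risk_index", "mean")))
print(summary)
```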

The End of the Policy Analyst? Testing the Capability of Artificial Intelligence to Generate Plausible, Persuasive, and Useful Policy Analysis


Article by Mehrdad Safaei and Justin Longo: “Policy advising in government centers on the analysis of public problems and the development of recommendations for dealing with them. In carrying out this work, policy analysts consult a variety of sources and work to synthesize that body of evidence into useful decision support documents commonly called briefing notes. Advances in natural language processing (NLP) have led to the continuing development of tools that can undertake a similar task. Given a brief prompt, a large language model (LLM) can synthesize information from content databases. This article documents the findings from an experiment that tested whether contemporary NLP technology is capable of producing policy-relevant briefing notes that expert evaluators judge to be useful. The research involved two stages. First, briefing notes were created using three models: NLP generated; human generated; and NLP generated/human edited. Next, two panels of retired senior public servants (with only one panel informed of the use of NLP in the experiment) were asked to judge the briefing notes using a heuristic evaluation rubric. The findings indicate that contemporary NLP tools were not able to, on their own, generate useful policy briefings. However, the feedback from the expert evaluators indicates that automatically generated briefing notes might serve as a useful supplement to the work of human policy analysts. And the speed with which the capabilities of NLP tools are developing, supplemented with access to a larger corpus of previously prepared policy briefings and other policy-relevant material, suggests that the quality of automatically generated briefings may improve significantly in the coming years. The article concludes with reflections on what such improvements might mean for the future practice of policy analysis…(More)”.
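As a rough sketch of the experiment’s first stage, the snippet below asks an LLM to draft a briefing note from a short prompt. The client library, model name, and prompt template are illustrative assumptions, not the authors’ actual setup.

```python
# Minimal sketch of NLP-generated briefing notes, in the spirit of the
# experiment's first stage. The client library, model choice, and prompt
# wording are illustrative assumptions, not the authors' actual setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BRIEFING_TEMPLATE = (
    "You are a policy analyst. Write a one-page briefing note on the "
    "following issue for a senior decision maker. Structure it as: "
    "Issue, Background, Considerations, Recommendation.\n\nIssue: {issue}"
)

def generate_briefing_note(issue: str) -> str:
    """Ask the model to synthesize a decision-support briefing note."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user",
                   "content": BRIEFING_TEMPLATE.format(issue=issue)}],
    )
    return response.choices[0].message.content

print(generate_briefing_note("Regulating short-term rental platforms"))
```

Notes like these would then go to evaluators alongside human-written and human-edited versions, as in the study’s second stage.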

The CFPB wants to rein in data brokers


Article by Gaby Del Valle: “The Consumer Financial Protection Bureau wants to propose new regulations that would require data brokers to comply with the Fair Credit Reporting Act. In a speech at the White House earlier this month, CFPB Director Rohit Chopra said the agency is looking into policies to “ensure greater accountability” for companies that buy and sell consumer data, in keeping with an executive order President Joe Biden issued in late February.

Chopra said the agency is considering proposals that would define data brokers that sell certain types of data as “consumer reporting agencies,” thereby requiring those companies to comply with the Fair Credit Reporting Act (FCRA). The statute bans sharing certain kinds of data (e.g., your credit report) with entities unless they serve a specific purpose outlined in the law (e.g., if the report is used for employment purposes or to extend a line of credit to someone).

The CFPB views the buying and selling of consumer data as a national security issue, not just a matter of privacy. Chopra mentioned three massive data breaches — the 2015 Anthem leak, the 2017 Equifax hack, and the 2018 Marriott breach — as examples of foreign adversaries illicitly obtaining Americans’ personal data. “When Americans’ health information, financial information, and even their travel whereabouts can be assembled into detailed dossiers, it’s no surprise that this raises risks when it comes to safety and security,” Chopra said. But the focus on high-profile hacks obscures a more pervasive, totally legal phenomenon: data brokers’ ability to sell detailed personal information to anyone who’s willing to pay for it…(More)”.

Strategies, missions and the challenge of whole of government action


Paper by Geoff Mulgan: “Every government is, in reality, a flotilla of many departments, agencies and tiers rather than a single thing. But all aspire to greater coherence. ‘Whole of government’ approaches – which mobilise and align many ministries and agencies around a common challenge – have a long history: during major wars, and in attempts to digitize societies, cut energy use, reduce poverty and respond to the COVID-19 pandemic. These have been described using different terms – national plans, priorities, strategies and missions – but the issues are similar.

This paper, linked to a European Commission programme on ‘whole of government innovation’ (launching on 16 April in Brussels) looks at the lessons of history and options for the future.  Its primary focus is on innovation, but the issues apply more widely. The paper outlines the tools governments can use to achieve cross-cutting goals, from strategic roles to matrix models, cross-cutting budgets, teams, targets and processes, to options for linking law, regulation and procurement. It looks at partnerships and other structures for organising collaboration with business, universities and civil society; and at the role of public engagement…(More)”.

The generation of public value through e-participation initiatives: A synthesis of the extant literature


Paper by Naci Karkin and Asunur Cezar: “The number of studies evaluating e-participation levels in e-government services has recently increased. These studies primarily examine stakeholders’ acceptance and adoption of e-government initiatives. However, it is equally important to understand whether and how value is generated through e-participation, regardless of whether the focus is on government efforts or user adoption/acceptance levels. There is a need in the literature for a synthesis focusing on e-participation’s connection with public value creation using a systematic and comprehensive approach. This study employs a systematic literature review to collect, examine, and synthesize prior findings, aiming to investigate public value creation through e-participation initiatives, including their facilitators and barriers. By reviewing sixty-four peer-reviewed studies indexed by Web of Science and Scopus, this research demonstrates that e-participation initiatives and efforts can generate public value. Nevertheless, several factors are pivotal for the success and sustainability of these initiatives. The study’s findings could guide researchers and practitioners in comprehending the determinants and barriers influencing the success and sustainability of e-participation initiatives in the public value creation process while highlighting potential future research opportunities in this domain…(More)”.

How Belgium is Giving Citizens a Say on AI


Article by Graham Wetherall-Grujić: “A few weeks before the European Parliament’s final debate on the AI Act, 60 randomly selected members of the Belgian public convened in Brussels for a discussion of their own. The aim was not to debate a particular piece of legislation, but to help shape a European vision on the future of AI, drawing on the views, concerns, and ideas of the public. 

They were taking part in a citizens’ assembly on AI, held as part of Belgium’s presidency of the European Council. When Belgium assumed the six-month presidency in January 2024, it announced it would place “special focus” on citizens’ participation. The citizen panel on AI is the largest of the scheduled participation projects. Over a total of three weekends, participants are deliberating on a range of topics including the impact of AI on work, education, and democracy. 

The assembly comes at a time of rising calls for more public input on AI. Some big tech firms have begun to respond with participation projects of their own. But this is the first time an EU institution has launched a consultation on the topic. The organisers hope it will pave the way for more to come…(More)”.

AI-driven public services and the privacy paradox: do citizens really care about their privacy?


Paper: “Based on privacy calculus theory, we derive hypotheses on the role of perceived usefulness and privacy risks of artificial intelligence (AI) in public services. In a representative vignette experiment (n = 1,048), we asked citizens whether they would download a mobile app to interact with an AI-driven public service. Despite general concerns about privacy, we find that citizens are not sensitive to the amount of personal information they must share, nor to a more anthropomorphic interface. Our results confirm the privacy paradox, which we frame within the literature on the government’s role in safeguarding ethical principles, including citizens’ privacy…(More)”.
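A minimal sketch of how such a vignette experiment might be analyzed: randomize the vignette factors, then regress the download decision on them. The data below are simulated to mimic the headline pattern (perceived usefulness matters, the vignette factors do not); they are not the study’s data, and the variable names and linear probability model are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for a vignette experiment: respondents are randomly
# shown app descriptions varying the amount of personal data requested
# and whether the interface is anthropomorphic, then decide whether they
# would download the app. All data here are fabricated for illustration.
rng = np.random.default_rng(0)
n = 1_048
df = pd.DataFrame({
    "high_data_request": rng.integers(0, 2, n),  # vignette factor 1
    "anthropomorphic": rng.integers(0, 2, n),    # vignette factor 2
    "usefulness": rng.normal(0, 1, n),           # perceived usefulness
})
# Build in a "privacy paradox" pattern: only usefulness drives the choice.
df["download"] = (0.8 * df["usefulness"] + rng.normal(0, 1, n) > 0).astype(int)

# Linear probability model of the download decision on the vignette factors.
model = smf.ols(
    "download ~ high_data_request + anthropomorphic + usefulness",
    data=df).fit()
print(model.summary())
```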