The Global Public Procurement Dataset (GPPD)
Paper by Mihály Fazekas et al: “One-third of total government spending across the globe goes to public procurement, amounting to about 10 trillion dollars a year. Despite its vast size and crucial importance for economic and political developments, there is a lack of globally comparable data on contract awards and tenders run. To fill this gap, this article introduces the Global Public Procurement Dataset (GPPD). Using web scraping methods, we collected official public procurement data on over 72 million contracts from 42 countries between 2006 and 2021 (the time period covered varies by country due to data availability constraints). To overcome the inconsistency of data publishing formats across countries, we standardized the published information to fit a common data standard. For each country, key information is collected on the buyer(s) and supplier(s), geolocation, product classification, price, and details of the contracting process such as the contract award date or the procedure type followed. GPPD is a contract-level dataset in which specific filters are calculated, allowing users to reduce the dataset to successfully awarded contracts if needed. We also add several corruption risk indicators and a composite corruption risk index for each contract, which allow for an objective assessment of risks and comparison across time, organizations, or countries. The data can be reused to answer research questions dealing with public procurement spending efficiency, among others. Unique organizational identification numbers or organization names allow the data to be connected to company registries to study broader topics such as ownership networks…(More)”.
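The standardization step the authors describe — mapping each country's published fields onto a common data standard — can be sketched roughly as follows. The field names, country codes, and sample record here are hypothetical illustrations, not GPPD's actual schema.

```python
# Minimal sketch of normalizing country-specific procurement records to a
# common schema, in the spirit of GPPD's standardization step.
# All field names and mappings below are invented for illustration.

COMMON_FIELDS = ["buyer_name", "supplier_name", "award_date", "price", "currency"]

# Hypothetical per-country field mappings (source field -> common field).
FIELD_MAPS = {
    "ES": {"organo": "buyer_name", "adjudicatario": "supplier_name",
           "fecha": "award_date", "importe": "price", "moneda": "currency"},
    "UK": {"buyer": "buyer_name", "winner": "supplier_name",
           "awarded": "award_date", "value": "price", "curr": "currency"},
}

def standardize(record: dict, country: str) -> dict:
    """Map one raw scraped record onto the common schema."""
    mapping = FIELD_MAPS[country]
    out = {field: None for field in COMMON_FIELDS}  # missing fields stay None
    for src, dst in mapping.items():
        if src in record:
            out[dst] = record[src]
    out["country"] = country
    return out

raw = {"organo": "Ministerio de Fomento", "adjudicatario": "ACME SA",
       "fecha": "2019-03-04", "importe": 125000.0, "moneda": "EUR"}
row = standardize(raw, "ES")
print(row["buyer_name"])  # Ministerio de Fomento
```

A per-country mapping table like this keeps the scraping and harmonization steps separate, which is one plausible way to accommodate 42 differently formatted national sources.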
The End of the Policy Analyst? Testing the Capability of Artificial Intelligence to Generate Plausible, Persuasive, and Useful Policy Analysis
Article by Mehrdad Safaei and Justin Longo: “Policy advising in government centers on the analysis of public problems and the development of recommendations for dealing with them. In carrying out this work, policy analysts consult a variety of sources and work to synthesize that body of evidence into useful decision support documents commonly called briefing notes. Advances in natural language processing (NLP) have led to the continuing development of tools that can undertake a similar task. Given a brief prompt, a large language model (LLM) can synthesize information in content databases. This article documents the findings from an experiment that tested whether contemporary NLP technology is capable of producing policy-relevant briefing notes that expert evaluators judge to be useful. The research involved two stages. First, briefing notes were created using three models: NLP generated; human generated; and NLP generated/human edited. Next, two panels of retired senior public servants (with only one panel informed of the use of NLP in the experiment) were asked to judge the briefing notes using a heuristic evaluation rubric. The findings indicate that contemporary NLP tools were not able to, on their own, generate useful policy briefings. However, the feedback from the expert evaluators indicates that automatically generated briefing notes might serve as a useful supplement to the work of human policy analysts. And the speed with which the capabilities of NLP tools are developing, supplemented with access to a larger corpus of previously prepared policy briefings and other policy-relevant material, suggests that the quality of automatically generated briefings may improve significantly in the coming years. The article concludes with reflections on what such improvements might mean for the future practice of policy analysis…(More)”.
The CFPB wants to rein in data brokers
Article by Gaby Del Valle: “The Consumer Financial Protection Bureau wants to propose new regulations that would require data brokers to comply with the Fair Credit Reporting Act. In a speech at the White House earlier this month, CFPB Director Rohit Chopra said the agency is looking into policies to “ensure greater accountability” for companies that buy and sell consumer data, in keeping with an executive order President Joe Biden issued in late February.
Chopra said the agency is considering proposals that would define data brokers that sell certain types of data as “consumer reporting agencies,” thereby requiring those companies to comply with the Fair Credit Reporting Act (FCRA). The statute bans sharing certain kinds of data (e.g., your credit report) with entities unless they serve a specific purpose outlined in the law (e.g., if the report is used for employment purposes or to extend a line of credit to someone).
The CFPB views the buying and selling of consumer data as a national security issue, not just a matter of privacy. Chopra mentioned three massive data breaches — the 2015 Anthem leak, the 2017 Equifax hack, and the 2018 Marriott breach — as examples of foreign adversaries illicitly obtaining Americans’ personal data. “When Americans’ health information, financial information, and even their travel whereabouts can be assembled into detailed dossiers, it’s no surprise that this raises risks when it comes to safety and security,” Chopra said. But the focus on high-profile hacks obscures a more pervasive, totally legal phenomenon: data brokers’ ability to sell detailed personal information to anyone who’s willing to pay for it…(More)”.
Strategies, missions and the challenge of whole of government action
Paper by Geoff Mulgan: “Every government is, in reality, a flotilla of many departments, agencies, and tiers, rather than a single thing. But all aspire to greater coherence. ‘Whole of government’ approaches – which mobilise and align many ministries and agencies around a common challenge – have a long history: during major wars, and around attempts to digitize societies, to cut energy use, to reduce poverty and to respond to the COVID-19 pandemic. These have been described using different terms – national plans, priorities, strategies and missions – but the issues are similar.
This paper, linked to a European Commission programme on ‘whole of government innovation’ (launching on 16 April in Brussels) looks at the lessons of history and options for the future. Its primary focus is on innovation, but the issues apply more widely. The paper outlines the tools governments can use to achieve cross-cutting goals, from strategic roles to matrix models, cross-cutting budgets, teams, targets and processes, to options for linking law, regulation and procurement. It looks at partnerships and other structures for organising collaboration with business, universities and civil society; and at the role of public engagement…(More)”.
The generation of public value through e-participation initiatives: A synthesis of the extant literature
Paper by Naci Karkin and Asunur Cezar: “The number of studies evaluating e-participation levels in e-government services has recently increased. These studies primarily examine stakeholders’ acceptance and adoption of e-government initiatives. However, it is equally important to understand whether and how value is generated through e-participation, regardless of whether the focus is on government efforts or user adoption/acceptance levels. There is a need in the literature for a synthesis focusing on e-participation’s connection with public value creation using a systematic and comprehensive approach. This study employs a systematic literature review to collect, examine, and synthesize prior findings, aiming to investigate public value creation through e-participation initiatives, including their facilitators and barriers. By reviewing sixty-four peer-reviewed studies indexed by Web of Science and Scopus, this research demonstrates that e-participation initiatives and efforts can generate public value. Nevertheless, several factors are pivotal for the success and sustainability of these initiatives. The study’s findings could guide researchers and practitioners in comprehending the determinants and barriers influencing the success and sustainability of e-participation initiatives in the public value creation process while highlighting potential future research opportunities in this domain…(More)”.
How Belgium is Giving Citizens a Say on AI
Article by Graham Wetherall-Grujić: “A few weeks before the European Parliament’s final debate on the AI Act, 60 randomly selected members of the Belgian public convened in Brussels for a discussion of their own. The aim was not to debate a particular piece of legislation, but to help shape a European vision on the future of AI, drawing on the views, concerns, and ideas of the public.
They were taking part in a citizens’ assembly on AI, held as part of Belgium’s presidency of the European Council. When Belgium assumed the presidency for six months beginning in January 2024, it announced it would place “special focus” on citizens’ participation. The citizen panel on AI is the largest of the scheduled participation projects. Over a total of three weekends, participants are deliberating on a range of topics including the impact of AI on work, education, and democracy.
The assembly comes amid rising calls for more public input on the topic of AI. Some big tech firms have begun to respond with participation projects of their own. But this is the first time an EU institution has launched a consultation on the topic. The organisers hope it will pave the way for more to come…(More)”.
AI-driven public services and the privacy paradox: do citizens really care about their privacy?
Paper: “Based on privacy calculus theory, we derive hypotheses on the role of perceived usefulness and privacy risks of artificial intelligence (AI) in public services. In a representative vignette experiment (n = 1,048), we asked citizens whether they would download a mobile app to interact in an AI-driven public service. Despite general concerns about privacy, we find that citizens are not susceptible to the amount of personal information they must share, nor to a more anthropomorphic interface. Our results confirm the privacy paradox, which we frame in the literature on the government’s role to safeguard ethical principles, including citizens’ privacy…(More)”.
Social Movements and Public Opinion in the United States
Paper by Amory Gethin & Vincent Pons: “Recent social movements stand out by their spontaneous nature and lack of stable leadership, raising doubts about their ability to generate political change. This article provides systematic evidence on the effects of protests on public opinion and political attitudes. Drawing on a database covering the quasi-universe of protests held in the United States, we identify 14 social movements that took place from 2017 to 2022, covering topics related to environmental protection, gender equality, gun control, immigration, national and international politics, and racial issues. We use Twitter data, Google search volumes, and high-frequency surveys to track the evolution of online interest, policy views, and vote intentions before and after the onset of each movement. Combining national-level event studies with difference-in-differences designs exploiting variation in local protest intensity, we find that protests generate substantial internet activity but have limited effects on political attitudes. Except for the Black Lives Matter protests following the death of George Floyd, which shifted views on racial discrimination and increased votes for the Democrats, we estimate precise null effects of protests on public opinion and electoral behavior…(More)”.
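The difference-in-differences logic behind the paper's local-intensity design can be illustrated with a toy calculation; all numbers below are invented for demonstration and are not the paper's estimates.

```python
# Illustrative difference-in-differences estimate of the kind used to compare
# outcomes in high- vs low-protest-intensity areas before and after a movement.
# The outcome values (in percentage points) are made up for demonstration.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """DiD = (treated_post - treated_pre) - (control_post - control_pre)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean support for a policy in high-intensity ("treated") and
# low-intensity ("control") counties, before and after a movement's onset.
effect = did_estimate(treated_pre=42, treated_post=45,
                      control_pre=40, control_post=43)
print(effect)  # 0 -> both groups moved identically: a null effect
```

The control group's pre/post change serves as the counterfactual trend; when treated areas move by the same amount, as in this toy case, the estimated protest effect is zero, which is the pattern of precise nulls the authors report for most movements.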
Could artificial intelligence benefit democracy?
Article by Brian Wheeler: “Each week sees a new set of warnings about the potential impact of AI-generated deepfakes – realistic video and audio of politicians saying things they never said – spreading confusion and mistrust among the voting public.
And in the UK, regulators, security services and government are battling to protect this year’s general election from malign foreign interference.
Less attention has been given to the possible benefits of AI.
But a lot of work is going on, often below the radar, to try to harness its power in ways that might enhance democracy rather than destroy it.
“While this technology does pose some important risks in terms of disinformation, it also offers some significant opportunities for campaigns, which we can’t ignore,” Hannah O’Rourke, co-founder of Campaign Lab, a left-leaning network of tech volunteers, says.
“Like all technology, what matters is how AI is actually implemented. Its impact will be felt in the way campaigners actually use it.”
Among other things, Campaign Lab runs training courses for Labour and Liberal Democrat campaigners on how to use ChatGPT (Chat Generative Pre-trained Transformer) to create the first draft of election leaflets.
It reminds them to edit the final product carefully, though, as large language models (LLMs) such as ChatGPT have a worrying tendency to “hallucinate” or make things up.
The group is also experimenting with chatbots to help train canvassers to have more engaging conversations on the doorstep.
AI is already embedded in everyday programs, from Microsoft Outlook to Adobe Photoshop, Ms O’Rourke says, so why not use it in a responsible way to free up time for more face-to-face campaigning?…
Conservative-supporting AI expert Joe Reeve is another young political campaigner convinced the new technology can transform things for the better.
He runs Future London, a community of “techno optimists” who use AI to seek answers to big questions such as “Why can’t I buy a house?” and, crucially, “Where’s my robot butler?”
In 2020, Mr Reeve founded Tory Techs, partly as a right-wing response to Campaign Lab.
The group has run programming sessions and explored how to use AI to hone Tory campaign messages but, Mr Reeve says, it now “mostly focuses on speaking with MPs in more private and safe spaces to help coach politicians on what AI means and how it can be a positive force”.
“Technology has an opportunity to make the world a lot better for a lot of people and that is regardless of politics,” he tells BBC News…(More)”.
Synthetic Politics: Preparing democracy for Generative AI
Report by Demos: “This year is a politically momentous one, with almost half the world voting in elections. Generative AI may revolutionise our political information environments by making them more effective, relevant, and participatory. But it’s also possible that they will become more manipulative, confusing, and dangerous. We’ve already seen AI-generated audio of politicians going viral and chatbots offering incorrect information about elections.
This report, produced in partnership with University College London, explores how synthetic content produced by generative AI poses risks to the core democratic values of truth, equality, and non-violence. It proposes two action plans for what private and public decision-makers should be doing to safeguard democratic integrity immediately and in the long run:
- In Action Plan 1, we consider the actions that should be urgently put in place to reduce the acute risks to democratic integrity presented by generative AI tools. This includes reducing the production and dissemination of harmful synthetic content and empowering users so that harmful impacts of synthetic content are reduced in the immediate term.
- In Action Plan 2, we set out a longer-term vision for how the fundamental risks to democratic integrity should be addressed. We explore the ways in which generative AI tools can help bolster equality, truth and non-violence, from enabling greater democratic participation to improving how key information institutions operate…(More)”.