Thousands of U.S. Government Web Pages Have Been Taken Down Since Friday


Article by Ethan Singer: “More than 8,000 web pages across more than a dozen U.S. government websites have been taken down since Friday afternoon, a New York Times analysis has found, as federal agencies rush to heed President Trump’s orders targeting diversity initiatives and ‘gender ideology.’

The purges have removed information about vaccines, veterans’ care, hate crimes and scientific research, among many other topics. Doctors, researchers and other professionals often rely on such government data and advisories. Some government agencies appear to have removed entire sections of their websites, while others are missing only a handful of pages.

Among the pages that have been taken down:

(The links are to archived versions.)

Developing a theory of robust democracy


Paper by Eva Sørensen and Mark E. Warren: “While many democratic theorists recognise the necessity of reforming liberal democracies to keep pace with social change, they rarely consider what enables such reform. In this conceptual article, we suggest that liberal democracies are politically robust when they are able to continuously adapt and innovate how they operate when doing so is necessary to continue to serve key democratic functions. These functions include securing the empowered inclusion of those affected, collective agenda setting and will formation, and the making of joint decisions. Three current challenges highlight the urgency of adapting and innovating liberal democracies to become more politically robust: an increasingly assertive political culture, the digitalisation of political communication and increasing global interdependencies. A democratic theory of political robustness emphasises the need to strengthen the capacity of liberal democracies to adapt and innovate in response to changes, just as it helps to frame the necessary adaptations and innovations in times such as the present…(More)”

Establish data collaboratives to foster meaningful public involvement


Article by Gwen Ottinger: “…Data Collaboratives would move public participation and community engagement upstream in the policy process by creating opportunities for community members to contribute their lived experience to the assessment of data and the framing of policy problems. This would in turn foster two-way communication and trusting relationships between government and the public. Data Collaboratives would also help ensure that data and their uses in federal government are equitable, by inviting a broader range of perspectives on how data analysis can promote equity and where relevant data are missing. Finally, Data Collaboratives would be one vehicle for enabling individuals to participate in science, technology, engineering, math, and medicine activities throughout their lives, increasing the quality of American science and the competitiveness of American industry…(More)”.

Experts warn about the ‘crumbling infrastructure’ of federal government data


Article by Hansi Lo Wang: “The stability of the federal government’s system for producing statistics, which the U.S. relies on to understand its population and economy, is under threat because of budget concerns, officials and data users warn.

And that’s before any follow-through on the new Trump administration and Republican lawmakers’ pledges to slash government spending, which could further affect data production.

In recent months, budget shortfalls and the restrictions of short-term funding have led to the end of some datasets by the Bureau of Economic Analysis, known for its tracking of the gross domestic product, and to proposals by the Bureau of Labor Statistics to reduce the number of participants surveyed to produce the monthly jobs report. A “lack of multiyear funding” has also hurt efforts to modernize the software and other technology the BLS needs to put out its data properly, concluded a report by an expert panel tasked with examining multiple botched data releases last year.

Long-term funding questions are also dogging the Census Bureau, which carries out many of the federal government’s surveys and is preparing for the 2030 head count that’s set to be used to redistribute political representation and trillions in public funding across the country. Some census watchers are concerned budget issues may force the bureau to cancel some of its field tests for the upcoming tally, as it did with 2020 census tests for improving the counts in Spanish-speaking communities, rural areas and on Indigenous reservations.

While the statistical agencies have not been named specifically, some advocates are worried that calls to reduce the federal government’s workforce by President Trump and the new Republican-controlled Congress could put the integrity of the country’s data at greater risk…(More)”

Problems of participatory processes in policymaking: a service design approach


Paper by Susana Díez-Calvo, Iván Lidón, Rubén Rebollar, Ignacio Gil-Pérez: “This study aims to identify and map the problems of participatory processes in policymaking through a Service Design approach… Fifteen problems of participatory processes in policymaking were identified, and some differences were observed in the perception of these problems between the stakeholders responsible for designing and implementing the participatory processes (backstage stakeholders) and those who are called upon to participate (frontstage stakeholders). The problems were found to occur at different stages of the service and to affect different stakeholders. A number of design actions were proposed to help mitigate these problems from a human-centred approach. These included process improvements, digital opportunities, new technologies and staff training, among others…(More)”.

Developing a Framework for Collective Data Rights


Report by Jeni Tennison: “Are collective data rights really necessary? Or, do people and communities already have sufficient rights to address harms through equality, public administration or consumer law? Might collective data rights even be harmful by undermining individual data rights or creating unjust collectivities? If we did have collective data rights, what should they look like? And how could they be introduced into legislation?

Data protection law and policy are founded on the notion of individual notice and consent, originating from the handling of personal data gathered for medical and scientific research. However, recent work on data governance has highlighted shortcomings with the notice-and-consent approach, especially in an age of big data and artificial intelligence. This special report considers the need for collective data rights by examining legal remedies currently available in the United Kingdom in three scenarios where the people affected by algorithmic decision making are not data subjects and therefore do not have individual data protection rights…(More)”.

You Be the Judge: How Taobao Crowdsourced Its Courts


Excerpt from Lizhi Liu’s new book, “From Click to Boom”: “When disputes occur, Taobao encourages buyers and sellers to negotiate with each other first. If the feuding parties cannot reach an agreement and do not want to go to court, they can use one of Taobao’s two judicial channels: asking a Taobao employee to adjudicate or using an online jury panel to arbitrate. This section discusses the second channel, a unique Chinese institutional innovation.

Alibaba’s Public Jury was established in 2012 to crowdsource justice. It uses a Western-style jury-voting mechanism to solve online disputes and controversial issues. These jurors are termed “public assessors” by Taobao. Interestingly, the name “public assessor” was drawn from the Chinese talent show “Super Girl” (similar to “American Idol”), which, after the authorities shut down its mass voting system, transitioned to using a small panel of audience representatives (or “public assessors”) to vote for the show’s winner. The public jury was widely used by the main Taobao site by 2020 and is now frequently used by Xianyu, Taobao’s used-goods market.

Why did Taobao introduce the jury system? Certainly, as Taobao expanded, the volume of online disputes surged, posing challenges for the platform to handle all disputes by itself. However, according to a former platform employee responsible for designing this institution, the primary motivation was not the caseload. Instead, it was propelled by the complexity of online disputes that proved challenging for the platform to resolve alone. Consequently, they opted to involve users in adjudicating these cases to ensure a fairer process rather than solely relying on platform intervention.

To form a jury, Taobao randomly chooses each panel of 13 jurors from 4 million volunteer candidates; each juror may participate in up to 40 cases per day. The candidate needs to be an experienced Taobao user (i.e., those who have registered for more than a year) with a good online reputation (i.e., those who have a sufficiently high credit rating, as discussed below). This requirement is high enough to prevent most dishonest traders from manipulating votes, but low enough to be inclusive and keep the juror pool large. These jurors are unpaid yet motivated to participate. They gain experience points that can translate into different virtual titles or that can be donated to charity by Taobao as real money…(More)”
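The panel-selection rules described in the excerpt can be sketched in a few lines of code. This is a minimal illustration of the mechanism only; all names and thresholds (in particular the credit-rating cut-off) are assumptions, not Taobao's actual implementation:

```python
import random

# Sketch of the jury-panel rules described above. Field names and the
# credit-rating threshold are assumptions for illustration only.

PANEL_SIZE = 13
DAILY_CASE_CAP = 40          # "up to 40 cases per day"
MIN_ACCOUNT_AGE_DAYS = 365   # "registered for more than a year"
MIN_CREDIT_RATING = 4        # assumed cut-off for a "good" rating

def is_eligible(user):
    """Experienced, reputable volunteers who still have capacity today."""
    return (
        user["account_age_days"] > MIN_ACCOUNT_AGE_DAYS
        and user["credit_rating"] >= MIN_CREDIT_RATING
        and user["cases_today"] < DAILY_CASE_CAP
    )

def draw_panel(volunteers, rng=random):
    """Randomly seat a 13-person panel from the eligible pool."""
    pool = [u for u in volunteers if is_eligible(u)]
    if len(pool) < PANEL_SIZE:
        raise ValueError("not enough eligible jurors")
    return rng.sample(pool, PANEL_SIZE)

def verdict(votes):
    """Resolve the dispute by simple majority of juror votes."""
    return max(set(votes), key=votes.count)
```

The odd panel size (13) guarantees a majority winner in a two-party dispute, and the eligibility filter mirrors the book's point: strict enough to deter vote manipulation, loose enough to keep the volunteer pool in the millions.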

Un-Plateauing Corruption Research? Perhaps less necessary, but more exciting than one might think


Article by Dieter Zinnbauer: “There is a sense in the anti-corruption research community that we may have reached some plateau (or less politely, hit a wall). This article argues – at least partly – against this claim.

We may have reached a plateau with regard to some recurring (staid?) scholarly and policy debates that resurface with eerie regularity, tend to suck all the oxygen out of the room, yet remain essentially unsettled and irresolvable. Questions aimed at arriving at closure on what constitutes corruption, passing authoritative judgements on what works and what does not, and rather grand pronouncements on whether progress has or has not been made all fall into this category.

At the same time, there is exciting work, often in unexpected places outside the inner ward of the anti-corruption castle, contributing new approaches and fresh-ish insights, and there are promising leads for exciting research on the horizon. Such areas include the underappreciated idiosyncrasies of corruption in the form of inaction rather than action, the use of satellites and remote sensing techniques to better understand and measure corruption, the overlooked role of short-sellers in tackling complex forms of corporate corruption and the growing phenomenon of integrity capture, the anti-corruption apparatus co-opted for sinister, corrupt purposes.

These are just four examples of the colourful opportunity tapestry for (anti)corruption research moving forward, not in the form of a great unified project and overarching new idea but as little stabs of potentiality here and there and somewhere else surprisingly unbeknownst…(More)”

Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models


Article by Yaqub Chaudhary and Jonnie Penn: “The rapid proliferation of large language models (LLMs) invites the possibility of a new marketplace for behavioral and psychological data that signals intent. This brief article introduces some initial features of that emerging marketplace. We survey recent efforts by tech executives to position the capture, manipulation, and commodification of human intentionality as a lucrative parallel to—and viable extension of—the now-dominant attention economy, which has bent consumer, civic, and media norms around users’ finite attention spans since the 1990s. We call this follow-on the intention economy. We characterize it in two ways. First, as a competition, initially, between established tech players armed with the infrastructural and data capacities needed to vie for first-mover advantage on a new frontier of persuasive technologies. Second, as a commodification of hitherto unreachable levels of explicit and implicit data that signal intent, namely those signals borne of combining (a) hyper-personalized manipulation via LLM-based sycophancy, ingratiation, and emotional infiltration and (b) increasingly detailed categorization of online activity elicited through natural language.

This new dimension of automated persuasion draws on the unique capabilities of LLMs and generative AI more broadly, which intervene not only on what users want, but also, to cite Williams, “what they want to want” (Williams, 2018, p. 122). We demonstrate through a close reading of recent technical and critical literature (including unpublished papers from ArXiv) that such tools are already being explored to elicit, infer, collect, record, understand, forecast, and ultimately manipulate, modulate, and commodify human plans and purposes, both mundane (e.g., selecting a hotel) and profound (e.g., selecting a political candidate)…(More)”.

Good government data requires good statistics officials – but how motivated and competent are they?


World Bank Blog: “Government data is only as reliable as the statistics officials who produce it. Yet, surprisingly little is known about these officials themselves. For decades, they have diligently collected data on others – such as households and firms – to generate official statistics, from poverty rates to inflation figures. Yet, data about statistics officials themselves is missing. How competent are they at analyzing statistical data? How motivated are they to excel in their roles? Do they uphold integrity when producing official statistics, even in the face of opposing career incentives or political pressures? And what can National Statistical Offices (NSOs) do to cultivate a workforce that is competent, motivated, and ethical?

We surveyed 13,300 statistics officials in 14 countries in Latin America and the Caribbean to find out. Five results stand out. For further insights, consult our Inter-American Development Bank (IDB) report, Making National Statistical Offices Work Better.

1. The competence and management of statistics officials shape the quality of statistical data

Our survey included a short exam assessing basic statistical competencies, such as descriptive statistics and probability. Statistical competence correlates with data quality: NSOs with higher exam scores among employees tend to achieve better results in the World Bank’s Statistical Performance Indicators (r = 0.36).

NSOs with better management practices also have better statistical performance. For instance, NSOs with more robust recruitment and selection processes have better statistical performance (r = 0.62)…(More)”.
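The r values quoted above are Pearson correlation coefficients. A minimal sketch of how such a coefficient is computed, using made-up office-level numbers for illustration (not the report's data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by the two standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical office-level averages: staff exam score vs. the office's
# Statistical Performance Indicator score (illustrative numbers only).
exam_scores = [52, 61, 58, 70, 66, 74]
spi_scores  = [55, 60, 63, 72, 64, 78]
print(round(pearson_r(exam_scores, spi_scores), 2))  # prints 0.94
```

An r of 0.36 or 0.62, as in the survey, indicates a moderate-to-strong positive association, not the near-perfect relationship of this toy example; correlation also says nothing about the direction of causation between competence, management, and data quality.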