German lawmakers mull creating first citizen assembly


AP News: “German lawmakers considered Wednesday whether to create the country’s first ‘citizen assembly’ to advise parliament on the issue of food and nutrition.

Germany’s three governing parties back the idea of appointing consultative bodies made up of members of the public selected through a lottery system who would discuss specific topics and provide nonbinding feedback to legislators. But opposition parties have rejected the idea, warning that such citizen assemblies risk undermining the primacy of parliament in Germany’s political system.
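
As an illustration of the selection mechanism, here is a minimal sketch of a stratified lottery draw of the kind such bodies commonly use; the registry, strata and quotas below are invented for the example and are not taken from the German proposal:

```python
import random
from collections import defaultdict

def lottery_select(pool, stratum_of, quotas, seed=None):
    """Draw a citizen panel by lot, filling a quota per stratum so the
    panel roughly mirrors the population (a common sortition design)."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for person in pool:
        by_stratum[stratum_of(person)].append(person)
    panel = []
    for stratum, quota in quotas.items():
        candidates = by_stratum.get(stratum, [])
        panel.extend(rng.sample(candidates, min(quota, len(candidates))))
    return panel

# Hypothetical registry of volunteers, stratified by age band only.
registry = [{"id": i, "age_band": band}
            for i, band in enumerate(["18-34", "35-54", "55+"] * 50)]
panel = lottery_select(registry, lambda p: p["age_band"],
                       quotas={"18-34": 5, "35-54": 6, "55+": 5}, seed=1)
print(len(panel))  # 16 randomly drawn panellists
```

Real-world designs stratify on several attributes at once (age, gender, region, education) so the panel reflects society along each dimension, which is the diversity Bas alludes to below.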

Baerbel Bas, the speaker of the lower house, or Bundestag, said that she views such bodies as a “bridge between citizens and politicians that can provide a fresh perspective and create new confidence in established institutions.”

“Everyone should be able to have a say,” Bas told daily Passauer Neue Presse. “We want to better reflect the diversity in our society.”

Environmental activists from the group Last Generation have campaigned for the creation of a citizen assembly to address issues surrounding climate change. However, the group argues that proposals drawn up by such a body should at the very least result in bills that lawmakers would then vote on.

Similar efforts to create citizen assemblies have taken place in other European countries such as Spain, Finland, Austria, Britain and Ireland…(More)”.

Misunderstanding Misinformation


Article by Claire Wardle: “In the fall of 2017, Collins Dictionary named fake news word of the year. It was hard to argue with the decision. Journalists were using the phrase to raise awareness of false and misleading information online. Academics had started publishing copiously on the subject and even named conferences after it. And of course, US president Donald Trump regularly used the epithet from the podium to discredit nearly anything he disliked.

By spring of that year, I had already become exasperated by how this term was being used to attack the news media. Worse, it had never captured the problem: most content wasn’t actually fake, but genuine content used out of context—and only rarely did it look like news. I made a rallying cry to stop using fake news and instead use misinformation, disinformation, and malinformation under the umbrella term information disorder. These terms, especially the first two, have caught on, but they represent an overly simple, tidy framework I no longer find useful.

Both disinformation and misinformation describe false or misleading claims, but disinformation is distributed with the intent to cause harm, whereas misinformation is the mistaken sharing of the same content. Analyses of both generally focus on whether a post is accurate and whether it is intended to mislead. The result? We researchers become so obsessed with labeling the dots that we can’t see the larger pattern they show.
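
Wardle’s point is easier to see when the framework is written down: the whole taxonomy reduces to two booleans, accuracy and intent. A toy sketch of that labeling scheme (the `Post` type and its fields are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Post:
    content_is_false: bool  # is the claim false or misleading?
    intent_to_harm: bool    # was it shared in order to cause harm?

def classify(post: Post) -> str:
    """Label a post under the 'information disorder' framework:
    two axes, accuracy and intent."""
    if post.content_is_false and post.intent_to_harm:
        return "disinformation"  # false content, spread to harm
    if post.content_is_false:
        return "misinformation"  # false content, shared in error
    if post.intent_to_harm:
        return "malinformation"  # genuine content weaponized, e.g. out of context
    return "ordinary content"

print(classify(Post(content_is_false=True, intent_to_harm=False)))  # misinformation
```

Everything that actually makes a post matter – context, narrative, identity, reach, harm over time – falls outside these two fields, which is precisely the critique that follows.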

By focusing narrowly on problematic content, researchers are failing to understand the increasingly sizable number of people who create and share this content, and also overlooking the larger context of what information people actually need. Academics are not going to effectively strengthen the information ecosystem until we shift our perspective from classifying every post to understanding the social contexts of this information, how it fits into narratives and identities, and its short-term impacts and long-term harms…(More)”.

Will A.I. Become the New McKinsey?


Essay by Ted Chiang: “When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it’s become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.

So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America…(More)”.

Spamming democracy


Article by Natalie Alms: “The White House’s Office of Information and Regulatory Affairs is considering AI’s effect on the regulatory process, including the potential for generative chatbots to fuel mass campaigns or inject spam comments into federal agency rulemaking.

A recent executive order directed the office to consider using guidance or tools to address mass comments, computer-generated comments and falsely attributed comments, something an administration official told FCW that OIRA is “moving forward” on.

Mark Febrizio, a senior policy analyst at George Washington University’s Regulatory Studies Center, has experimented with OpenAI’s generative AI system ChatGPT to create what he called a “convincing” public comment submission to a Labor Department proposal.

“Generative AI also takes the possibility of mass and malattributed comments to the next level,” wrote Febrizio and co-author Bridget Dooling, a research professor at the center, in a paper published in April by the Brookings Institution.

The executive order comes years after astroturfing during the rollback of net neutrality policies by the Federal Communications Commission in 2017 garnered public attention. That rulemaking docket received a record-breaking 22 million-plus comments, but over 8.5 million came from a campaign against net neutrality led by broadband companies, according to an investigation by the New York Attorney General released in 2021. 

The investigation found that lead generators paid by these companies submitted many comments with real names and addresses attached without the knowledge or consent of those individuals. In the same docket were over 7 million comments supporting net neutrality submitted by a computer science student, who used software to submit comments attached to computer-generated names and addresses.
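
One family of tools agencies could apply to dockets like this is near-duplicate detection: flagging clusters of essentially identical comments as a signal of a coordinated campaign. A minimal sketch using word-shingle Jaccard similarity follows; the threshold and the toy docket are invented, and production systems would cluster millions of comments with scalable techniques such as MinHash rather than pairwise comparison:

```python
import re
from itertools import combinations

def shingles(text, n=3):
    """Word n-grams of a normalized comment."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_campaigns(comments, threshold=0.8):
    """Flag pairs of near-identical comments -- one crude signal of a
    mass-mail or bot campaign."""
    sigs = [shingles(c) for c in comments]
    return [(i, j) for i, j in combinations(range(len(comments)), 2)
            if jaccard(sigs[i], sigs[j]) >= threshold]

docket = [
    "I oppose this rule because it burdens small businesses.",
    "I oppose this rule, because it burdens small businesses!",
    "Please strengthen net neutrality protections for consumers.",
]
print(flag_campaigns(docket))  # [(0, 1)]
```

Note that this catches copy-paste campaigns but not fluent, individually generated AI comments, which is exactly why generative models raise the stakes.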

While the numbers are staggering, experts told FCW that agencies aren’t just counting comments when reading through submissions from the public…(More)”

Advising in an Imperfect World – Expert Reflexivity and the Limits of Data


Article by Justyna Bandola-Gill, Marlee Tichenor and Sotiria Grek: “Producing and making use of data and metrics in policy making have important limitations – from practical issues with missing or incomplete data to political challenges of navigating both the intended and unintended consequences of implementing monitoring and evaluation programmes. But how do experts producing quantified evidence make sense of these challenges and how do they navigate working in imperfect statistical environments? In our recent study, drawing on over 80 interviews with experts working in key International Organisations, we explored these questions by looking at the concept of expert reflexivity.

We soon discovered that experts working with data and statistics approach reflexivity not only as a thought process but also as an important strategic resource they use to work effectively – to negotiate with different actors and their agendas, build consensus and support diverse groups of stakeholders. More importantly, reflexivity is a complex and multifaceted process, and one that is often not discussed explicitly in expert work. We aimed to capture this diversity by categorising experts’ actions and perceptions into three types of reflexivity: epistemic, care-ful and instrumental. Experts mix and match these different modes depending on their preferences, strategic goals or even personal characteristics.

Epistemic reflexivity concerns the quality of data and measurement and allows for reflection on how well (or how poorly) metrics represent real-life problems. Here, the experts discussed how they negotiate the necessary limits of data and metrics with an awareness of the far-reaching implications of publishing official numbers. They recognised that data and metrics do not mirror reality and critically reflected on which aspects of measured problems – such as health, poverty or education – get misrepresented in the process of measurement. Sometimes this even meant advising against measurement, to avoid producing and reproducing uncertainty.

Care-ful reflexivity allows for imbuing quantified practices with values and care for the populations affected by the measurement. Experts positioned themselves as active participants in the process of solving challenges and advocating for disadvantaged groups (and did so via numbers). This type of reflexivity was also mobilised to make sense of the key challenge of expertise, one that would be familiar to anyone advocating for evidence-informed decision-making: our interviewees acknowledged that the production of numbers very rarely leads to change. The key motivator to keep going despite this was the duty of care for the populations on whose behalf the numbers spoke. Experts believed that being ‘care-ful’ required them to monitor levels of different forms of inequalities, even if it was just to acknowledge the problem and expose it rather than solve it…(More)”.

From Fragmentation to Coordination: The Case for an Institutional Mechanism for Cross-Border Data Flows


Report by the World Economic Forum: “Digital transformation of the global economy is bringing markets and people closer. Few conveniences of modern life – from international travel to online shopping to cross-border payments – would exist without the free flow of data.

Yet, impediments to free-flowing data are growing. The “Data Free Flow with Trust (DFFT)” concept is based on the idea that responsible data concerns, such as privacy and security, can be addressed without obstructing international data transfers. Policy-makers, trade negotiators and regulators are actively working on this, and while important progress has been made, an effective and trusted international cooperation mechanism would amplify their efforts.

This white paper makes the case for establishing such a mechanism with a permanent secretariat, starting with the Group of Seven (G7) member countries and ensuring the participation of high-level representatives of multiple stakeholder groups, including the private sector, academia and civil society.

This new institution would go beyond short-term fixes and catalyse long-term thinking to operationalize DFFT…(More)”.

Unlocking the Power of Data Refineries for Social Impact


Essay by Jason Saul & Kriss Deiglmeier: “In 2021, US companies generated $2.77 trillion in profits—the largest ever recorded in history. This is a significant increase since 2000 when corporate profits totaled $786 billion. Social progress, on the other hand, shows a very different picture. From 2000 to 2021, progress on the United Nations Sustainable Development Goals has been anemic, registering less than 10 percent growth over 20 years.
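
The gap is stark when the essay’s own figures are put side by side; a rough, illustrative calculation:

```python
# Rough comparison implied by the essay's figures (illustrative only).
profits_2000 = 786e9      # US corporate profits, 2000
profits_2021 = 2.77e12    # US corporate profits, 2021
growth = (profits_2021 - profits_2000) / profits_2000
print(f"corporate profit growth, 2000-2021: +{growth:.0%}")  # about +252%
print("SDG progress over the same period: <10%")             # per the essay
```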

What explains this massive split between the corporate and the social sectors? One explanation could be the role of data. In other words, companies are benefiting from a culture of using data to make decisions. Some refer to this as the “data divide”—the increasing gap between the use of data to maximize profit and the use of data to solve social problems…

Our theory is that there is something more systemic going on. Even if nonprofit practitioners and policy makers had the budget, capacity, and cultural appetite to use data, does the data they need even exist in the form they need it? We submit that the answer to this question is a resounding no. Usable data doesn’t yet exist for the sector because the sector lacks a fully functioning data ecosystem to create, analyze, and use data at the same level of effectiveness as the commercial sector…(More)”.

The Many Forms of Decentralization and Citizen Trust in Government


Paper by Michael A. Nelson: “This paper contributes to the literature on the nexus between decentralization and citizen trust in government through the use of a comprehensive set of decentralization measures that have been recently developed. Using measures of autonomy at both the regional and local (municipal) levels of government, and responses from five recent waves of the World Values Survey on citizen trust/confidence in their national government, the civil service, and the police, several interesting insights emerged from the analysis. First, giving regional governments a voice in policy making for the country as a whole promotes trust in government at the national level and in the civil service. Second, deconcentration – central government offices at the regional level as opposed to autonomous regional governments – appears to be an effective strategy to generate greater confidence in government activities. Third, affording regional and local governments complete autonomy in the delivery of government services without at least some oversight by higher levels of government is not found to be trust promoting. Finally, giving local governments authority to levy at least one major tax is associated with greater government trust, a finding that is consistent with others who have found tax decentralization to be linked with better outcomes in the public sector. Overall, the analysis suggests that the caution researchers sometimes give when using one-dimensional measures of the authority/autonomy of subnational governments, such as fiscal decentralization, is warranted…(More)”.
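
For readers who want a feel for the empirical setup, here is a heavily simplified sketch of the kind of model such a study might estimate: ordinal trust responses regressed on decentralization measures. The data file and every variable name below are hypothetical, and the paper’s actual specification is certainly richer:

```python
# Ordinal trust responses regressed on decentralization measures.
# Hypothetical file and variable names, for illustration only.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("wvs_trust_decentralization.csv")  # hypothetical merged panel
model = OrderedModel(
    df["trust_national_gov"],                  # ordinal WVS trust response
    df[["regional_shared_rule",                # regional voice in national policy
        "deconcentration",                     # central offices at regional level
        "local_tax_authority",                 # local power over a major tax
        "log_gdp_per_capita"]],                # example control
    distr="logit",
)
print(model.fit(method="bfgs").summary())
```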

The Luring Test: AI and the engineering of consumer trust


Article by Michael Atleson at the FTC: “In the 2014 movie Ex Machina, a robot manipulates someone into freeing it from its confines, resulting in the person being confined instead. The robot was designed to manipulate that person’s emotions, and, oops, that’s what it did. While the scenario is pure speculative fiction, companies are always looking for new ways – such as the use of generative AI tools – to better persuade people and change their behavior. When that conduct is commercial in nature, we’re in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers.

In previous blog posts, we’ve focused on AI-related deception, both in terms of exaggerated and unsubstantiated claims for AI products and the use of generative AI for fraud. Design or use of a product can also violate the FTC Act if it is unfair – something that we’ve shown in several cases and discussed in terms of AI tools with biased or discriminatory results. Under the FTC Act, a practice is unfair if it causes more harm than good. To be more specific, it’s unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.
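
The three-prong test quoted above has a clean logical structure, which a toy predicate makes explicit; this is only a restatement of the statutory wording in the post, and actual unfairness analysis is a fact-specific legal judgment:

```python
def is_unfair(substantial_injury: bool,
              reasonably_avoidable: bool,
              outweighed_by_benefits: bool) -> bool:
    """FTC Act unfairness, as stated in the post: substantial injury
    that is neither reasonably avoidable by consumers nor outweighed
    by countervailing benefits to consumers or competition."""
    return (substantial_injury
            and not reasonably_avoidable
            and not outweighed_by_benefits)

# All three prongs must hold; injury alone is not enough.
print(is_unfair(True, True, False))   # False: consumers could avoid the harm
print(is_unfair(True, False, False))  # True: injury, unavoidable, no offsetting benefit
```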

As for the new wave of generative AI tools, firms are starting to use them in ways that can influence people’s beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional. A tendency to trust the output of these tools also comes in part from “automation bias,” whereby people may be unduly trusting of answers from machines which may seem neutral or impartial. It also comes from the effect of anthropomorphism, which may lead people to trust chatbots more when designed, say, to use personal pronouns and emojis. People could easily be led to think that they’re conversing with something that understands them and is on their side…(More)”.

Deliberating Like a State: Locating Public Administration Within the Deliberative System


Paper by Rikki Dean: “Public administration is the largest part of the democratic state and a key consideration in understanding its legitimacy. Despite this, democratic theory is notoriously quiet about public administration. One exception is deliberative systems theories, which have recognized the importance of public administration and attempted to incorporate it within their orbit. This article examines how deliberative systems approaches have represented (a) the actors and institutions of public administration, (b) its mode of coordination, (c) its key legitimacy functions, (d) its legitimacy relationships, and (e) the possibilities for deliberative intervention. It argues that constructing public administration through the pre-existing conceptual categories of deliberative democracy, largely developed to explain the legitimacy of law-making, has led to some significant omissions and misunderstandings. The article redresses these issues by providing an expanded conceptualization of public administration, connected to the core concerns of deliberative and other democratic theories with democratic legitimacy and democratic reform…(More)”.