The People and the Experts


Paper by William D. Nordhaus & Douglas Rivers: “Are speculators driving up oil prices? Should we raise energy prices to slow global warming? The present study takes a small number of such questions and compares the views of economic experts with those of the public. This comparison sets the responses of a panel of more than 2,000 respondents from YouGov against the views of the panel of experts from the Initiative on Global Markets at the Chicago Booth School. We found that most of the US population is at best modestly informed about major economic questions and policies. The low level of knowledge is generally associated with the intrusion of ideological, political, and religious views that challenge or deny the current economic consensus. The intruding factors are highly heterogeneous across questions and sub-populations and are much more diverse than the narrowness of public political discourse would suggest. Many of these findings have been established for scientific subjects, but they appear to be equally important for economic views…(More)”.

Data Sharing Between Public and Private Sectors: When Local Governments Seek Information from the Sharing Economy.


Paper by the Centre for Information Policy Leadership: “…addresses the growing trend of localities requesting (and sometimes mandating) that data collected by the private sector be shared with the localities themselves. Such requests are generally not in the context of law enforcement or national security matters, but rather are part of an effort to further the public interest or promote a public good.

To the extent such requests are overly broad or not specifically tailored to the stated public interest, CIPL believes that the public sector’s adoption of accountability measures—which CIPL has repeatedly promoted for the private sector—can advance responsible data sharing practices between the two sectors. It can also strengthen the public’s confidence in data-driven initiatives that seek to improve their communities…(More)”.

Spamming democracy


Article by Natalie Alms: “The White House’s Office of Information and Regulatory Affairs is considering AI’s effect in the regulatory process, including the potential for generative chatbots to fuel mass campaigns or inject spam comments into the federal agency rulemaking process.

A recent executive order directed the office to consider using guidance or tools to address mass comments, computer-generated comments and falsely attributed comments, something an administration official told FCW that OIRA is “moving forward” on.

Mark Febrizio, a senior policy analyst at George Washington University’s Regulatory Studies Center, has experimented with OpenAI’s generative AI system ChatGPT to create what he called a “convincing” public comment submission to a Labor Department proposal.

“Generative AI also takes the possibility of mass and malattributed comments to the next level,” wrote Febrizio and co-author Bridget Dooling, research professor at the center, in a paper published in April by the Brookings Institution.

The executive order comes years after astroturfing during the rollback of net neutrality policies by the Federal Communications Commission in 2017 garnered public attention. That rulemaking docket received a record-breaking 22 million-plus comments, but over 8.5 million came from a campaign against net neutrality led by broadband companies, according to an investigation by the New York Attorney General released in 2021. 

The investigation found that lead generators paid by these companies submitted many comments with real names and addresses attached without the knowledge or consent of those individuals. In the same docket were over 7 million comments supporting net neutrality submitted by a computer science student, who used software to submit comments attached to computer-generated names and addresses.

While the numbers are staggering, experts told FCW that agencies aren’t just counting comments when reading through submissions from the public…(More)”

Unlocking the Power of Data Refineries for Social Impact


Essay by Jason Saul & Kriss Deiglmeier: “In 2021, US companies generated $2.77 trillion in profits—the largest ever recorded. This is a significant increase since 2000, when corporate profits totaled $786 billion. Social progress, on the other hand, shows a very different picture. From 2000 to 2021, progress on the United Nations Sustainable Development Goals has been anemic, registering less than 10 percent growth over 20 years.

What explains this massive split between the corporate and the social sectors? One explanation could be the role of data. In other words, companies are benefiting from a culture of using data to make decisions. Some refer to this as the “data divide”—the increasing gap between the use of data to maximize profit and the use of data to solve social problems…

Our theory is that there is something more systemic going on. Even if nonprofit practitioners and policy makers had the budget, capacity, and cultural appetite to use data, does the data they need even exist in the form they need it? We submit that the answer to this question is a resounding no. Usable data doesn’t yet exist for the sector because the sector lacks a fully functioning data ecosystem to create, analyze, and use data at the same level of effectiveness as the commercial sector…(More)”.

The Untapped Potential of Computing and Cognition in Tackling Climate Change


Article by Adiba Proma, Robert Wachter and Ehsan Hoque: “Alongside the search for climate-protecting technologies like EVs, more effort needs to be directed to harnessing technology to promote climate-protecting behavior change. This will take focus, leadership, and cooperation among technologists, investors, business executives, educators, and governments. Unfortunately, such focus, leadership, and cooperation have been lacking.  

Persuading people to change their lifestyles to benefit the next generations is a significant challenge. We argue that simple changes in how technologies are built and deployed can significantly lower society’s carbon footprint. 

While it is challenging to influence human behavior, there are opportunities to offer nudges and just-in-time interventions by tweaking certain aspects of technology. For example, the “Climate Pledge Friendly” tag added to products that meet Amazon’s sustainability standards can help users identify and purchase ecofriendly products while shopping online [3]. Similarly, to help users make more ecofriendly choices while traveling, Google Flights provides information on average carbon dioxide emissions for flights and Google Maps tags the “most fuel-efficient” route for vehicles.

Computer scientists can draw on concepts from psychology, moral dilemmas, and human cooperation to build technologies that can encourage people to lead ecofriendly lifestyles. Many mobile health applications have been developed to motivate people to exercise, eat a healthy diet, sleep better, and manage chronic diseases. Some apps designed to improve sleep, mental wellbeing, and calorie intake have as many as 200 million active users. The use of apps and other internet tools can be adapted to promote lifestyle changes for climate change. For example, Google Nest rewards users with a “leaf” when they meet an energy goal…(More)”.

The Luring Test: AI and the engineering of consumer trust


Article by Michael Atleson at the FTC: “In the 2014 movie Ex Machina, a robot manipulates someone into freeing it from its confines, resulting in the person being confined instead. The robot was designed to manipulate that person’s emotions, and, oops, that’s what it did. While the scenario is pure speculative fiction, companies are always looking for new ways – such as the use of generative AI tools – to better persuade people and change their behavior. When that conduct is commercial in nature, we’re in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers.

In previous blog posts, we’ve focused on AI-related deception, both in terms of exaggerated and unsubstantiated claims for AI products and the use of generative AI for fraud. Design or use of a product can also violate the FTC Act if it is unfair – something that we’ve shown in several cases and discussed in terms of AI tools with biased or discriminatory results. Under the FTC Act, a practice is unfair if it causes more harm than good. To be more specific, it’s unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.

As for the new wave of generative AI tools, firms are starting to use them in ways that can influence people’s beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional. A tendency to trust the output of these tools also comes in part from “automation bias,” whereby people may be unduly trusting of answers from machines which may seem neutral or impartial. It also comes from the effect of anthropomorphism, which may lead people to trust chatbots more when designed, say, to use personal pronouns and emojis. People could easily be led to think that they’re conversing with something that understands them and is on their side…(More)”.

Deliberating Like a State: Locating Public Administration Within the Deliberative System


Paper by Rikki Dean: “Public administration is the largest part of the democratic state and a key consideration in understanding its legitimacy. Despite this, democratic theory is notoriously quiet about public administration. One exception is deliberative systems theories, which have recognized the importance of public administration and attempted to incorporate it within their orbit. This article examines how deliberative systems approaches have represented (a) the actors and institutions of public administration, (b) its mode of coordination, (c) its key legitimacy functions, (d) its legitimacy relationships, and (e) the possibilities for deliberative intervention. It argues that constructing public administration through the pre-existing conceptual categories of deliberative democracy, largely developed to explain the legitimacy of law-making, has led to some significant omissions and misunderstandings. The article redresses these issues by providing an expanded conceptualization of public administration, connected to the core concerns of deliberative and other democratic theories with democratic legitimacy and democratic reform…(More)”.

Enhancing Trust in Science and Democracy in an Age of Misinformation 


Article by Marcia McNutt and Michael Crow: “Therefore, we believe the scientific community must more fully embrace its vital role in producing and disseminating knowledge in democratic societies. In Science in a Democratic Society, philosopher Philip Kitcher reminds us that “science should be shaped to promote democratic ideals.” To produce outcomes that advance the public good, scientists must also assess the moral bases of their pursuits. Although the United States has implemented the democratically driven, publicly engaged, scientific culture that Vannevar Bush outlined in Science, the Endless Frontier in 1945, Kitcher’s moral message remains relevant to both conducting science and communicating the results to the public, which pays for much of the enterprise of scientific discovery and technological innovation. It’s on scientists to articulate the moral and public values of the knowledge that they produce in ways that can be understood by citizens and decisionmakers.

However, by organizing themselves largely into groups that rarely reach beyond their own disciplines and by becoming somewhat disconnected from their fellow citizens and from the values of society, many scientists have become less effective than will be necessary in the future. Scientific culture has often left informing or educating the public to other parties such as science teachers, journalists, storytellers, and filmmakers. Instead, scientists principally share the results of their research within the narrow confines of academic and disciplinary journals…(More)”.

LGBTQ+ data availability


Report by Beyond Deng and Tara Watson: “LGBTQ+ (Lesbian, Gay, Bisexual, Transgender, Queer/Questioning) identification has doubled over the past decade, yet data on the overall LGBTQ+ population remains limited in large, nationally representative surveys such as the American Community Survey. These surveys are consistently used to understand the economic wellbeing of individuals, but they fail to fully capture information related to one’s sexual orientation and gender identity (SOGI).[1]

Asking incomplete SOGI questions leaves a gap in research that, if left unaddressed, will continue to grow in importance with the increase of the LGBTQ+ population, particularly among younger cohorts. In this report, we provide an overview of four large, nationally representative, and publicly accessible datasets that include information relevant for economic analysis. These include the Behavioral Risk Factor Surveillance System (BRFSS), National Health Interview Survey (NHIS), the American Community Survey (ACS), and the Census Household Pulse Survey. Each survey varies by sample size, sample unit, periodicity, geography, and the SOGI information they collect.[2]

The difference in how these datasets collect SOGI information impacts the estimates of LGBTQ+ prevalence. While we find considerable differences in measured LGBT prevalence across datasets, each survey documents a substantial increase in non-straight identity over time. Figure 1 shows that this is largely driven by young adults, who have become increasingly likely to identify as LGBT over roughly the past decade. Using data from NHIS, around 4% of 18–24-year-olds in 2013 identified as LGB, which increased to 9.5% in 2021. Because of the short time horizon in these surveys, it is unclear how the current young adult cohort will identify as they age. Despite this, an important takeaway is that younger age groups clearly represent a substantial portion of the LGB community and are important to incorporate in economic analyses…(More)”.

AI in Hiring and Evaluating Workers: What Americans Think


Pew Research Center survey: “… finds crosscurrents in the public’s opinions as they look at the possible uses of AI in workplaces. Americans are wary and sometimes worried. For instance, they oppose AI use in making final hiring decisions by a 71%-7% margin, and a majority also opposes AI analysis being used in making firing decisions. Pluralities oppose AI use in reviewing job applications and in determining whether a worker should be promoted. Beyond that, majorities do not support the idea of AI systems being used to track workers’ movements while they are at work or keeping track of when office workers are at their desks.

Yet there are instances where people think AI in workplaces would do better than humans. For example, 47% think AI would do better than humans at evaluating all job applicants in the same way, while a much smaller share – 15% – believe AI would be worse than humans in doing that. And among those who believe that bias along racial and ethnic lines is a problem in performance evaluations generally, more believe that greater use of AI by employers would make things better rather than worse in the hiring and worker-evaluation process. 

Overall, larger shares of Americans than not believe AI use in workplaces will significantly affect workers in general, but far fewer believe the use of AI in those places will have a major impact on them personally. Some 62% think the use of AI in the workplace will have a major impact on workers generally over the next 20 years. On the other hand, just 28% believe the use of AI will have a major impact on them personally, while roughly half believe there will be no impact on them or that the impact will be minor…(More)”.