The Luring Test: AI and the engineering of consumer trust


Article by Michael Atleson at the FTC: “In the 2014 movie Ex Machina, a robot manipulates someone into freeing it from its confines, resulting in the person being confined instead. The robot was designed to manipulate that person’s emotions, and, oops, that’s what it did. While the scenario is pure speculative fiction, companies are always looking for new ways – such as the use of generative AI tools – to better persuade people and change their behavior. When that conduct is commercial in nature, we’re in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers.

In previous blog posts, we’ve focused on AI-related deception, both in terms of exaggerated and unsubstantiated claims for AI products and the use of generative AI for fraud. Design or use of a product can also violate the FTC Act if it is unfair – something that we’ve shown in several cases and discussed in terms of AI tools with biased or discriminatory results. Under the FTC Act, a practice is unfair if it causes more harm than good. To be more specific, it’s unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.

As for the new wave of generative AI tools, firms are starting to use them in ways that can influence people’s beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional. A tendency to trust the output of these tools also comes in part from “automation bias,” whereby people may be unduly trusting of answers from machines which may seem neutral or impartial. It also comes from the effect of anthropomorphism, which may lead people to trust chatbots more when designed, say, to use personal pronouns and emojis. People could easily be led to think that they’re conversing with something that understands them and is on their side…(More)”.

Air-Pollution Knowledge Is Power


Article by Chana R. Schoenberger: “What happens when people in countries where the government offers little pollution monitoring learn that the air quality is dangerous? A new study details how the US Embassy in Beijing began to monitor the Chinese capital’s air-pollution levels and tweet about them in 2008. The program later extended to other US embassies in cities around the world. The practice led to a measurable decline in air pollution in those cities, few of which had local pollution monitoring before, the researchers found.

The paper’s authors, Akshaya Jha, an assistant professor of economics and public policy at Carnegie Mellon University, and Andrea La Nauze, a lecturer at the School of Economics at the University of Queensland, used satellite data to compare pollution levels, measured annually. The researchers found that the level of air pollution went down after the local US embassy began tweeting pollution numbers from monitoring equipment that diplomatic personnel had installed.

The embassy program yielded a drop in fine-particulate concentration levels of 2 to 4 micrograms per cubic meter, leading to a decline in premature mortality worth $127 million for the median city in 2019. “Our findings point to the substantial benefits of improving the availability and salience of air-quality information in low- and middle-income countries,” Jha and La Nauze write.

News coverage of the US government’s Beijing pollution monitoring sparked the researchers’ interest, La Nauze says. At the time, American diplomats were quoted saying that the embassy’s tweets led to marked changes in pollution levels in Beijing. When the researchers learned that the US State Department had extended the program to embassies around the world, they thought there might be a way to evaluate the diplomats’ claims empirically.

A problem the researchers confronted was how to quantify the impact of measuring something that had never been measured before…(More)” – See also: US Embassy Air-Quality Tweets Led to Global Health Benefits
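
The evaluation described above is essentially a before-and-after comparison of satellite-measured pollution across cities with and without embassy monitors. A minimal sketch of how such a comparison might be set up is below, assuming a hypothetical panel of annual city-level PM2.5 readings and each embassy's monitoring start year; the input file, column names, and model specification are illustrative assumptions, not the authors' actual data or code.

```python
# Hypothetical sketch: estimate the change in satellite-measured PM2.5 after a
# city's US embassy begins monitoring and tweeting air-quality readings.
# The file and column names below are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# One row per city-year: ['city', 'year', 'pm25', 'monitor_start_year']
df = pd.read_csv("city_year_pm25.csv")  # hypothetical input file

# 1 for city-years after embassy monitoring began; 0 before, and 0 throughout
# for cities whose embassies never installed a monitor (missing start year).
df["post"] = (df["year"] >= df["monitor_start_year"]).astype(int)

# Two-way fixed effects: city effects absorb time-invariant differences across
# cities, year effects absorb global trends in satellite-measured pollution.
# The coefficient on 'post' is the estimated change in PM2.5 (micrograms per
# cubic meter) associated with the embassy program.
model = smf.ols("pm25 ~ post + C(city) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["city"]})
print(result.params["post"], result.bse["post"])
```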

Data Rivers: Carving Out the Public Domain in the Age of Generative AI


Paper by Sylvie Delacroix: “What if the data ecosystems that made the advent of generative AI possible are being undermined by those very tools? For tools such as GPT4 (it is but one example of a tool made possible by scraping data from the internet), the erection of IP ‘fences’ is an existential threat. European and British regulators are alert to it: so-called ‘text and data mining’ exceptions are at the heart of intense debates. In the US, these debates are taking place in court hearings structured around ‘fair use’. While the concerns of the corporations developing these tools are being heard, there is currently no reliable mechanism for members of the public to exert influence on the (re)-balancing of the rights and responsibilities that shape our ‘data rivers’. Yet the existential threat that stems from restricted public access to such tools is arguably greater.

When it comes to re-balancing the data ecosystems that made generative AI possible, much can be learned from age-old river management practices, with one important proviso: data not only carries traces of our past. It is also a powerful tool to envisage different futures. If data-powered technologies such as GPT4 are to live up to their potential, we would do well to invest in bottom-up empowerment infrastructure. Such infrastructure could not only facilitate the valorisation of and participation in the public domain. It could also help steer the (re)-development of ‘copyright as privilege’ in a way that is better able to address the varied circumstances of today’s original content creators…(More)”

Operationalizing digital self-determination


Paper by Stefaan G. Verhulst: “A proliferation of data-generating devices, sensors, and applications has led to unprecedented amounts of digital data. We live in an era of datafication, one in which life is increasingly quantified and transformed into intelligence for private or public benefit. When used responsibly, this offers new opportunities for public good. The potential of data is evident in the possibilities offered by open data and data collaboratives—both instances of how wider access to data can lead to positive and often dramatic social transformation. However, three key forms of asymmetry currently limit this potential, especially for already vulnerable and marginalized groups: data asymmetries, information asymmetries, and agency asymmetries. These asymmetries limit human potential, both in a practical and psychological sense, leading to feelings of disempowerment and eroding public trust in technology. Existing methods to limit asymmetries (such as open data or consent) as well as some alternatives under consideration (data ownership, collective ownership, personal information management systems) fall short of adequately addressing the challenges at hand. A new principle and practice of digital self-determination (DSD) is therefore required. The study and practice of DSD remain in their infancy. The characteristics we have outlined here are only exploratory, and much work remains to be done to better understand what works and what does not. We suggest the need for a new research framework or agenda to explore DSD and how it can address the asymmetries, imbalances, and inequalities—both in data and society more generally—that are emerging as key public policy challenges of our era…(More)”.

LGBTQ+ data availability


Report by Beyond Deng and Tara Watson: “LGBTQ+ (Lesbian, Gay, Bisexual, Transgender, Queer/Questioning) identification has doubled over the past decade, yet data on the overall LGBTQ+ population remains limited in large, nationally representative surveys such as the American Community Survey. These surveys are consistently used to understand the economic wellbeing of individuals, but they fail to fully capture information related to one’s sexual orientation and gender identity (SOGI).[1]

Asking incomplete SOGI questions leaves a gap in research that, if left unaddressed, will continue to grow in importance as the LGBTQ+ population increases, particularly among younger cohorts. In this report, we provide an overview of four large, nationally representative, and publicly accessible datasets that include information relevant for economic analysis: the Behavioral Risk Factor Surveillance System (BRFSS), the National Health Interview Survey (NHIS), the American Community Survey (ACS), and the Census Household Pulse Survey. Each survey varies by sample size, sample unit, periodicity, geography, and the SOGI information it collects.[2]

Differences in how these datasets collect SOGI information affect estimates of LGBTQ+ prevalence. While we find considerable differences in measured LGBT prevalence across datasets, each survey documents a substantial increase in non-straight identity over time. Figure 1 shows that this is largely driven by young adults, who have become increasingly likely to identify as LGBT over roughly the past decade. NHIS data show that around 4% of 18–24-year-olds identified as LGB in 2013, a share that rose to 9.5% by 2021. Because of the short time horizon in these surveys, it is unclear how the current young adult cohort will identify as they age. Despite this, an important takeaway is that younger age groups clearly represent a substantial portion of the LGB community and are important to incorporate in economic analyses…(More)”.
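
As a rough illustration of how prevalence figures like those above are typically computed from public-use survey microdata, the sketch below assumes a pooled file with hypothetical columns for age, survey year, self-reported sexual orientation, and a person-level survey weight; it is not the report's code or data.

```python
# Hypothetical sketch: survey-weighted LGB prevalence among 18-24-year-olds by
# survey year. Column names and category labels are illustrative assumptions.
import pandas as pd

df = pd.read_csv("survey_microdata.csv")  # hypothetical pooled microdata

# Flag respondents who identify as lesbian, gay, or bisexual.
df["lgb"] = df["sex_orient"].isin(["lesbian_or_gay", "bisexual"]).astype(int)

# Restrict to young adults and compute the weighted share by year.
young = df[(df["age"] >= 18) & (df["age"] <= 24)]
prevalence = young.groupby("year").apply(
    lambda g: (g["lgb"] * g["wt"]).sum() / g["wt"].sum()
)
print(prevalence)  # e.g., a rise from about 0.04 in 2013 to 0.095 in 2021
```

Note that complex survey designs also call for design-based standard errors (using strata and PSU variables), which this sketch omits.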

AI in Hiring and Evaluating Workers: What Americans Think


Pew Research Center survey: “… finds crosscurrents in the public’s opinions as they look at the possible uses of AI in workplaces. Americans are wary and sometimes worried. For instance, they oppose AI use in making final hiring decisions by a 71%-7% margin, and a majority also opposes AI analysis being used in making firing decisions. Pluralities oppose AI use in reviewing job applications and in determining whether a worker should be promoted. Beyond that, majorities do not support the idea of AI systems being used to track workers’ movements while they are at work or to keep track of when office workers are at their desks.

Yet there are instances where people think AI in workplaces would do better than humans. For example, 47% think AI would do better than humans at evaluating all job applicants in the same way, while a much smaller share – 15% – believe AI would be worse than humans in doing that. And among those who believe that bias along racial and ethnic lines is a problem in performance evaluations generally, more believe that greater use of AI by employers would make things better rather than worse in the hiring and worker-evaluation process. 

Overall, larger shares of Americans than not believe AI use in workplaces will significantly affect workers in general, but far fewer believe the use of AI in those places will have a major impact on them personally. Some 62% think the use of AI in the workplace will have a major impact on workers generally over the next 20 years. On the other hand, just 28% believe the use of AI will have a major impact on them personally, while roughly half believe there will be no impact on them or that the impact will be minor…(More)”.

Data property, data governance and Common European Data Spaces


Paper by Thomas Margoni, Charlotte Ducuing and Luca Schirru: “The Data Act proposal of February 2022 constitutes a central element of a broader and ambitious initiative of the European Commission (EC) to regulate the data economy through the erection of a new general regulatory framework for data and digital markets. The resulting framework may be represented as a model of governance between a pure market-driven model and a fully regulated approach, thereby combining elements that traditionally belong to private law (e.g., property rights, contracts) and public law (e.g., regulatory authorities, limitation of contractual freedom). This article discusses the role of (intellectual) property rights as well as of other forms of rights allocation in data legislation with particular attention to the Data Act proposal. We argue that the proposed Data Act has the potential to play a key role in the way in which data, especially privately held data, may be accessed, used, and shared. Nevertheless, it is only by looking at the whole body of data (and data related) legislation that the broader plan for a data economy can be grasped in its entirety. Additionally, the Data Act proposal may also arguably reveal the elements for a transition from a property-based to a governance-based paradigm in the EU data strategy. Whereas elements of data governance abound, the stickiness of property rights and rhetoric seems, however, hard to overcome. The resulting regulatory framework, at least for now, is therefore an interesting but not always perfectly coordinated mix of both. Finally, this article suggests that the Data Act Proposal may have missed the chance to properly address the issue of data holders’ power and related information asymmetries, as well as the need for coordination mechanisms…(More)”.

Africa fell in love with crypto. Now, it’s complicated


Article by Martin K.N Siele: “Chiamaka, a former product manager at a Nigerian cryptocurrency startup, has sworn off digital currencies. The 22-year-old has weathered a layoff and lost savings worth 4,603,500 naira ($9,900) after the collapse of FTX in November 2022. She now works for a corporate finance company in Lagos, earning a salary that is 45% lower than at her previous job.

“I used to be bullish on crypto because I believed it could liberate Africans financially,” Chiamaka, who asked to be identified by a pseudonym as she was concerned about breaching her contract with her current employer, told Rest of World. “Instead, it has managed to do the opposite so far … at least to me and a few of my friends.”

Chiamaka is among the tens of millions of Africans who bought into the cryptocurrency frenzy over the last few years. According to one estimate in mid-2022, around 53 million Africans owned crypto — 16.5% of the total global crypto users. Nigeria led with over 22 million users, ranking fourth globally. Blockchain startups and businesses on the continent raised $474 million in 2022, a 429% increase from the previous year, according to the African Blockchain Report. Young African creatives also became major proponents of non-fungible tokens (NFTs), taking inspiration from pop culture and the continent’s history. Several decentralized autonomous organizations (DAOs), touted as the next big thing, emerged across Africa…(More)”.

Accept All: Unacceptable? 


Report by Demos and Schillings: “…sought to investigate how our data footprints are being created and exploited online. It involved an exploratory investigation into how data sharing and data regulation practices are impacting citizens: looking into how individuals’ data footprints are created, what people experience when they want to exercise their data rights, and how they feel about how their data is being used. This was a novel approach, following live case studies as participants embarked on a data odyssey, in order to understand, in real time, the data challenges people face.

We then held a series of stakeholder roundtables with academics, lawyers, technologists, and people working in industry and civil society, which focused on diagnosing the problems and on what potential solutions already look like, or could look like in the future, across multiple stakeholder groups…(More)” See also the documentary produced alongside this report by the project partners, law firm Schillings and the independent consumer data action service Rightly, together with TVN.

End of data sharing could make Covid-19 harder to control, experts and high-risk patients warn


Article by Sam Whitehead: “…The federal government’s public health emergency that’s been in effect since January 2020 expires May 11. The emergency declaration allowed for sweeping changes in the U.S. health care system, like requiring state and local health departments, hospitals, and commercial labs to regularly share data with federal officials.

But some data-sharing requirements will come to an end, and the federal government will lose access to key metrics as a skeptical Congress seems unlikely to grant agencies additional powers. And private projects, like those from The New York Times and Johns Hopkins University, which made covid data understandable and useful for everyday people, stopped collecting data in March.

Public health legal scholars, data experts, former and current federal officials, and patients at high risk of severe covid outcomes worry the scaling back of data access could make it harder to control covid.

There have been improvements in recent years, such as major investments in public health infrastructure and updated data reporting requirements in some states. But concerns remain that the overall shambolic state of U.S. public health data infrastructure could hobble the response to any future threats.

“We’re all less safe when there’s not the national amassing of this information in a timely and coherent way,” said Anne Schuchat, former principal deputy director of the Centers for Disease Control and Prevention.

A lack of data in the early days of the pandemic left federal officials, like Schuchat, with an unclear picture of the rapidly spreading coronavirus. And even as the public health emergency opened the door for data-sharing, the CDC labored for months to expand its authority.

Eventually, more than a year into the pandemic, the CDC gained access to data from private health care settings, such as hospitals and nursing homes, commercial labs, and state and local health departments…(More)”. See also: Why we still need data to understand the COVID-19 pandemic