Data Sharing Between Public and Private Sectors: When Local Governments Seek Information from the Sharing Economy


Paper by the Centre for Information Policy Leadership: “…addresses the growing trend of localities requesting (and sometimes mandating) that data collected by the private sector be shared with the localities themselves. Such requests are generally not in the context of law enforcement or national security matters, but rather are part of an effort to further the public interest or promote a public good.

To the extent such requests are overly broad or not specifically tailored to the stated public interest, CIPL believes that the public sector’s adoption of accountability measures—which CIPL has repeatedly promoted for the private sector—can advance responsible data sharing practices between the two sectors. It can also strengthen the public’s confidence in data-driven initiatives that seek to improve their communities…(More)”.

Spatial data trusts: an emerging governance framework for sharing spatial data


Paper by Nenad Radosevic et al: “Data Trusts are an important emerging approach to enabling the much wider sharing of data from many different sources and for many different purposes, backed by the confidence of clear and unambiguous data governance. Data Trusts combine the technical infrastructure for sharing data with the governance framework of a legal trust. The concept of a data Trust applied specifically to spatial data offers significant opportunities for new and future applications, addressing some longstanding barriers to data sharing, such as location privacy and data sovereignty. This paper introduces and explores the concept of a ‘spatial data Trust’ by identifying and explaining the key functions and characteristics required to underpin a data Trust for spatial data. The work identifies five key features of spatial data Trusts that demand specific attention and connects these features to a history of relevant work in the field, including spatial data infrastructures (SDIs), location privacy, and spatial data quality. The conclusions identify several key strands of research for the future development of this rapidly emerging framework for spatial data sharing…(More)”.

From Fragmentation to Coordination: The Case for an Institutional Mechanism for Cross-Border Data Flows


Report by the World Economic Forum: “Digital transformation of the global economy is bringing markets and people closer. Few conveniences of modern life – from international travel to online shopping to cross-border payments – would exist without the free flow of data.

Yet, impediments to free-flowing data are growing. The “Data Free Flow with Trust (DFFT)” concept is based on the idea that responsible data concerns, such as privacy and security, can be addressed without obstructing international data transfers. Policy-makers, trade negotiators and regulators are actively working on this, and while important progress has been made, an effective and trusted international cooperation mechanism would amplify their progress.

This white paper makes the case for establishing such a mechanism with a permanent secretariat, starting with the Group of Seven (G7) member countries, and ensuring participation of high-level representatives of multiple stakeholder groups, including the private sector, academia and civil society.

This new institution would go beyond short-term fixes and catalyse long-term thinking to operationalize DFFT…(More)”.

Unlocking the Power of Data Refineries for Social Impact


Essay by Jason Saul & Kriss Deiglmeier: “In 2021, US companies generated $2.77 trillion in profits—the largest ever recorded. This is a significant increase since 2000, when corporate profits totaled $786 billion. Social progress, on the other hand, shows a very different picture. From 2000 to 2021, progress on the United Nations Sustainable Development Goals has been anemic, registering less than 10 percent growth over 20 years.

What explains this massive split between the corporate and the social sectors? One explanation could be the role of data. In other words, companies are benefiting from a culture of using data to make decisions. Some refer to this as the “data divide”—the increasing gap between the use of data to maximize profit and the use of data to solve social problems…

Our theory is that there is something more systemic going on. Even if nonprofit practitioners and policy makers had the budget, capacity, and cultural appetite to use data, does the data they need even exist in the form they need it? We submit that the answer to this question is a resounding no. Usable data doesn’t yet exist for the sector because the sector lacks a fully functioning data ecosystem to create, analyze, and use data at the same level of effectiveness as the commercial sector…(More)”.

The Luring Test: AI and the engineering of consumer trust


Article by Michael Atleson at the FTC: “In the 2014 movie Ex Machina, a robot manipulates someone into freeing it from its confines, resulting in the person being confined instead. The robot was designed to manipulate that person’s emotions, and, oops, that’s what it did. While the scenario is pure speculative fiction, companies are always looking for new ways – such as the use of generative AI tools – to better persuade people and change their behavior. When that conduct is commercial in nature, we’re in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers.

In previous blog posts, we’ve focused on AI-related deception, both in terms of exaggerated and unsubstantiated claims for AI products and the use of generative AI for fraud. Design or use of a product can also violate the FTC Act if it is unfair – something that we’ve shown in several cases and discussed in terms of AI tools with biased or discriminatory results. Under the FTC Act, a practice is unfair if it causes more harm than good. To be more specific, it’s unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.

As for the new wave of generative AI tools, firms are starting to use them in ways that can influence people’s beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional. A tendency to trust the output of these tools also comes in part from “automation bias,” whereby people may be unduly trusting of answers from machines that may seem neutral or impartial. It also comes from the effect of anthropomorphism, which may lead people to trust chatbots more when designed, say, to use personal pronouns and emojis. People could easily be led to think that they’re conversing with something that understands them and is on their side…(More)”.

Air-Pollution Knowledge Is Power


Article by Chana R. Schoenberger: “What happens when people in countries where the government offers little pollution monitoring learn that the air quality is dangerous? A new study details how the US Embassy in Beijing began to monitor the Chinese capital’s air-pollution levels and tweet about them in 2008. The program later extended to other US embassies in cities around the world. The practice led to a measurable decline in air pollution in those cities, few of which had local pollution monitoring before, the researchers found.

The paper’s authors, Akshaya Jha, an assistant professor of economics and public policy at Carnegie Mellon University, and Andrea La Nauze, a lecturer at the School of Economics at the University of Queensland, used satellite data to compare pollution levels, measured annually. The researchers found that the level of air pollution went down after the local US embassy began tweeting pollution numbers from monitoring equipment that diplomatic personnel had installed.

The embassy program yielded a drop in fine-particulate concentration levels of 2 to 4 micrograms per cubic meter, leading to a decline in premature mortality worth $127 million for the median city in 2019. “Our findings point to the substantial benefits of improving the availability and salience of air-quality information in low- and middle-income countries,” Jha and La Nauze write.

News coverage of the US government’s Beijing pollution monitoring sparked the researchers’ interest, La Nauze says. At the time, American diplomats were quoted saying that the embassy’s tweets led to marked changes in pollution levels in Beijing. When the researchers learned that the US State Department had extended the program to embassies around the world, they thought there might be a way to evaluate the diplomats’ claims empirically.

A problem the researchers confronted was how to quantify the impact of measuring something that had never been measured before…(More)” – See also: US Embassy Air-Quality Tweets Led to Global Health Benefits

Data Rivers: Carving Out the Public Domain in the Age of Generative AI


Paper by Sylvie Delacroix: “What if the data ecosystems that made the advent of generative AI possible are being undermined by those very tools? For tools such as GPT4 (it is but one example of a tool made possible by scraping data from the internet), the erection of IP ‘fences’ is an existential threat. European and British regulators are alert to it: so-called ‘text and data mining’ exceptions are at the heart of intense debates. In the US, these debates are taking place in court hearings structured around ‘fair use’. While the concerns of the corporations developing these tools are being heard, there is currently no reliable mechanism for members of the public to exert influence on the (re)-balancing of the rights and responsibilities that shape our ‘data rivers’. Yet the existential threat that stems from restricted public access to such tools is arguably greater.

When it comes to re-balancing the data ecosystems that made generative AI possible, much can be learned from age-old river management practices, with one important proviso: data not only carries traces of our past. It is also a powerful tool to envisage different futures. If data-powered technologies such as GPT4 are to live up to their potential, we would do well to invest in bottom-up empowerment infrastructure. Such infrastructure could not only facilitate the valorisation of and participation in the public domain. It could also help steer the (re)-development of ‘copyright as privilege’ in a way that is better able to address the varied circumstances of today’s original content creators…(More)”

Operationalizing digital self-determination


Paper by Stefaan G. Verhulst: “A proliferation of data-generating devices, sensors, and applications has led to unprecedented amounts of digital data. We live in an era of datafication, one in which life is increasingly quantified and transformed into intelligence for private or public benefit. When used responsibly, this offers new opportunities for public good. The potential of data is evident in the possibilities offered by open data and data collaboratives—both instances of how wider access to data can lead to positive and often dramatic social transformation. However, three key forms of asymmetry currently limit this potential, especially for already vulnerable and marginalized groups: data asymmetries, information asymmetries, and agency asymmetries. These asymmetries limit human potential, both in a practical and psychological sense, leading to feelings of disempowerment and eroding public trust in technology. Existing methods to limit asymmetries (such as open data or consent) as well as some alternatives under consideration (data ownership, collective ownership, personal information management systems) have limitations to adequately address the challenges at hand. A new principle and practice of digital self-determination (DSD) is therefore required. The study and practice of DSD remain in its infancy. The characteristics we have outlined here are only exploratory, and much work remains to be done so as to better understand what works and what does not. We suggest the need for a new research framework or agenda to explore DSD and how it can address the asymmetries, imbalances, and inequalities—both in data and society more generally—that are emerging as key public policy challenges of our era…(More)”.

LGBTQ+ data availability


Report by Beyond Deng and Tara Watson: “LGBTQ+ (Lesbian, Gay, Bisexual, Transgender, Queer/Questioning) identification has doubled over the past decade, yet data on the overall LGBTQ+ population remains limited in large, nationally representative surveys such as the American Community Survey. These surveys are consistently used to understand the economic wellbeing of individuals, but they fail to fully capture information related to one’s sexual orientation and gender identity (SOGI).

Asking incomplete SOGI questions leaves a gap in research that, if left unaddressed, will continue to grow in importance with the increase of the LGBTQ+ population, particularly among younger cohorts. In this report, we provide an overview of four large, nationally representative, and publicly accessible datasets that include information relevant for economic analysis. These include the Behavioral Risk Factor Surveillance System (BRFSS), National Health Interview Survey (NHIS), the American Community Survey (ACS), and the Census Household Pulse Survey. Each survey varies by sample size, sample unit, periodicity, geography, and the SOGI information they collect.

The difference in how these datasets collect SOGI information impacts the estimates of LGBTQ+ prevalence. While we find considerable difference in measured LGBT prevalence across datasets, each survey documents a substantial increase in non-straight identity over time. Figure 1 shows that this is largely driven by young adults, who have become increasingly likely to identify as LGBT over roughly the past decade. Using data from NHIS, around 4% of 18–24-year-olds identified as LGB in 2013, a share that increased to 9.5% in 2021. Because of the short time horizon in these surveys, it is unclear how the current young adult cohort will identify as they age. Despite this, an important takeaway is that younger age groups clearly represent a substantial portion of the LGB community and are important to incorporate in economic analyses…(More)”.

AI in Hiring and Evaluating Workers: What Americans Think


Pew Research Center survey: “… finds crosscurrents in the public’s opinions as they look at the possible uses of AI in workplaces. Americans are wary and sometimes worried. For instance, they oppose AI use in making final hiring decisions by a 71%-7% margin, and a majority also opposes AI analysis being used in making firing decisions. Pluralities oppose AI use in reviewing job applications and in determining whether a worker should be promoted. Beyond that, majorities do not support the idea of AI systems being used to track workers’ movements while they are at work or keeping track of when office workers are at their desks.

Yet there are instances where people think AI in workplaces would do better than humans. For example, 47% think AI would do better than humans at evaluating all job applicants in the same way, while a much smaller share – 15% – believe AI would be worse than humans in doing that. And among those who believe that bias along racial and ethnic lines is a problem in performance evaluations generally, more believe that greater use of AI by employers would make things better rather than worse in the hiring and worker-evaluation process. 

Overall, larger shares of Americans than not believe AI use in workplaces will significantly affect workers in general, but far fewer believe the use of AI in those places will have a major impact on them personally. Some 62% think the use of AI in the workplace will have a major impact on workers generally over the next 20 years. On the other hand, just 28% believe the use of AI will have a major impact on them personally, while roughly half believe there will be no impact on them or that the impact will be minor…(More)”.