Paper by Guang Han and Meredith T. Niles: “The potential for farmers and agriculture to sequester carbon and contribute to global climate change goals is widely discussed. However, there is currently low participation in agricultural carbon markets and a limited understanding of farmer perceptions and willingness to participate. Furthermore, farmers’ concerns regarding data privacy may complicate participation in agricultural carbon markets, since participation necessitates sharing farm data with multiple entities. This study aims to address research gaps by assessing farmers’ willingness to participate in agricultural carbon markets, identifying the determinants of farmers’ willingness regarding carbon market participation, and exploring how farmers’ concerns about data privacy relate to potential participation in agricultural carbon markets. Data were collected through a multistate survey of 246 farmers and analyzed using descriptive statistics, factor analysis, and multinomial regression models. We find that the majority of farmers (71.8%) are aware of carbon markets and would like to sell carbon credits, but they express high uncertainty about carbon market information, policies, markets, and cost impacts. Just over half of farmers indicated they would share their data for education, developing tools and models, and improving markets and supply chains. Farmers who wanted to participate in carbon markets were more likely to have higher farm revenues, more likely to share their data overall, more likely to share their data with private organizations, more likely to change farming practices, and held more positive perceptions of the impact of carbon markets on farm profitability. In conclusion, farmers have a general interest in carbon market participation, but more information is needed to address their uncertainties and concerns…(More)”.
Journalism Is a Public Good and Should Be Publicly Funded
Essay by Patrick Walters: “News deserts” have proliferated across the U.S. Half of the nation’s more than 3,140 counties now have only one newspaper—and nearly 200 of them have no paper at all. Of the publications that survive, researchers have found many are “ghosts” of their former selves.
Journalism has problems nationally: CNN announced hundreds of layoffs at the end of 2022, and National Geographic laid off the last of its staff writers this June. That same month, the Los Angeles Times cut 13 percent of its newsroom staff. But the crisis is even more acute at the local level, with jobs in local news plunging from 71,000 in 2008 to 31,000 in 2020. Closures and cutbacks often leave people without reliable sources that can provide them with what the American Press Institute has described as “the information they need to make the best possible decisions about their daily lives.”
Americans need to understand that journalism is a vital public good—one that, like roads, bridges and schools, is worthy of taxpayer support. We are already seeing the disastrous effects of allowing news to disintegrate in the free market: namely, a steady supply of misinformation, often masquerading as legitimate news, and too many communities left without a quality source of local news. Former New York Times public editor Margaret Sullivan has called this a “crisis of American democracy.”
The terms “crisis” and “collapse” have become nearly ubiquitous in the past decade when describing the state of American journalism, which has been based on a for-profit commercial model since the rise of the “penny press” in the 1830s. Now that commercial model has collapsed amid the near disappearance of print advertising. Digital ads have not come close to closing the gap because Google and other platforms have “hoovered up everything,” as Emily Bell, founding director of the Tow Center for Digital Journalism at Columbia University, told the Nieman Journalism Lab in a 2018 interview. In June the newspaper chain Gannett sued Google’s parent company, alleging it has created an advertising monopoly that has devastated the news industry.
Other journalism models—including nonprofits such as MinnPost, collaborative efforts such as Broke in Philly and citizen journalism—have had some success in fulfilling what Lewis Friedland of the University of Wisconsin–Madison called “critical community information needs” in a chapter of the 2016 book The Communication Crisis in America, and How to Fix It. Friedland classified those needs as falling in eight areas: emergencies and risks, health and welfare, education, transportation, economic opportunities, the environment, civic information and political information. Nevertheless, these models have proven incapable of fully filling the void, as shown by the dearth of quality information during the early years of the COVID pandemic. Scholar Michelle Ferrier and others have worked to bring attention to how news deserts leave many rural and urban areas “impoverished by the lack of fresh, daily local news and information,” as Ferrier wrote in a 2018 article. A recent study also found evidence that U.S. judicial districts with lower newspaper circulation were likely to see fewer public corruption prosecutions.
A growing chorus of voices is now calling for government-funded journalism, a model that many in the profession have long seen as problematic…(More)”.
Innovation Can Reboot American Democracy
Blog by Suzette Brooks Masters: “A thriving multiracial pluralist democracy is an aspiration that many people share for America. Far from being inevitable, the path to such a future is uncertain.
To stretch how we think about American democracy’s future iterations and begin to imagine the contours of the new, we need to learn from what’s emergent. So I’m going to take you on a whirlwind tour of some experiments taking place here and abroad that are the bright spots illuminating possible futures ahead.
My comments are informed by a research report I wrote last year called Imagining Better Futures for American Democracy. I interviewed dozens of visionaries in a range of fields and with diverse perspectives about the future of our democracy and the role positive visioning and futures thinking could play in reinvigorating it.
As I discuss these bright spots, I want to emphasize that what is most certain now is the accelerating and destabilizing change we are experiencing. It’s critical therefore to develop systems, institutions, norms and mindsets to navigate that change boldly and responsibly, not pretend that tomorrow will continue to look like today.
Yet when paradigms shift, as they inevitably do and as I would argue they are doing right now, it is a messy and confusing time that can cause lots of anxiety and disorientation. During these critical periods of transition, we must set aside, or “hospice,” some assumptions, mindsets, practices, and institutions, while midwifing, or welcoming in, new ones.
This is difficult to do in the best of times but can be especially so when, collectively, we suffer from a lack of imagination and vision about what American democracy could and should become.
It’s not all our fault — inertia, fear, distrust, cynicism, diagnosis paralysis, polarization, exceptionalism, parochialism, and a pervasive, dystopian media environment are dragging us down. They create very strong headwinds weakening both our appetite and our ability to dream bigger and imagine better futures ahead.
However, focusing on and amplifying promising innovations can change that dysfunctional dynamic by inspiring us and providing blueprints to act upon when the time is right.
Below I discuss two main types of innovations in the political sphere: election-related structural reforms and governance reforms, including new forms of civic engagement and government decision-making…(More)”.
The Eyewitness Community Survey: An Engaging Citizen Science Tool to Capture Reliable Data while Improving Community Participants’ Environmental Health Knowledge and Attitudes
Paper by Melinda Butsch Kovacic: “Many youths and young adults have variable environmental health knowledge, limited understanding of their local environment’s impact on their health, and poor environmentally friendly behaviors. We sought to develop and test a tool to reliably capture data, increase environmental health knowledge, and engage youths as citizen scientists to examine and take action on their community’s challenges. The Eyewitness Community Survey (ECS) was developed through several iterations of co-design. Herein, we tested its performance. In Phase I, seven youths audited five 360° photographs. In Phase II, 27 participants worked in pairs/trios and audited five locations, typically 7 days apart. Inter-rater and intra-rater reliability were determined. Changes in participants’ knowledge, attitudes, behaviors, and self-efficacy were surveyed. Feedback was obtained via focus groups. Intra-rater reliability was in the substantial/near-perfect range, with Phase II having greater consistency. Inter-rater reliability was high, with 42% of Phase I and 63% of Phase II kappa values in the substantial/near-perfect range. Knowledge scores improved after making observations (p ≤ 0.032). Participants (85%) reported the tool to be easy/very easy to use, with 70% willing to use it again. Thus, the ECS is a mutually beneficial citizen science tool that rigorously captures environmental data and provides engaging experiential learning opportunities…(More)”.
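The inter-rater agreement statistic the abstract reports (Cohen’s kappa, with the conventional “substantial”/“near-perfect” benchmarks) can be computed with a minimal sketch. The audit labels below are hypothetical illustrations, not the study’s data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical yes/no audit labels from two raters for ten locations
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
b = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes"]
kappa = cohens_kappa(a, b)  # about 0.58: "moderate" on the usual benchmarks
```

On the Landis–Koch scale used in such studies, 0.61–0.80 counts as “substantial” and 0.81–1.00 as “near-perfect,” which is the range the paper reports for most of its kappa values.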
Leveraging Social Media Data for Emergency Preparedness and Response
Report by the National Academies of Sciences, Engineering, and Medicine: “Most state departments of transportation (DOTs) use social media to broadcast information and monitor emergencies, but few rely heavily on social media data. The most common barriers to using social media for emergencies are personnel availability and training, privacy issues, and data reliability.
NCHRP Synthesis 610: Leveraging Social Media Data for Emergency Preparedness and Response, from TRB’s National Cooperative Highway Research Program, documents state DOT practices that leverage social media data for emergency preparedness, response, and recovery…(More)”.
How Statisticians Should Grapple with Privacy in a Changing Data Landscape
Article by Joshua Snoke and Claire McKay Bowen: “Suppose you had a data set that contained records of individuals, including demographics such as their age, sex, and race. Suppose also that these data contained additional in-depth personal information, such as financial records, health status, or political opinions. Finally, suppose that you wanted to glean relevant insights from these data using machine learning, causal inference, or survey sampling adjustments. What methods would you use? What best practices would you ensure you followed? Where would you seek information to help guide you in this process?…(More)”
To Save Society from Digital Tech, Enable Scrutiny of How Policies Are Implemented
Article by Ido Sivan-Sevilla: “…there is little discussion about how to create accountability when implementing tech policies. Decades of research exploring policy implementation across diverse areas consistently show how successful implementation allows policies to be adapted and involves crucial bargaining. But this is rarely understood in the tech sector. For tech policies to work, those responsible for enforcement and compliance should be overseen and held to account. Otherwise, as history shows, tech policies will struggle to fulfill the intentions of their policymakers.
Scrutiny is required for three types of actors. First are regulators, who convert promising tech laws into enforcement practices but are often ill-equipped for their mission. My recent research found that across Europe, the rigor and methods of national privacy regulators tasked with enforcing the European Union’s GDPR vary greatly. The French data protection authority, for instance, proactively monitors for privacy violations and strictly sanctions companies that overstep; in contrast, Bulgarian authorities monitor passively and are hesitant to act. Reflecting on the first five years of the GDPR, Max Schrems, the chair of privacy watchdog NOYB, found authorities and courts reluctant to enforce the law, and companies free to take advantage: “It often feels like there is more energy spent in undermining the GDPR than in complying with it.” Variations in resources and technical expertise among regulators create regulatory arbitrage that the regulated eagerly exploit.
Tech companies are the second type of actor requiring scrutiny. Service providers such as Google, Meta, and Twitter, along with lesser-known technology companies, mediate digital services for billions around the world but enjoy considerable latitude on how and whether they comply with tech policies. Civil society groups, for instance, uncovered how Meta was trying to bypass the GDPR and use personal information for advertising…(More)”.
Attacks on Tax Privacy: How the Tax Prep Industry Enabled Meta to Harvest Millions of Taxpayers’ Sensitive Data
Congressional Report: “The investigation revealed that:
- Tax preparation companies shared millions of taxpayers’ data with Meta, Google, and other Big Tech firms: The tax prep companies used computer code – known as pixels – to send data to Meta and Google. While most websites use pixels, it is particularly reckless for online tax preparation websites to use them on webpages where tax return information is entered unless further steps are taken to ensure that the pixels do not access sensitive information. TaxAct, TaxSlayer, and H&R Block confirmed that they had used the Meta Pixel, and had been using it “for at least a couple of years” and all three companies had been using Google Analytics (GA) for even longer.
- Tax prep companies shared extraordinarily sensitive personal and financial information with Meta, which used the data for diverse advertising purposes: TaxAct, H&R Block, and TaxSlayer each revealed, in response to this Congressional inquiry, that they shared taxpayer data via their use of the Meta Pixel and Google’s tools. Although the tax prep companies and Big Tech firms claimed that all shared data was anonymous, the FTC and experts have indicated that the data could easily be used to identify individuals, or to create a dossier on them that could be used for targeted advertising or other purposes.
- Tax prep companies and Big Tech firms were reckless about their data sharing practices and their treatment of sensitive taxpayer data: The tax prep companies indicated that they installed the Meta and Google tools on their websites without fully understanding the extent to which they would send taxpayer data to these tech firms, without consulting with independent compliance or privacy experts, and without full knowledge of Meta’s use of and disposition of the data.
- Tax prep companies may have violated taxpayer privacy laws by sharing taxpayer data with Big Tech firms: Under the law, “a tax return preparer may not disclose or use a taxpayer’s tax return information prior to obtaining a written consent from the taxpayer,” – and they failed to do so when it came to the information that was turned over to Meta and Google. Tax prep companies can also turn over data to “auxiliary service providers in connection with the preparation of a tax return.” But Meta and Google likely do not meet the definition of “auxiliary service providers” and the data sharing with Meta was for advertising purposes – not “in connection with the preparation of a tax return.”…(More)”.
Combining Human Expertise with Artificial Intelligence: Experimental Evidence from Radiology
Paper by Nikhil Agarwal, Alex Moehring, Pranav Rajpurkar & Tobias Salz: “While Artificial Intelligence (AI) algorithms have achieved performance levels comparable to human experts on various predictive tasks, human experts can still access valuable contextual information not yet incorporated into AI predictions. Humans assisted by AI predictions could outperform either humans alone or AI alone. We conduct an experiment with professional radiologists that varies the availability of AI assistance and contextual information to study the effectiveness of human-AI collaboration and to investigate how to optimize it. Our findings reveal that (i) providing AI predictions does not uniformly increase diagnostic quality, and (ii) providing contextual information does increase quality. Radiologists do not fully capitalize on the potential gains from AI assistance because of large deviations from the benchmark Bayesian model with correct belief updating. The observed errors in belief updating can be explained by radiologists’ partially underweighting the AI’s information relative to their own and not accounting for the correlation between their own information and AI predictions. In light of these biases, we design a collaborative system between radiologists and AI. Our results demonstrate that, unless the documented mistakes can be corrected, the optimal solution involves assigning cases either to humans or to AI, but rarely to a human assisted by AI…(More)”.
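The two belief-updating errors the abstract names (underweighting the AI signal and ignoring the correlation between the radiologist’s and the AI’s information) can be sketched in a toy Gaussian model. All numbers here, including the variances, the noise correlation, and the underweighting factor, are illustrative assumptions, not estimates from the paper.

```python
# Toy model: latent case severity theta ~ N(0, tau2). The radiologist sees
# s_h = theta + e_h and the AI reports s_ai = theta + e_ai, where the noise
# terms have variances var_h, var_ai and covariance cov (their information
# overlaps). The benchmark updater accounts for that covariance; the biased
# updater ignores it and discounts the AI's precision by ai_weight.

def bayes_posterior_mean(s_h, s_ai, tau2=1.0, var_h=1.0, var_ai=0.5, cov=0.25):
    """Correct posterior mean of theta given two correlated noisy signals."""
    det = var_h * var_ai - cov**2
    num = ((var_ai - cov) * s_h + (var_h - cov) * s_ai) / det
    prec = 1.0 / tau2 + (var_h + var_ai - 2 * cov) / det
    return num / prec

def biased_posterior_mean(s_h, s_ai, tau2=1.0, var_h=1.0, var_ai=0.5,
                          ai_weight=0.5):
    """Biased update: treats the signals as independent and scales the AI
    signal's precision down by ai_weight (underweighting)."""
    prec_h, prec_ai = 1.0 / var_h, ai_weight / var_ai
    prec = 1.0 / tau2 + prec_h + prec_ai
    return (prec_h * s_h + prec_ai * s_ai) / prec

# When only the AI flags a concern, the biased update moves less than the
# benchmark, mirroring the underweighting the experiment documents.
benchmark = bayes_posterior_mean(0.0, 1.0)
biased = biased_posterior_mean(0.0, 1.0)
```

Under these assumed parameters, an informative AI signal shifts the biased posterior noticeably less than the Bayesian benchmark, which is the flavor of deviation the paper measures.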
How Good Are Privacy Guarantees? Platform Architecture and Violation of User Privacy
Paper by Daron Acemoglu, Alireza Fallah, Ali Makhdoumi, Azarakhsh Malekian & Asuman Ozdaglar: “Many platforms deploy data collected from users for a multitude of purposes. While some are beneficial to users, others are costly to their privacy. The presence of these privacy costs means that platforms may need to provide guarantees about how and to what extent user data will be harvested for activities such as targeted ads, individualized pricing, and sales to third parties. In this paper, we build a multi-stage model in which users decide whether to share their data based on privacy guarantees. We first introduce a novel mask-shuffle mechanism and prove it is Pareto optimal—meaning that it leaks the least about the users’ data for any given leakage about the underlying common parameter. We then show that under any mask-shuffle mechanism, there exists a unique equilibrium in which privacy guarantees balance privacy costs and utility gains from the pooling of user data for purposes such as assessment of health risks or product development. Paradoxically, we show that as users’ value of pooled data increases, the equilibrium of the game leads to lower user welfare. This is because platforms take advantage of this change to reduce privacy guarantees so much that user utility declines (whereas it would have increased with a given mechanism). Even more strikingly, we show that platforms have incentives to choose data architectures that systematically differ from those that are optimal from the user’s point of view. In particular, we identify a class of pivot mechanisms, linking individual privacy to choices by others, which platforms prefer to implement and which make users significantly worse off…(More)”.
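To give a feel for the mask-shuffle idea described above, here is a toy sketch under my own simplified assumptions: each user adds a random mask, the masks are constructed to sum to zero so the aggregate (the “common parameter” the platform wants) is preserved, and the reports are then shuffled to break the link between users and values. The paper’s formal mechanism and its Pareto-optimality proof are more subtle than this illustration.

```python
import random

def mask_shuffle(values, scale=10.0, seed=0):
    """Toy mask-shuffle: zero-sum random masks hide individual values while
    preserving the sum exactly; shuffling breaks user-report linkage."""
    rng = random.Random(seed)
    n = len(values)
    masks = [rng.uniform(-scale, scale) for _ in range(n - 1)]
    masks.append(-sum(masks))  # final mask makes the masks sum to zero
    reports = [v + m for v, m in zip(values, masks)]
    rng.shuffle(reports)       # sever the link between users and reports
    return reports

data = [3.0, 7.0, 5.0, 9.0]            # hypothetical user values
reports = mask_shuffle(data)
# The aggregate survives, so pooled-data uses (e.g., estimating a common
# parameter) still work, while any single report reveals little about its user.
assert abs(sum(reports) - sum(data)) < 1e-9
```

In the paper’s terms, the “leakage” trade-off is between what the platform learns about the common parameter (here, the sum) and what it can infer about any individual value; this sketch only conveys the masking-plus-shuffling intuition, not the optimal mechanism itself.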