Paper by Amy Whitaker: “While artificial intelligence (AI) stands to transform artistic practice and creative industries, little has been theorized about who owns AI for creative work. Lawsuits brought against AI companies such as OpenAI and Meta under copyright law invite novel reconsideration of the value of creative work. This paper synthesizes across copyright, hybrid practice, and cooperative governance to work toward collective ownership and decision-making. This paper adds to research in arts entrepreneurship because copyright and shared value are so vital to the livelihood of working artists, including writers, filmmakers, and others in the creative industries. Sarah Silverman’s lawsuit against OpenAI is used as the main case study. The conceptual framework of material and machine, one and many, offers a lens onto value creation and shared ownership of AI. The framework includes a reinterpretation of the fourth factor of fair use under U.S. copyright law to refocus on the doctrinal language of value. AI uses the entirety of creative work in a way that is overlooked because of the small scale of one whole work relative to the overall size of the AI model. Yet a theory of value for creative work gives it dignity in its smallness, the way that one vote still has dignity in a national election of millions. As we navigate these frontiers of AI, experimental models pioneered by artists may be instructive far outside the arts…(More)”.
Scientists around the world call to protect research on one of humanity’s greatest short-term threats – Disinformation
Forum on Democracy and Information: “At a critical time for understanding digital communications’ impact on societies, research on disinformation is endangered.
In August, researchers around the world bid farewell to CrowdTangle – the Meta-owned social media monitoring tool. Meta’s decision to shut down the most widely used platform for tracking mis- and disinformation during a major election year, offering only its alternative tool, the Meta Content Library and API, in its place, has been met with a barrage of criticism.
If, as suggested by the World Economic Forum’s 2024 global risk report, disinformation is one of the biggest short-term threats to humanity, our collective ability to understand how it spreads and impacts our society is crucial. Just as we would not impede scientific research into the spread of viruses and disease, into natural ecosystems, or into historical and social questions, disinformation research must be allowed to proceed unimpeded, with access to the information needed to understand its complexity. Understanding the political economy of disinformation as well as its technological dimensions is also a matter of public health, democratic resilience, and national security.
By directly affecting the research community’s ability to open social media black boxes, this radical decision will also, in turn, hamper public understanding of how technology affects democracy. Public interest scrutiny is also essential for the next era of technology, notably for the world’s largest AI systems, which are similarly proprietary and opaque. The research community is already calling on AI companies to learn from the mistakes of social media and guarantee protections for good faith research. The responsibility falls on multiple shoulders: the global scientific community, civil society, public institutions and philanthropies must come together to meaningfully foster and protect public interest research on information and democracy…(More)”.
Unlocking AI for All: The Case for Public Data Banks
Article by Kevin Frazier: “The data relied on by OpenAI, Google, Meta, and other artificial intelligence (AI) developers is not readily available to other AI labs. Google and Meta relied, in part, on data gathered from their own products to train and fine-tune their models. OpenAI used tactics to acquire data that would no longer work today or would be more likely to be found in violation of the law (whether those tactics violated the law when OpenAI originally used them is being worked out in the courts). Upstart labs as well as research outfits find themselves with a dearth of data. Full realization of the positive benefits of AI, such as being deployed in costly but publicly useful ways (think tutoring kids or identifying common illnesses), as well as complete identification of the negative possibilities of AI (think perpetuating cultural biases), requires that labs other than the big players have access to sufficient, high-quality data.
The proper response is not to return to an exploitative status quo. Google, for example, may have relied on data from YouTube videos without meaningful consent from users. OpenAI may have hoovered up copyrighted data with little regard for the legal and social ramifications of that approach. In response to these questionable approaches, data has (rightfully) become harder to acquire. Cloudflare has equipped websites with the tools necessary to limit data scraping—the process of extracting data from another computer program. Regulators have developed new legal limits on data scraping or enforced old ones. Data owners have become more defensive over their content and, in some cases, more litigious. All of these largely positive developments from the perspective of data creators (which is to say, anyone and everyone who uses the internet) diminish the odds of newcomers entering the AI space. The creation of a public AI training data bank is necessary to ensure the availability of enough data for upstart labs and public research entities. Such banks would prevent those new entrants from having to go down the costly and legally questionable path of trying to hoover up as much data as possible…(More)”.
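To make those scraping limits concrete, here is a minimal Python sketch of a crawler that checks a site's robots.txt before fetching a page, one of the machine-readable controls sites now use to keep data harvesters out. The site URL and crawler name are hypothetical placeholders, not details from the article.

```python
# Minimal sketch: consult a site's robots.txt before scraping.
# The site and user-agent below are hypothetical placeholders.
from urllib import robotparser

SITE = "https://example.com"        # hypothetical site
USER_AGENT = "ExampleResearchBot"   # hypothetical crawler name

rp = robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()  # fetches and parses the site's robots.txt

url = f"{SITE}/articles/some-page.html"
if rp.can_fetch(USER_AGENT, url):
    print(f"{USER_AGENT} may fetch {url}")
else:
    print(f"{url} is off-limits to {USER_AGENT}; skip it")
```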
The Deletion Remedy
Paper by Daniel Wilf-Townsend: “A new remedy has emerged in the world of technology governance. Where someone has wrongfully obtained or used data, this remedy requires them to delete not only that data, but also to delete tools such as machine learning models that they have created using the data. Model deletion, also called algorithmic disgorgement or algorithmic destruction, has been increasingly sought in both private litigation and public enforcement actions. As its proponents note, model deletion can improve the regulation of privacy, intellectual property, and artificial intelligence by providing more effective deterrence and better management of ongoing harms.
But, this article argues, model deletion has a serious flaw: in its current form, it risks being a grossly disproportionate penalty. Model deletion requires the destruction of models whose training included illicit data in any degree, with no consideration of how much (or even whether) that data contributed to any wrongful gains or ongoing harms. Model deletion could thereby cause unjust losses in litigation and chill useful technologies.
This article works toward a well-balanced doctrine of model deletion by building on the remedy’s equitable origins. It identifies how traditional considerations in equity—such as a defendant’s knowledge and culpability, the balance of the hardships, and the availability of more tailored alternatives—can be applied in model deletion cases to mitigate problems of disproportionality. By accounting for proportionality, courts and agencies can develop a doctrine of model deletion that takes advantage of its benefits while limiting its potential excesses…(More)”.
Zillow introduces First Street’s comprehensive climate risk data on for-sale listings across the US
Press Release: “Zillow® is introducing climate risk data, provided by First Street…Home shoppers will gain insights into five key risks—flood, wildfire, wind, heat and air quality—directly from listing pages, complete with risk scores, interactive maps and insurance requirements.
With more than 80% of buyers now considering climate risks when purchasing a home, this feature provides a clearer understanding of potential hazards, helping buyers to better assess long-term affordability and plan for the future. To help buyers navigate the growing risk of climate change, Zillow is the only platform to feature tailored insurance recommendations alongside detailed historical insights, showing if or when a property has experienced past climate events, such as flooding or wildfires…
When using Zillow’s search map view, home shoppers can explore climate risk data through an interactive map highlighting five key risk categories: flood, wildfire, wind, heat and air quality. Each risk is color-coded and has its own color scale, helping consumers intuitively navigate their search. Informative labels give more context to climate data and link to First Street’s property-specific climate risk reports for full insights.
When viewing a for-sale property on Zillow, home shoppers will see a new climate risk section. This section includes a separate module for each risk category—flood, wildfire, wind, heat and air quality—giving detailed, property-specific data from First Street. This section not only shows how these risks might affect the home now and in the future, but also provides crucial information on wind, fire and flood insurance requirements.
Nationwide, more new listings came with major climate risk than homes listed for sale five years ago, according to a Zillow analysis conducted in August. That trend holds true for all five of the climate risk categories Zillow analyzed. Across all new listings in August, 16.7% were at major risk of wildfire, while 12.8% came with a major risk of flooding…(More)”.
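As a rough illustration of the listing-page climate risk section described above, the sketch below models one property's risk modules in Python. The field names, score scale, and values are assumptions made for illustration, not Zillow's or First Street's actual schema.

```python
# Hypothetical model of a listing's climate risk section; field names,
# the 1-10 score scale, and all values are illustrative assumptions.
from dataclasses import dataclass

RISK_CATEGORIES = ("flood", "wildfire", "wind", "heat", "air quality")

@dataclass
class RiskModule:
    category: str             # one of the five categories above
    score: int                # assumed 1 (minimal) to 10 (extreme) scale
    insurance_required: bool  # whether separate coverage is typically required
    past_events: list         # historical events affecting the property, if any

@dataclass
class ListingClimateRisk:
    address: str
    modules: list

example = ListingClimateRisk(
    address="123 Example St",  # hypothetical property
    modules=[
        RiskModule("flood", score=7, insurance_required=True,
                   past_events=["flooded in 2019"]),
        RiskModule("wildfire", score=2, insurance_required=False,
                   past_events=[]),
    ],
)

for m in example.modules:
    print(f"{m.category}: score {m.score}, insurance required: {m.insurance_required}")
```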
The paradox of climate data in West Africa: growing urgency coupled with diminishing accessibility
Cirad: “In 2022, a prolonged drought devastated maize crops in northern Burkina Faso, leaving two million people without sufficient food resources. This dramatic situation could have been better anticipated and its impacts could have been mitigated with the collection and equitable sharing of specific data: that of agrometeorology, the science that studies the effects of meteorological, climatological and hydrological factors on crops.
Although it is too late to prevent the 2022 drought, protecting people from future droughts remains an urgent priority, especially in Africa, a continent where climate change poses a serious threat to rainfed agriculture, its main agricultural and economic activity.
To anticipate these climate risks, it is essential to have access to reliable meteorological data, which is crucial for ensuring sustainable and resilient agricultural practices. Yet in West Africa, the accessibility and reliability of this data are increasingly threatened and face unprecedented diplomatic, economic and security challenges…(More)”.
Harnessing digital footprint data for population health: a discussion on collaboration, challenges and opportunities in the UK
Paper by Romana Burgess et al: “Digital footprint data are inspiring a new era in population health and well-being research. Linking these novel data with other datasets is critical for future research wishing to use these data for the public good. To succeed, collaboration among industry, academics and policy-makers is vital. Therefore, we discuss the benefits and obstacles for these stakeholder groups in using digital footprint data for research in the UK. We advocate for policy-makers’ inclusion in research efforts, stress the exceptional potential of digital footprint research to impact policy-making and explore the role of industry as data providers, with a focus on shared value, commercial sensitivity, resource requirements and streamlined processes. We underscore the importance of multidisciplinary approaches, consumer trust and ethical considerations in navigating methodological challenges and further call for increased public engagement to enhance societal acceptability. Finally, we discuss how to overcome methodological challenges, such as reproducibility and sharing of learnings, in future collaborations. By adopting a multiperspective approach to outlining the challenges of working with digital footprint data, our contribution helps to ensure that future research can navigate these challenges effectively while remaining reproducible, ethical and impactful…(More)”
Federal Court Invalidates NYC Law Requiring Food Delivery Apps to Share Customer Data with Restaurants
Article by Hunton Andrews Kurth: “On September 24, 2024, a federal district court held that New York City’s “Customer Data Law” violates the First Amendment. Passed in the summer of 2021, the law requires food-delivery apps to share customer-specific data with restaurants that prepare delivered meals.
The New York City Council enacted the Customer Data Law to boost the local restaurant industry in the wake of the pandemic. The law requires food-delivery apps to provide restaurants (upon the restaurants’ request) with each diner’s full name, email address, phone number, delivery address, and order contents. Customers may opt out of such sharing. The law’s supporters argue that requiring such disclosure addresses exploitation by the delivery apps and helps restaurants advertise more effectively.
Normally, when a customer places an order through a food-delivery app, the app provides the restaurant with the customer’s first name, last initial and food order. Food-delivery apps share aggregate data analytics with restaurants but generally do not share customer-specific data beyond the information necessary to fulfill an order. Some apps, for example, provide restaurants with data related to their menu performance, customer feedback and daily operations.
Major food-delivery app companies challenged the Customer Data Law, arguing that its data sharing requirement compels speech impermissibly under the First Amendment. Siding with the apps, the U.S. District Court for the Southern District of New York declared the city’s law invalid, holding that its data sharing requirement is not appropriately tailored to a substantial government interest…(More)”.
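For illustration, the sketch below contrasts the order data a delivery app typically shares with a restaurant against the customer-specific fields the (now-invalidated) Customer Data Law required apps to hand over on request, subject to the customer opt-out. All names, values, and field labels are hypothetical, not drawn from any app's actual payloads.

```python
# Hypothetical contrast between default sharing practice and what the
# Customer Data Law required; all fields and values are invented.
default_shared = {
    "customer": "Jane D.",                 # first name and last initial only
    "order": ["pad thai", "spring rolls"],
}

required_by_law = {
    "full_name": "Jane Doe",
    "email": "jane@example.com",
    "phone": "+1-555-0100",
    "delivery_address": "123 Example St, New York, NY",
    "order": ["pad thai", "spring rolls"],
}

def share_with_restaurant(requested: bool, opted_out: bool) -> dict:
    """Fields the app would pass along under the law, versus by default."""
    if requested and not opted_out:
        return required_by_law
    return default_shared

print(share_with_restaurant(requested=True, opted_out=False))
print(share_with_restaurant(requested=True, opted_out=True))
```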
Climate and health data website launched
Article by Susan Cosier: “A new website of data resources, tools, and training materials that can aid researchers in studying the consequences of climate change on the health of communities nationwide is now available. At the end of July, NIEHS launched the Climate and Health Outcomes Research Data Systems (CHORDS) website, which includes a catalog of environmental and health outcomes data from various government and nongovernmental agencies.
The website provides a few resources of interest, including a catalog of data resources to aid researchers in finding relevant data for their specific research projects; an online training toolkit that provides tutorials and walk-throughs of downloading, integrating, and visualizing health and environmental data; a listing of publications of note on wildfire and health research; and links to existing resources, such as the NIEHS climate change and health glossary and literature portal.
The catalog includes a listing of dozens of data resources provided by different federal and state environmental and health sources. Users can sort the listing based on environmental and health measures of interest — such as specific air pollutants or chemicals — from data providers including NASA and the U.S. Environmental Protection Agency, with many more to come…(More)”.
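As a minimal sketch of the kind of filtering the CHORDS catalog supports, the Python snippet below matches resources to an environmental or health measure and a data provider of interest. The entries, field names, and function are invented placeholders, not actual CHORDS records or an official API.

```python
# Hypothetical catalog entries and filter; not actual CHORDS data or an API.
catalog = [
    {"name": "Air quality grid (example)", "provider": "EPA",
     "measures": ["PM2.5", "ozone"]},
    {"name": "Satellite heat index (example)", "provider": "NASA",
     "measures": ["land surface temperature"]},
    {"name": "Hospital admissions (example)", "provider": "State health dept.",
     "measures": ["asthma admissions"]},
]

def find_resources(measure=None, provider=None):
    """Return catalog entries matching a measure and/or provider."""
    results = []
    for entry in catalog:
        if measure and measure not in entry["measures"]:
            continue
        if provider and entry["provider"] != provider:
            continue
        results.append(entry)
    return results

print(find_resources(measure="PM2.5"))
print(find_resources(provider="NASA"))
```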
Rethinking ‘Checks and Balances’ for the A.I. Age
Article by Steve Lohr: “A new project, orchestrated by Stanford University and published on Tuesday, is inspired by the Federalist Papers and contends that today is a broadly similar historical moment of economic and political upheaval that calls for a rethinking of society’s institutional arrangements.
In an introduction to its collection of 12 essays, called the Digitalist Papers, the editors overseeing the project, including Erik Brynjolfsson, director of the Stanford Digital Economy Lab, and Condoleezza Rice, secretary of state in the George W. Bush administration and director of the Hoover Institution, identify their overarching concern.
“A powerful new technology, artificial intelligence,” they write, “explodes onto the scene and threatens to transform, for better or worse, all legacy social institutions.”
The most common theme in the diverse collection of essays: Citizens need to be more involved in determining how to regulate and incorporate A.I. into their lives. “To build A.I. for the people, with the people,” as one essay summed it up.
The project is being published as the technology is racing ahead. A.I. enthusiasts see a future of higher economic growth, increased prosperity and a faster pace of scientific discovery. But the technology is also raising fears of a dystopian alternative — A.I. chatbots and automated software not only replacing millions of workers, but also generating limitless misinformation and worsening political polarization. How to govern and guide A.I. in the public interest remains an open question…(More)”.