Chapter by Stefaan Verhulst in the Handbook of Media and Communication Governance, edited by Manuel Puppis, Robin Mansell, and Hilde Van den Bulck: “The internet and the accompanying datafication were heralded as ushering in a golden era of disintermediation. Instead, the modern data ecology witnessed a process of remediation, or ‘hyper-mediation’, resulting in governance challenges, many of which underlie broader socioeconomic difficulties. In particular, the rise of data asymmetries and silos creates new forms of scarcity and dominance with deleterious political, economic and cultural consequences. Responding to these challenges requires a new data governance framework, focused on unlocking data and developing a more pluralistic data ecosystem. We argue for regulation and policy focused on promoting data collaboratives, an emerging form of cross-sectoral partnership; and on the establishment of data stewards, individuals/groups tasked with managing and responsibly sharing organizations’ data assets. Some regulatory steps are discussed, along with the various ways in which these two emerging stakeholders can help alleviate data scarcities and their associated problems…(More)”
Regulating the Direction of Innovation
Paper by Joshua S. Gans: “This paper examines the regulation of technological innovation direction under uncertainty about potential harms. We develop a model with two competing technological paths and analyze various regulatory interventions. Our findings show that market forces tend to inefficiently concentrate research on leading paths. We demonstrate that ex post regulatory instruments, particularly liability regimes, outperform ex ante restrictions in most scenarios. The optimal regulatory approach depends critically on the magnitude of potential harm relative to technological benefits. Our analysis reveals subtle insurance motives in resource allocation across research paths, challenging common intuitions about diversification. These insights have important implications for regulating emerging technologies like artificial intelligence, suggesting the need for flexible, adaptive regulatory frameworks…(More)”.
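To make the paper's setup concrete, here is a minimal toy simulation in Python, not the model from the paper: effort is split between a leading path and an alternative, both with diminishing returns, and the leading path carries an expected harm that firms ignore unless a liability rule makes them pay for it. The functional forms, parameter values, and the 0.5 ex ante cap are all illustrative assumptions.

```python
# Toy two-path R&D allocation sketch (illustrative only; not the paper's model).
# A unit of research effort is split between a leading path A and an alternative B.
# Path A has a higher private return but carries a probability of social harm.

def welfare(x_a, benefit_a=1.0, benefit_b=0.6, p_harm=0.2, harm=2.0):
    """Social welfare from allocating share x_a of effort to path A."""
    x_b = 1.0 - x_a
    # Diminishing returns on each path (square-root production is an assumption).
    gross = benefit_a * x_a ** 0.5 + benefit_b * x_b ** 0.5
    return gross - p_harm * harm * x_a

def private_payoff(x_a, benefit_a=1.0, benefit_b=0.6, liability=0.0,
                   p_harm=0.2, harm=2.0):
    """Private payoff; harm is ignored unless an ex post liability rule applies."""
    x_b = 1.0 - x_a
    return (benefit_a * x_a ** 0.5 + benefit_b * x_b ** 0.5
            - liability * p_harm * harm * x_a)

def argmax(payoff, grid=1001):
    """Grid search for the payoff-maximizing share on path A."""
    return max((i / (grid - 1) for i in range(grid)), key=payoff)

market = argmax(lambda x: private_payoff(x))                  # no regulation
ex_ante = min(argmax(lambda x: private_payoff(x)), 0.5)       # hard cap on the leading path
ex_post = argmax(lambda x: private_payoff(x, liability=1.0))  # liability internalizes harm

for label, x in [("market", market), ("ex ante cap", ex_ante), ("ex post liability", ex_post)]:
    print(f"{label:18s} share on leading path = {x:.2f}, welfare = {welfare(x):.3f}")
```

Under these assumed parameters the liability rule reproduces the welfare-maximizing allocation, while the hard cap only partially corrects the market's over-concentration on the leading path, consistent with the abstract's claim that ex post instruments tend to outperform ex ante restrictions.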
Civic Monitoring for Environmental Law Enforcement
Book by Anna Berti Suman: “This book presents a thought-provoking inquiry demonstrating how civic environmental monitoring can support law enforcement. It provides an in-depth analysis of applicable legal frameworks and conventions such as the Aarhus Convention, with an enlightening discussion on the civic right to contribute environmental information.
Civic Monitoring for Environmental Law Enforcement discusses multi- and interdisciplinary research into how civil society uses monitoring techniques to gather evidence of environmental issues. The book argues that civic monitoring is a constructive approach for finding evidence of environmental wrongdoings and for leveraging this evidence in different institutional fora, including judicial proceedings and official reporting for environmental protection agencies. It also reveals the challenges and implications associated with a greater reliance on civic monitoring practices by institutions and society at large.
Adopting original methodological approaches to drive inspiration for further research, this book is an invaluable resource for students and scholars of environmental governance and regulation, environmental law, politics and policy, and science and technology studies. It is also beneficial to civil society actors, civic initiatives, legal practitioners, and policymakers working in institutions engaged in the application of environmental law…(More)”
Using AI to Map Urban Change
Brief by Tianyuan Huang, Zejia Wu, Jiajun Wu, Jackelyn Hwang, Ram Rajagopal: “Cities are constantly evolving, and a better understanding of those changes facilitates urban planning and infrastructure assessment and leads to more sustainable social and environmental interventions. Researchers currently use data such as satellite imagery to study changing urban environments and what those changes mean for public policy and urban design. But flaws in the current approaches, such as inadequately granular data, limit their scalability and their potential to inform public policy across social, political, economic, and environmental issues.
Street-level images offer an alternative source of insights. These images are frequently updated and high-resolution. They also directly capture what’s happening on a street level in a neighborhood or across a city. Analyzing street-level images has already proven useful to researchers studying socioeconomic attributes and neighborhood gentrification, both of which are essential pieces of information in urban design, sustainability efforts, and public policy decision-making for cities. Yet, much like other data sources, street-level images present challenges: accessibility limits, shadow and lighting issues, and difficulties scaling up analysis.
To address these challenges, our paper “CityPulse: Fine-Grained Assessment of Urban Change with Street View Time Series” introduces a multicity dataset of labeled street-view images and proposes a novel artificial intelligence (AI) model to detect urban changes such as gentrification. We demonstrate the change-detection model’s effectiveness by testing it on images from Seattle, Washington, and show that it can provide important insights into urban changes over time and at scale. Our data-driven approach has the potential to allow researchers and public policy analysts to automate and scale up their analysis of neighborhood and citywide socioeconomic change…(More)”.
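As a rough illustration of the general approach (not the CityPulse model itself, which the paper describes in detail), a change detector over a street-view time series can be sketched as a Siamese-style comparison of embeddings from a pretrained image encoder. The backbone choice, similarity threshold, and file names below are assumptions for illustration only.

```python
# Illustrative sketch: a generic Siamese-style change detector for a pair of
# street-view images of the same location, taken at two points in time.
import torch
from torchvision.models import resnet18, ResNet18_Weights
from PIL import Image

weights = ResNet18_Weights.DEFAULT
encoder = resnet18(weights=weights)
encoder.fc = torch.nn.Identity()   # keep the 512-d embedding, drop the classifier
encoder.eval()
preprocess = weights.transforms()

def embed(path: str) -> torch.Tensor:
    """Embed one street-view image with the pretrained backbone."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return encoder(img).squeeze(0)

def changed(path_before: str, path_after: str, threshold: float = 0.85) -> bool:
    """Flag a likely physical change when the two views' embeddings diverge.
    The threshold is an arbitrary placeholder; a real system would learn it."""
    sim = torch.nn.functional.cosine_similarity(
        embed(path_before), embed(path_after), dim=0
    ).item()
    return sim < threshold

# Hypothetical usage (file names are placeholders):
# print(changed("seattle_2014_block12.jpg", "seattle_2021_block12.jpg"))
```

A fixed threshold over off-the-shelf embeddings is only a starting point; the paper's model is instead trained on the labeled multicity dataset it introduces.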
Visualization for Public Involvement
Report by the National Academies of Sciences, Engineering, and Medicine: “Visualization methods have long been integral to the public involvement process for transportation planning and project development. From well-established methods such as conceptual sketches or photo simulations to the latest immersive technologies, state departments of transportation (DOTs) recognize that visualizations can significantly increase public understanding of a project’s appearance and physical impacts. Emerging methods such as interactive three-dimensional environments, virtual reality, and augmented reality can dramatically enhance public understanding of transportation options and design concepts…(More)”.
Rejecting Public Utility Data Monopolies
Paper by Amy L. Stein: “The threat of monopoly power looms large today. Although not the telecommunications and tobacco monopolies of old, the Goliaths of Big Tech have become today’s target for potential antitrust violations. It is not only their control over the social media infrastructure and digital advertising technologies that gives people pause, but their monopolistic collection, use, and sale of customer data. But large technology companies are not the only private companies that have exclusive access to your data; that can crowd out competitors; and that can hold, use, or sell your data with little to no regulation. These other private companies are not data companies, platforms, or even brokers. They are public utilities.
Although termed “public utilities,” these entities are overwhelmingly private, shareholder-owned entities. Like private Big Tech, utilities gather incredible amounts of data from customers and use this data in various ways. And like private Big Tech, these utilities can exercise exclusionary and self-dealing anticompetitive behavior with respect to customer data. But there is one critical difference—unlike Big Tech, utilities enjoy an implied immunity from antitrust laws. This state action immunity has historically applied to utility provision of essential services like electricity and heat. As utilities find themselves in the position of unsuspecting data stewards, however, there is a real and unexplored question about whether their long-enjoyed antitrust immunity should extend to their data practices.
As the first exploration of this question, this Article tests the continuing application and rationale of the state action immunity doctrine to the evolving services that a utility provides as the grid becomes digitized. It demonstrates the importance of staunching the creep of state action immunity over utility data practices. And it recognizes the challenges of developing remedies for such data practices that do not disrupt the state-sanctioned monopoly powers of utilities over the provision of essential services. This Article analyzes both antitrust and regulatory remedies, including a new customer-focused “data duty,” as possible mechanisms to enhance consumer (ratepayer) welfare in this space. Exposing utility data practices to potential antitrust liability may be just the lever that is needed to motivate states, public utility commissions, and utilities to develop a more robust marketplace for energy data…(More)”.
This is AI’s brain on AI
Article by Alison Snyder: “Data to train AI models increasingly comes from other AI models in the form of synthetic data, which can fill in chatbots’ knowledge gaps but also destabilize them.
The big picture: As AI models expand in size, their need for data becomes insatiable — but high-quality human-made data is costly, and growing restrictions on the text, images and other kinds of data freely available on the web are driving the technology’s developers toward machine-produced alternatives.
State of play: AI-generated data has been used for years to supplement data in some fields, including medical imaging and computer vision, that use proprietary or private data.
- But chatbots are trained on public data collected from across the internet that is increasingly being restricted — while at the same time, the web is expected to be flooded with AI-generated content.
Those constraints and the decreasing cost of generating synthetic data are spurring companies to use AI-generated data to help train their models.
- Meta, Google, Anthropic and others are using synthetic data — alongside human-generated data — to help train the AI models that power their chatbots.
- Google DeepMind’s new AlphaGeometry 2 system that can solve math Olympiad problems is trained from scratch on synthetic data…(More)”
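For a sense of what this looks like in practice, here is a minimal, hedged sketch of blending model-generated examples with human-written ones before fine-tuning. The `generate` stub, mixing ratio, and filtering rule are placeholders for illustration, not any company's actual pipeline.

```python
# Minimal sketch of mixing synthetic and human-written examples for fine-tuning.
import random

def generate(prompt: str) -> str:
    """Stand-in for a call to a text-generation model; a real pipeline would
    query an LLM here."""
    return f"Synthetic draft responding to: {prompt} " + "placeholder " * 25

def build_training_set(human_examples, topics, synthetic_ratio=0.3, seed=0):
    """Blend human-labeled examples with model-generated ones, keeping provenance."""
    random.seed(seed)
    n_synthetic = int(len(human_examples) * synthetic_ratio)
    synthetic = []
    for topic in random.choices(topics, k=n_synthetic):
        answer = generate(f"Write a concise, factual answer about {topic}.")
        # Basic quality filter: drop degenerate or near-empty generations.
        if answer and len(answer.split()) >= 20:
            synthetic.append({"text": answer, "source": "synthetic", "topic": topic})
    blended = [dict(ex, source="human") for ex in human_examples] + synthetic
    random.shuffle(blended)
    return blended

# Tagging every record with its source keeps synthetic data auditable and makes it
# possible to rebalance or remove it later if it starts to degrade model quality.
```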
Generative Discrimination: What Happens When Generative AI Exhibits Bias, and What Can Be Done About It
Paper by Philipp Hacker, Frederik Zuiderveen Borgesius, Brent Mittelstadt and Sandra Wachter: “Generative AI (genAI) technologies, while beneficial, risk increasing discrimination by producing demeaning content and subtle biases through inadequate representation of protected groups. This chapter examines these issues, categorizing problematic outputs into three legal categories: discriminatory content; harassment; and legally hard cases like harmful stereotypes. It argues for holding genAI providers and deployers liable for discriminatory outputs and highlights the inadequacy of traditional legal frameworks to address genAI-specific issues. The chapter suggests updating EU laws to mitigate biases in training and input data, mandating testing and auditing, and evolving legislation to enforce standards for bias mitigation and inclusivity as technology advances…(More)”.
A.I. May Save Us, or May Construct Viruses to Kill Us
Article by Nicholas Kristof: “Here’s a bargain of the most horrifying kind: For less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.
That’s the conclusion of Jason Matheny, the president of the RAND Corporation, a think tank that studies security matters and other issues.
“It wouldn’t cost more to create a pathogen that’s capable of killing hundreds of millions of people versus a pathogen that’s only capable of killing hundreds of thousands of people,” Matheny told me.
In contrast, he noted, it could cost billions of dollars to produce a new vaccine or antiviral in response…
In the early 2000s, some of us worried about smallpox being reintroduced as a bioweapon if the virus were stolen from the labs in Atlanta and in Russia’s Novosibirsk region that have retained the virus since the disease was eradicated. But with synthetic biology, now it wouldn’t have to be stolen.
Some years ago, a research team created a cousin of the smallpox virus, horsepox, in six months for $100,000, and with A.I. it could be easier and cheaper to refine the virus.
One reason biological weapons haven’t been much used is that they can boomerang. If Russia released a virus in Ukraine, it could spread to Russia. But a retired Chinese general has raised the possibility of biological warfare that targets particular races or ethnicities (probably imperfectly), which would make bioweapons much more useful. Alternatively, it might be possible to develop a virus that would kill or incapacitate a particular person, such as a troublesome president or ambassador, if one had obtained that person’s DNA at a dinner or reception.
Assessments of ethnic-targeting research by China are classified, but they may be why the U.S. Defense Department has said that the most important long-term threat of biowarfare comes from China.
A.I. has a more hopeful side as well, of course. It holds the promise of improving education, reducing auto accidents, curing cancers and developing miraculous new pharmaceuticals.
One of the best-known benefits is in protein folding, which can lead to revolutionary advances in medical care. Scientists used to spend years or decades figuring out the shapes of individual proteins, and then a Google initiative called AlphaFold was introduced that could predict the shapes within minutes. “It’s Google Maps for biology,” Kent Walker, president of global affairs at Google, told me.
Scientists have since used updated versions of AlphaFold to work on pharmaceuticals including a vaccine against malaria, one of the greatest killers of humans throughout history.
So it’s unclear whether A.I. will save us or kill us first…(More)”.
Future-proofing government data
Article by Amy Jones: “Vast amounts of data are fueling innovation and decision-making, and agencies representing the United States government are custodians of some of the largest repositories of data in the world. As one of the world’s largest data creators and consumers, the federal government has made substantial investments in sourcing, curating, and leveraging data across many domains. However, the increasing reliance on artificial intelligence to extract insights and drive efficiencies necessitates a strategic pivot: agencies must evolve their data management practices to identify and distinguish synthetic data from organic sources to safeguard the integrity and utility of data assets.
AI’s transformative potential is contingent on the availability of high-quality data. Data readiness includes attention to quality, accuracy, completeness, consistency, timeliness and relevance, at a minimum, and agencies are adopting robust data governance frameworks that enforce data quality standards at every stage of the data lifecycle. This includes implementing advanced data validation techniques, fostering a culture of data stewardship, and leveraging state-of-the-art tools for continuous data quality monitoring…(More)”.
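As a loose illustration of what such continuous quality monitoring might check (the column names, thresholds, and provenance labels below are hypothetical, not drawn from the article), a validation pass could score a dataset on completeness, consistency, timeliness, and synthetic-versus-organic provenance:

```python
# Illustrative sketch of automated data-readiness checks; column names and
# thresholds are placeholders, not an agency standard.
import pandas as pd

def quality_report(df: pd.DataFrame, required=("record_id", "value", "updated_at"),
                   max_age_days=365) -> dict:
    """Score a dataset on a few of the readiness dimensions named above."""
    report = {}
    # Completeness: share of non-null cells in the required columns.
    present = [c for c in required if c in df.columns]
    report["missing_columns"] = sorted(set(required) - set(present))
    report["completeness"] = float(df[present].notna().mean().mean()) if present else 0.0
    # Consistency: duplicate identifiers suggest conflicting records.
    if "record_id" in df.columns:
        report["duplicate_ids"] = int(df["record_id"].duplicated().sum())
    # Timeliness: share of records updated within the allowed window.
    if "updated_at" in df.columns:
        age = pd.Timestamp.now(tz="UTC") - pd.to_datetime(df["updated_at"], utc=True, errors="coerce")
        report["fresh_share"] = float((age <= pd.Timedelta(days=max_age_days)).mean())
    # Provenance: flag records not labeled as organic or synthetic, supporting
    # the synthetic-vs-organic distinction discussed above.
    if "provenance" in df.columns:
        report["unlabeled_provenance"] = int((~df["provenance"].isin(["organic", "synthetic"])).sum())
    return report
```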