Data Protection Law and Emotion


Book by Damian Clifford: “Data protection law is often positioned as a regulatory solution to the risks posed by computational systems. Despite the widespread adoption of data protection laws, however, there are those who remain sceptical as to their capacity to engender change. Much of this criticism focuses on our role as ‘data subjects’. It has been demonstrated repeatedly that we lack the capacity to act in our own best interests and, what is more, that our decisions have negative impacts on others. Our decision-making limitations seem to be the inevitable by-product of the technological, social, and economic reality. Data protection law bakes in these limitations by providing frameworks for notions such as consent and subjective control rights and by relying on those who process our data to do so fairly.

Despite these valid concerns, Data Protection Law and Emotion argues that the (in)effectiveness of these laws is often more difficult to discern than the critical literature would suggest, while also emphasizing the conceptual value of subjective control. These points are explored (and indeed, exposed) by investigating data protection law through the lens of the insights provided by law and emotion scholarship and demonstrating the role emotions play in our decision-making. The book uses the development of Emotional Artificial Intelligence, a particularly controversial technology, as a case study to analyse these issues.

Original and insightful, Data Protection Law and Emotion offers a unique contribution to a contentious debate that will appeal to students and academics in data protection and privacy, policymakers, practitioners, and regulators…(More)”.

Relational ethics in health care automation


Paper by Frances Shaw and Anthony McCosker: “Despite the transformative potential of automation and clinical decision support technology in health care, there is growing urgency for more nuanced approaches to ethics. Relational ethics is an approach that can guide the responsible use of a range of automated decision-making systems including the use of generative artificial intelligence and large language models as they affect health care relationships. 

There is an urgent need for sector-wide training and scrutiny regarding the effects of automation using relational ethics touchstones, such as patient-centred health care, informed consent, patient autonomy, shared decision-making, empathy and the politics of care.

The purpose of this review is to offer a provocation for health care practitioners, managers and policy makers to consider the use of automated tools in practice settings and examine how these tools might affect relationships and, hence, care outcomes…(More)”.

Modeling Cities and Regions as Complex Systems


Book by Roger White, Guy Engelen and Inge Uljee: “Cities and regions grow (or occasionally decline), and continuously transform themselves as they do so. This book describes the theory and practice of modeling the spatial dynamics of urban growth and transformation. As cities are complex, adaptive, self-organizing systems, the most appropriate modeling framework is one based on the theory of self-organizing systems—an approach already used in such fields as physics and ecology. The book presents a series of models, most of them developed using cellular automata (CA), which are inherently spatial and computationally efficient. It also provides discussions of the theoretical, methodological, and philosophical issues that arise from the models. A case study illustrates the use of these models in urban and regional planning. Finally, the book presents a new, dynamic theory of urban spatial structure that emerges from the models and their applications.

The models are primarily land use models, but the more advanced ones also show the dynamics of population and economic activities, and are integrated with models in other domains such as economics, demography, and transportation. The result is a rich and realistic representation of the spatial dynamics of a variety of urban phenomena. The book is unique in its coverage of both the general issues associated with complex self-organizing systems and the specifics of designing and implementing models of such systems…(More)”.
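The book's models are far richer than any toy example, but the core mechanism of a constrained land-use cellular automaton can be sketched briefly. In the spirit of the White–Engelen approach, each vacant cell acquires a transition potential from the land uses in its neighborhood, and an externally imposed demand (as if supplied by a regional model) caps how many cells convert per step. All states, weights, and the demand figure below are illustrative assumptions, not parameters from the book.

```python
import random

# A minimal sketch of a constrained cellular automaton for land-use change.
# States, attraction weights, and the demand constraint are illustrative
# assumptions, not parameters taken from the book's models.
VACANT, RESIDENTIAL, COMMERCIAL = 0, 1, 2
SIZE = 20
ATTRACTION = {RESIDENTIAL: 1.0, COMMERCIAL: 2.0}  # pull exerted by neighbors

def neighborhood_potential(grid, x, y):
    """Transition potential of a vacant cell: weighted count of active
    neighbors in the 3x3 Moore neighborhood, plus a small stochastic term."""
    score = 0.0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nx, ny = (x + dx) % SIZE, (y + dy) % SIZE  # toroidal wrap
            score += ATTRACTION.get(grid[ny][nx], 0.0)
    return score + random.random() * 0.1

def step(grid, demand=5):
    """Convert the `demand` highest-potential vacant cells to residential use.
    The demand cap is what makes the CA 'constrained': growth totals come
    from outside the grid, e.g., from a regional demographic model."""
    vacant = [(x, y) for y in range(SIZE) for x in range(SIZE) if grid[y][x] == VACANT]
    vacant.sort(key=lambda c: neighborhood_potential(grid, *c), reverse=True)
    for x, y in vacant[:demand]:
        grid[y][x] = RESIDENTIAL
    return grid

grid = [[VACANT] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = COMMERCIAL  # a seed 'town center'
for _ in range(10):
    step(grid)
occupied = sum(cell != VACANT for row in grid for cell in row)
print(occupied)  # 51 cells: the seed plus 10 steps of 5 conversions each
```

Even this skeleton shows the book's central point: growth clusters around existing activity rather than scattering uniformly, a self-organizing pattern that emerges from purely local rules.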

Constructing Valid Geospatial Tools for Environmental Justice


Report from the National Academies of Sciences, Engineering, and Medicine: “Decades of research have shown that the most disadvantaged communities exist at the intersection of high levels of hazard exposure, racial and ethnic marginalization, and poverty.

Mapping and geographical information systems have been crucial for analyzing the environmental burdens of marginalized communities, and several federal and state geospatial tools have emerged to help address environmental justice concerns — such as the Climate and Economic Justice Screening Tool developed in 2022 in response to Justice40 initiatives from the Biden administration.

Constructing Valid Geospatial Tools for Environmental Justice, a new report from the National Academies of Sciences, Engineering, and Medicine, offers recommendations for developing environmental justice tools that reflect the experiences of the communities they measure.

The report recommends data strategies focused on community engagement, validation, and documentation. It emphasizes using a structured development process and offers guidance for selecting and assessing indicators, integrating indicators, and incorporating cumulative impact scoring. Tool developers should choose measures of economic burden beyond the federal poverty level that account for additional dimensions of wealth and geographic variations in cost of living. They should also use indicators that measure the impacts of racism in policies and practices that have led to current disparities…(More)”.

Governing mediation in the data ecosystem: lessons from media governance for overcoming data asymmetries


Chapter by Stefaan Verhulst in Handbook of Media and Communication Governance edited by Manuel Puppis, Robin Mansell, and Hilde Van den Bulck: “The internet and the accompanying datafication were heralded to usher in a golden era of disintermediation. Instead, the modern data ecology witnessed a process of remediation, or ‘hyper-mediation’, resulting in governance challenges, many of which underlie broader socioeconomic difficulties. In particular, the rise of data asymmetries and silos creates new forms of scarcity and dominance with deleterious political, economic and cultural consequences. Responding to these challenges requires a new data governance framework, focused on unlocking data and developing a more data pluralistic ecosystem. We argue for regulation and policy focused on promoting data collaboratives, an emerging form of cross-sectoral partnership; and on the establishment of data stewards, individuals/groups tasked with managing and responsibly sharing organizations’ data assets. Some regulatory steps are discussed, along with the various ways in which these two emerging stakeholders can help alleviate data scarcities and their associated problems…(More)”

Using AI to Map Urban Change


Brief by Tianyuan Huang, Zejia Wu, Jiajun Wu, Jackelyn Hwang, Ram Rajagopal: “Cities are constantly evolving, and a better understanding of those changes facilitates improved urban planning and infrastructure assessments and leads to more sustainable social and environmental interventions. Researchers currently use data such as satellite imagery to study changing urban environments and what those changes mean for public policy and urban design. But flaws in the current approaches, such as inadequately granular data, limit their scalability and their potential to inform public policy across social, political, economic, and environmental issues.

Street-level images offer an alternative source of insights. These images are frequently updated and high-resolution. They also directly capture what’s happening on a street level in a neighborhood or across a city. Analyzing street-level images has already proven useful to researchers studying socioeconomic attributes and neighborhood gentrification, both of which are essential pieces of information in urban design, sustainability efforts, and public policy decision-making for cities. Yet, much like other data sources, street-level images present challenges: accessibility limits, shadow and lighting issues, and difficulties scaling up analysis.

To address these challenges, our paper “CityPulse: Fine-Grained Assessment of Urban Change with Street View Time Series” introduces a multicity dataset of labeled street-view images and proposes a novel artificial intelligence (AI) model to detect urban changes such as gentrification. We demonstrate the change-detection model’s effectiveness by testing it on images from Seattle, Washington, and show that it can provide important insights into urban changes over time and at scale. Our data-driven approach has the potential to allow researchers and public policy analysts to automate and scale up their analysis of neighborhood and citywide socioeconomic change…(More)”.
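The CityPulse paper itself trains a learned (Siamese) model on labeled street-view pairs, but the underlying idea of change detection over an image time series can be illustrated with a simple stand-in: embed each image as a feature vector, then flag the capture dates between which consecutive embeddings diverge beyond a threshold. The toy embeddings, years, and threshold below are assumptions for demonstration only, not data or parameters from the paper.

```python
import math

# Illustrative skeleton of change detection over a street-view time series.
# A real system would use learned image embeddings; the vectors, years, and
# threshold here are made-up stand-ins for demonstration purposes.

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def detect_changes(embeddings, timestamps, threshold=0.3):
    """Return (earlier, later) timestamp pairs between which the scene's
    embedding shifted by more than `threshold`."""
    changes = []
    for prev in range(len(embeddings) - 1):
        if cosine_distance(embeddings[prev], embeddings[prev + 1]) > threshold:
            changes.append((timestamps[prev], timestamps[prev + 1]))
    return changes

# Toy embeddings for one location photographed in four different years:
series = [
    [1.0, 0.1, 0.0],  # 2012
    [0.9, 0.2, 0.1],  # 2015: visually similar scene
    [0.1, 0.9, 0.8],  # 2018: new construction dominates the view
    [0.1, 1.0, 0.9],  # 2021: similar to 2018
]
years = [2012, 2015, 2018, 2021]
print(detect_changes(series, years))  # flags the 2015-to-2018 transition
```

Scaling this pattern across millions of image pairs, with a model trained to ignore nuisance variation like shadows and lighting, is what lets an approach like CityPulse surface neighborhood-level change citywide.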

Rejecting Public Utility Data Monopolies


Paper by Amy L. Stein: “The threat of monopoly power looms large today. Although not the telecommunications and tobacco monopolies of old, the Goliaths of Big Tech have become today’s target for potential antitrust violations. It is not only their control over the social media infrastructure and digital advertising technologies that gives people pause, but their monopolistic collection, use, and sale of customer data. But large technology companies are not the only private companies that have exclusive access to your data; that can crowd out competitors; and that can hold, use, or sell your data with little to no regulation. These other private companies are not data companies, platforms, or even brokers. They are public utilities.

Although termed “public utilities,” these entities are overwhelmingly private, shareholder-owned entities. Like private Big Tech, utilities gather incredible amounts of data from customers and use this data in various ways. And like private Big Tech, these utilities can exercise exclusionary and self-dealing anticompetitive behavior with respect to customer data. But there is one critical difference — unlike Big Tech, utilities enjoy an implied immunity from antitrust laws. This state action immunity has historically applied to utility provision of essential services like electricity and heat. As utilities find themselves in the position of unsuspecting data stewards, however, there is a real and unexplored question about whether their long-enjoyed antitrust immunity should extend to their data practices.

As the first exploration of this question, this Article tests the continuing application and rationale of the state action immunity doctrine to the evolving services that a utility provides as the grid becomes digitized. It demonstrates the importance of staunching the creep of state action immunity over utility data practices. And it recognizes the challenges of developing remedies for such data practices that do not disrupt the state-sanctioned monopoly powers of utilities over the provision of essential services. This Article analyzes both antitrust and regulatory remedies, including a new customer-focused “data duty,” as possible mechanisms to enhance consumer (ratepayer) welfare in this space. Exposing utility data practices to potential antitrust liability may be just the lever that is needed to motivate states, public utility commissions, and utilities to develop a more robust marketplace for energy data…(More)”.

This is AI’s brain on AI


Article by Alison Snyder: “Data to train AI models increasingly comes from other AI models in the form of synthetic data, which can fill in chatbots’ knowledge gaps but also destabilize them.

The big picture: As AI models expand in size, their need for data becomes insatiable — but high-quality human-made data is costly, and growing restrictions on the text, images and other kinds of data freely available on the web are driving the technology’s developers toward machine-produced alternatives.

State of play: AI-generated data has been used for years to supplement data in some fields, including medical imaging and computer vision, that use proprietary or private data.

  • But chatbots are trained on public data collected from across the internet that is increasingly being restricted — while at the same time, the web is expected to be flooded with AI-generated content.

Those constraints and the decreasing cost of generating synthetic data are spurring companies to use AI-generated data to help train their models.

  • Meta, Google, Anthropic and others are using synthetic data — alongside human-generated data — to help train the AI models that power their chatbots.
  • Google DeepMind’s new AlphaGeometry 2 system, which can solve math Olympiad problems, is trained from scratch on synthetic data…(More)”

Generative Discrimination: What Happens When Generative AI Exhibits Bias, and What Can Be Done About It


Paper by Philipp Hacker, Frederik Zuiderveen Borgesius, Brent Mittelstadt and Sandra Wachter: “Generative AI (genAI) technologies, while beneficial, risk increasing discrimination by producing demeaning content and subtle biases through inadequate representation of protected groups. This chapter examines these issues, categorizing problematic outputs into three legal categories: discriminatory content; harassment; and legally hard cases like harmful stereotypes. It argues for holding genAI providers and deployers liable for discriminatory outputs and highlights the inadequacy of traditional legal frameworks to address genAI-specific issues. The chapter suggests updating EU laws to mitigate biases in training and input data, mandating testing and auditing, and evolving legislation to enforce standards for bias mitigation and inclusivity as technology advances…(More)”.

A.I. May Save Us, or May Construct Viruses to Kill Us


Article by Nicholas Kristof: “Here’s a bargain of the most horrifying kind: For less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.

That’s the conclusion of Jason Matheny, the president of the RAND Corporation, a think tank that studies security matters and other issues.

“It wouldn’t cost more to create a pathogen that’s capable of killing hundreds of millions of people versus a pathogen that’s only capable of killing hundreds of thousands of people,” Matheny told me.

In contrast, he noted, it could cost billions of dollars to produce a new vaccine or antiviral in response…

In the early 2000s, some of us worried about smallpox being reintroduced as a bioweapon if the virus were stolen from the labs in Atlanta or Russia’s Novosibirsk region that have retained it since the disease was eradicated. But with synthetic biology, it would no longer have to be stolen.

Some years ago, a research team created a cousin of the smallpox virus, horse pox, in six months for $100,000, and with A.I. it could be easier and cheaper to refine the virus.

One reason biological weapons haven’t been much used is that they can boomerang. If Russia released a virus in Ukraine, it could spread to Russia. But a retired Chinese general has raised the possibility of biological warfare that targets particular races or ethnicities (probably imperfectly), which would make bioweapons much more useful. Alternatively, it might be possible to develop a virus that would kill or incapacitate a particular person, such as a troublesome president or ambassador, if one had obtained that person’s DNA at a dinner or reception.

Assessments of ethnic-targeting research by China are classified, but they may be why the U.S. Defense Department has said that the most important long-term threat of biowarfare comes from China.

A.I. has a more hopeful side as well, of course. It holds the promise of improving education, reducing auto accidents, curing cancers and developing miraculous new pharmaceuticals.

One of the best-known benefits is in protein folding, which can lead to revolutionary advances in medical care. Scientists used to spend years or decades figuring out the shapes of individual proteins, and then a Google initiative called AlphaFold was introduced that could predict the shapes within minutes. “It’s Google Maps for biology,” Kent Walker, president of global affairs at Google, told me.

Scientists have since used updated versions of AlphaFold to work on pharmaceuticals including a vaccine against malaria, one of the greatest killers of humans throughout history.

So it’s unclear whether A.I. will save us or kill us first…(More)”.