Roadside safety messages increase crashes by distracting drivers


Article by Jonathan Hall and Joshua Madsen: “Behavioural interventions involve gently suggesting that people reconsider or change specific undesirable behaviours. They are a low-cost, easy-to-implement and increasingly common tool used by policymakers to encourage socially desirable behaviours.

Examples of behavioural interventions include telling people how their electricity usage compares to their neighbours or sending text messages reminding people to pay fines.

Many of these interventions are expressly designed to “seize people’s attention” at a time when they can take the desired action. Unfortunately, seizing people’s attention can crowd out other, more important considerations, and cause even a simple intervention to backfire with costly individual and social consequences.

One such behavioural intervention struck us as odd: Several U.S. states display year-to-date fatality statistics (number of deaths) on roadside dynamic message signs (DMSs). The hope is that these sobering messages will reduce traffic crashes, a leading cause of death among five- to 29-year-olds worldwide. Perhaps because of its low cost and ease of implementation, at least 28 U.S. states have displayed fatality statistics at least once since 2012. We estimate that approximately 90 million drivers have been exposed to such messages.

A roadside dynamic message sign in Texas displaying the year-to-date death toll from road crashes (“1669 DEATHS THIS YEAR ON TEXAS ROADS”). (Jonathan Hall), Author provided

Startling results

As academic researchers with backgrounds in information disclosure and transportation policy, we teamed up to investigate and quantify the effects of these messages. What we found startled us.

Contrary to policymakers’ expectations (and ours), we found that displaying fatality messages increases the number of crashes…(More)”.

Data saves lives: reshaping health and social care with data


UK Government Policy Paper: “…Up-to-date information about our health and care is critical to ensuring we can:

  • plan and commission services that provide what each local area needs and support effective integrated care systems
  • develop new diagnostics, treatments and insights from analysing information so the public have the best possible care and can improve their overall wellbeing
  • stop asking the public to repeat their information unnecessarily by having it available at the right time
  • assess the safety and quality of care to keep the public safe, both for their individual care and to improve guidance and regulations
  • better manage public health issues such as COVID-19, health and care disparities, and sexual health
  • help the public make informed decisions about their care, including choosing clinicians, such as through patient-reported outcome measures (PROMs) that assess the quality of care delivered from a patient’s perspective

When it comes to handling personal data, the NHS has become one of the most trusted organisations in the UK by using strict legal, privacy and security controls. Partly as a consequence of this track record, the National Data Guardian’s recent report Putting Good Into Practice found that participants were supportive of health and social care data being used for public benefit. This reflects previous polls, which show most respondents would trust the NHS with data about them (57% in July 2020 and 59% in February 2020).

During the pandemic, we made further strides in harnessing the power of data…

However, we cannot take the trust of the public for granted. In the summer of 2021, we made a mistake and did not do enough to explain the improvements needed to the way we collect general practice data. The reasons for these changes are to improve data quality, and improve the understanding of the health and care system so it can plan better and provide more targeted services. We also need to do this in a more cost-effective way as the current system using ad hoc collection processes is more expensive and inefficient, and has been criticised by the National Audit Office and the House of Commons Public Accounts Committee.

Not only did we insufficiently explain, we also did not listen and engage well enough. This led to confusion and anxiety, and created a perception that we were willing to press ahead regardless. This had the unfortunate consequence of leading to an increase in the rate of individuals opting out of sharing their data. Of course, individual members of the public have the right to opt out and always will. But the more people who opt out, the greater the risk that the quality of the data is compromised….

In this data strategy, which differs from the draft we published last year, we are putting public trust and confidence front and centre of the safe use and access to health and social care data. The data we talk about is not an abstract thing: there is an individual, a person, a name behind each piece of data. That demands the highest level of confidence. It is their data that we hold in trust and, in return, promise to use safely to provide high-quality care, help improve our NHS and adult social care, develop new treatments, and, as a result, save lives…(More)”

Imagining Governance for Emerging Technologies


Essay by Debra J.H. Mathews, Rachel Fabi and Anaeze C. Offodile: “…How should such technologies be regulated and governed? It is increasingly clear that past governance structures and strategies are not up to the task. What these technologies require is a new governance approach that accounts for their interdisciplinary impacts and potential for both good and ill at both the individual and societal levels.

To help lay the groundwork for a novel governance framework that will enable policymakers to better understand these technologies’ cross-sectoral footprint and anticipate and address the social, legal, ethical, and governance issues they raise, our team worked under the auspices of the National Academy of Medicine’s Committee on Emerging Science, Technology, and Innovation in health and medicine (CESTI) to develop an analytical approach to technology impacts and governance. The approach is grounded in detailed case studies—including the vignettes about Robyn and Liam—which have informed the development of a set of guiding principles (see sidebar).

Based on careful analysis of past governance, these case studies also contain a plausible vision of what might happen in the future. They illuminate ethical issues and help reveal governance tools and choices that could be crucial to delivering social benefits and reducing or avoiding harms. We believe that the approach taken by the committee will be widely applicable to considering the governance of emerging health technologies. Our methodology and process, as we describe here, may also be useful to a range of stakeholders involved in governance issues like these…(More)”.

A Future Built on Data: Data Strategies, Competitive Advantage and Trust


Paper by Susan Ariel Aaronson: “In the twenty-first century, data became the subject of national strategy. This paper examines these visions and strategies to better understand what policy makers hope to achieve. Data is different from other inputs: it is plentiful, easy to use and can be utilized and shared by many different people without being used up. Moreover, data can be simultaneously a commercial asset and a public good. Various types of data can be analyzed to create new products and services or to mitigate complex “wicked” problems that transcend generations and nations (a public good function). However, an economy built on data analysis also brings problems — firms and governments can manipulate or misuse personal data, and in so doing undermine human autonomy and human rights. Given the complicated nature of data and its various types (for example, personal, proprietary, public, and so on), a growing number of governments have decided to outline how they see data’s role in the economy and polity. While it is too early to evaluate the effectiveness of these strategies, policy makers increasingly recognize that if they want to build their country’s future on data, they must also focus on trust….(More)”.

Against Progress: Intellectual Property and Fundamental Values in the Internet Age


Book by Jessica Silbey: “When first written into the Constitution, intellectual property aimed to facilitate “progress of science and the useful arts” by granting rights to authors and inventors. Today, when rapid technological evolution accompanies growing wealth inequality and political and social divisiveness, the constitutional goal of “progress” may pertain to more basic, human values, redirecting IP’s emphasis to the commonweal instead of private interests. Against Progress considers contemporary debates about intellectual property law as concerning the relationship between the constitutional mandate of progress and fundamental values, such as equality, privacy, and distributive justice, that are increasingly challenged in today’s internet age. Following a legal analysis of various intellectual property court cases, Jessica Silbey examines the experiences of everyday creators and innovators navigating ownership, sharing, and sustainability within the internet eco-system and current IP laws. Crucially, the book encourages refiguring the substance of “progress” and the function of intellectual property in terms that demonstrate the urgency of art and science to social justice today…(More)”.

Dynamic World


About: “The real world is as dynamic as the people and natural processes that shape it. Dynamic World is a near realtime 10m resolution global land use land cover dataset, produced using deep learning, freely available and openly licensed. It is the result of a partnership between Google and the World Resources Institute, to produce a dynamic dataset of the physical material on the surface of the Earth. Dynamic World is intended to be used as a data product for users to add custom rules with which to assign final class values, producing derivative land cover maps.

Key innovations of Dynamic World

  1. Near realtime data. Over 5,000 Dynamic World images are produced every day, whereas traditional approaches to building land cover data can take months or years to produce. As a result of leveraging a novel deep learning approach, based on Sentinel-2 Top of Atmosphere imagery, Dynamic World offers global land cover updating every 2-5 days depending on location.
  2. Per-pixel probabilities across 9 land cover classes. A major benefit of an AI-powered approach is the model looks at an incoming Sentinel-2 satellite image and, for every pixel in the image, estimates the degree of tree cover, how built up a particular area is, or snow coverage if there’s been a recent snowstorm, for example.
  3. Ten meter resolution. As a result of the European Commission’s Copernicus Programme making European Space Agency Sentinel data freely and openly available, products like Dynamic World are able to offer 10m resolution land cover data. This is important because quantifying data in higher resolution produces more accurate results for what’s really on the surface of the Earth…(More)”.
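
To make the product description concrete, here is a minimal sketch of querying Dynamic World through the Earth Engine Python API. It assumes an authenticated Earth Engine account; the GOOGLE/DYNAMICWORLD/V1 collection ID and the 'label' and 'built' band names reflect the public catalog entry, while the area of interest, date range, and probability threshold below are purely illustrative.

```python
# Minimal sketch: load Dynamic World and build a simple derivative land cover map.
# Assumes an authenticated Earth Engine account (run `earthengine authenticate` first).
import ee

ee.Initialize()

# Illustrative area of interest and date range (not taken from the article).
aoi = ee.Geometry.Rectangle([-97.80, 30.20, -97.70, 30.30])
dw = (ee.ImageCollection("GOOGLE/DYNAMICWORLD/V1")
        .filterBounds(aoi)
        .filterDate("2022-01-01", "2022-03-01"))

# Each scene carries per-pixel probabilities for nine classes plus a 'label'
# band with the most likely class; the per-pixel mode over the period gives a
# simple composite land cover map.
composite = dw.select("label").mode().clip(aoi)

# Example of the "custom rules" use case described above: flag pixels whose
# mean 'built' probability exceeds an arbitrary threshold.
built_mask = dw.select("built").mean().gt(0.5).clip(aoi)

print(composite.getInfo()["bands"])  # inspect band metadata for the composite
```

Derivative layers like `composite` or `built_mask` can then be exported or combined with other datasets, which is the "custom rules" workflow the product description points to.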

Global Struggle Over AI Surveillance


Report by the National Endowment for Democracy: “From cameras that identify the faces of passersby to algorithms that keep tabs on public sentiment online, artificial intelligence (AI)-powered tools are opening new frontiers in state surveillance around the world. Law enforcement, national security, criminal justice, and border management organizations in every region are relying on these technologies—which use statistical pattern recognition, machine learning, and big data analytics—to monitor citizens.

What are the governance implications of these enhanced surveillance capabilities?

This report explores the challenge of safeguarding democratic principles and processes as AI technologies enable governments to collect, process, and integrate unprecedented quantities of data about the online and offline activities of individual citizens. Three complementary essays examine the spread of AI surveillance systems, their impact, and the transnational struggle to erect guardrails that uphold democratic values.

In the lead essay, Steven Feldstein, a senior fellow at the Carnegie Endowment for International Peace, assesses the global spread of AI surveillance tools and ongoing efforts at the local, national, and multilateral levels to set rules for their design, deployment, and use. The essay gives particular attention to the dynamics in young or fragile democracies and hybrid regimes, where checks on surveillance powers may be weakened but civil society still has space to investigate and challenge surveillance deployments.

Two case studies provide more granular depictions of how civil society can influence this norm-shaping process: In the first, Eduardo Ferreyra of Argentina’s Asociación por los Derechos Civiles discusses strategies for overcoming common obstacles to research and debate on surveillance systems. In the second, Danilo Krivokapic of Serbia’s SHARE Foundation describes how his organization drew national and global attention to the deployment of Huawei smart cameras in Belgrade…(More)”.

Americans’ Views of Government: Decades of Distrust, Enduring Support for Its Role


Pew Research: “Americans remain deeply distrustful of and dissatisfied with their government. Just 20% say they trust the government in Washington to do the right thing just about always or most of the time – a sentiment that has changed very little since former President George W. Bush’s second term in office.

Chart shows low public trust in federal government has persisted for nearly two decades

The public’s criticisms of the federal government are many and varied. Some are familiar: Just 6% say the phrase “careful with taxpayer money” describes the federal government extremely or very well; another 21% say this describes the government somewhat well. A comparably small share (only 8%) describes the government as being responsive to the needs of ordinary Americans.

The federal government gets mixed ratings for its handling of specific issues. Evaluations are highly positive in some respects, including for responding to natural disasters (70% say the government does a good job of this) and keeping the country safe from terrorism (68%). However, only about a quarter of Americans say the government has done a good job managing the immigration system and helping people get out of poverty (24% each). And the share giving the government a positive rating for strengthening the economy has declined 17 percentage points since 2020, from 54% to 37%.

Yet Americans’ unhappiness with government has long coexisted with their continued support for government having a substantial role in many realms. And when asked how much the federal government does to address the concerns of various groups in the United States, there is a widespread belief that it does too little on issues affecting many of the groups asked about, including middle-income people (69%), those with lower incomes (66%) and retired people (65%)…(More)”.

Aligning Artificial Intelligence with Humans through Public Policy


Paper by John Nay and James Daily: “Given that Artificial Intelligence (AI) increasingly permeates our lives, it is critical that we systematically align AI objectives with the goals and values of humans. The human-AI alignment problem stems from the impracticality of explicitly specifying the rewards that AI models should receive for all the actions they could take in all relevant states of the world. One possible solution, then, is to leverage the capabilities of AI models to learn those rewards implicitly from a rich source of data describing human values in a wide range of contexts. The democratic policy-making process produces just such data by developing specific rules, flexible standards, interpretable guidelines, and generalizable precedents that synthesize citizens’ preferences over potential actions taken in many states of the world. Therefore, computationally encoding public policies to make them legible to AI systems should be an important part of a socio-technical approach to the broader human-AI alignment puzzle. Legal scholars are exploring AI, but most research has focused on how AI systems fit within existing law, rather than how AI may understand the law. This Essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks. As a demonstration of the ability of AI to comprehend policy, we provide a case study of an AI system that predicts the relevance of proposed legislation to a given publicly traded company and its likely effect on that company. We believe this represents the “comprehension” phase of AI and policy, but leveraging policy as a key source of human values to align AI requires “understanding” policy. We outline what we believe will be required to move toward that, and two example research projects in that direction. Solving the alignment problem is crucial to ensuring that AI is beneficial both individually (to the person or group deploying the AI) and socially. As AI systems are given increasing responsibility in high-stakes contexts, integrating democratically-determined policy into those systems could align their behavior with human goals in a way that is responsive to a constantly evolving society…(More)”.
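
The paper’s model itself is not reproduced here, but the flavor of the “comprehension” task it describes can be illustrated with a deliberately simple sketch: scoring the lexical relevance of a bill to a company’s business description with TF-IDF cosine similarity. The bill summary, company profile, and threshold below are hypothetical, and this is not the authors’ method, only a miniature stand-in for the prediction task.

```python
# Illustrative sketch only: a naive relevance score between proposed
# legislation and a company description. Not the paper's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

bill_text = (  # hypothetical bill summary
    "A bill to require disclosure of greenhouse gas emissions by vehicle "
    "manufacturers and to set fleet-wide fuel efficiency standards."
)
company_profile = (  # hypothetical 10-K style business description
    "The company designs, manufactures, and sells electric vehicles and "
    "stationary energy storage systems."
)

vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform([bill_text, company_profile])
relevance = cosine_similarity(vectors[0], vectors[1])[0, 0]

print(f"Relevance score: {relevance:.2f}")

# A downstream pipeline might flag bills above a tuned cutoff for deeper
# analysis of their likely effect on the company.
THRESHOLD = 0.10  # arbitrary illustrative cutoff
print("Flag for review" if relevance > THRESHOLD else "Likely irrelevant")
```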

How harmful is social media?


Gideon Lewis-Kraus in The New Yorker: “In April, the social psychologist Jonathan Haidt published an essay in The Atlantic in which he sought to explain, as the piece’s title had it, “Why the Past 10 Years of American Life Have Been Uniquely Stupid.” Anyone familiar with Haidt’s work in the past half decade could have anticipated his answer: social media. Although Haidt concedes that political polarization and factional enmity long predate the rise of the platforms, and that there are plenty of other factors involved, he believes that the tools of virality—Facebook’s Like and Share buttons, Twitter’s Retweet function—have algorithmically and irrevocably corroded public life. He has determined that a great historical discontinuity can be dated with some precision to the period between 2010 and 2014, when these features became widely available on phones….

After Haidt’s piece was published, the Google Doc—“Social Media and Political Dysfunction: A Collaborative Review”—was made available to the public. Comments piled up, and a new section was added, at the end, to include a miscellany of Twitter threads and Substack essays that appeared in response to Haidt’s interpretation of the evidence. Some colleagues and kibbitzers agreed with Haidt. But others, though they might have shared his basic intuition that something in our experience of social media was amiss, drew upon the same data set to reach less definitive conclusions, or even mildly contradictory ones. Even after the initial flurry of responses to Haidt’s article disappeared into social-media memory, the document, insofar as it captured the state of the social-media debate, remained a lively artifact.

Near the end of the collaborative project’s introduction, the authors warn, “We caution readers not to simply add up the number of studies on each side and declare one side the winner.” The document runs to more than a hundred and fifty pages, and for each question there are affirmative and dissenting studies, as well as some that indicate mixed results. According to one paper, “Political expressions on social media and the online forum were found to (a) reinforce the expressers’ partisan thought process and (b) harden their pre-existing political preferences,” but, according to another, which used data collected during the 2016 election, “Over the course of the campaign, we found media use and attitudes remained relatively stable. Our results also showed that Facebook news use was related to modest over-time spiral of depolarization. Furthermore, we found that people who use Facebook for news were more likely to view both pro- and counter-attitudinal news in each wave. Our results indicated that counter-attitudinal exposure increased over time, which resulted in depolarization.” If results like these seem incompatible, a perplexed reader is given recourse to a study that says, “Our findings indicate that political polarization on social media cannot be conceptualized as a unified phenomenon, as there are significant cross-platform differences.”…(More)”.