Best Practices to Cover Ad Information Used for Research, Public Health, Law Enforcement & Other Uses


Press Release: “The Network Advertising Initiative (NAI) released privacy Best Practices for its members to follow if they use data collected for Tailored Advertising or Ad Delivery and Reporting for non-marketing purposes, such as sharing with research institutions, public health agencies, or law enforcement entities.

“Ad tech companies have data that can be a powerful resource for the public good if they follow this set of best practices for consumer privacy,” said Leigh Freund, NAI President and CEO. “During the COVID-19 pandemic, we’ve seen the opportunity for substantial public health benefits from sharing aggregate and de-identified location data.”

The NAI Code of Conduct – the industry’s premier self-regulatory framework for privacy, transparency, and consumer choice – covers data collected and used for Tailored Advertising or Ad Delivery and Reporting. The NAI Code has long addressed certain non-marketing uses of data collected for Tailored Advertising and Ad Delivery and Reporting by prohibiting any eligibility uses of such data, including uses for credit, insurance, healthcare, and employment decisions.

The NAI has always firmly believed that data collected for advertising purposes should not have a negative effect on consumers in their daily lives. However, over the past year, novel data uses have been introduced, especially during the recent health crisis. In the case of opted-in data such as Precise Location Information, a company may determine a user would benefit from more detailed disclosure in a just-in-time notice about non-marketing uses of the data being collected….(More)”.

Disinformation Tracker


Press Release: “Today, Global Partners Digital (GPD), ARTICLE 19, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), PROTEGE QV and the Centre for Human Rights of the University of Pretoria jointly launched an interactive map to track and analyse disinformation laws, policies and patterns of enforcement across Sub-Saharan Africa.

The map offers a bird’s-eye view of trends in state responses to disinformation across the region, as well as in-depth analysis of the state of play in individual countries, using a bespoke framework to assess whether laws, policies and other state responses are human rights-respecting. 

Developed against a backdrop of rapidly accelerating state action on COVID-19 related disinformation, the map is an open, iterative product. At the time of launch, it covers 31 countries (see below for the full list), with an aim to expand this in the coming months. All data, analysis and insight on the map has been generated by groups and actors based in Africa….(More)”.

Terms of Disservice: How Silicon Valley is Destructive by Design


Book by Dipayan Ghosh on “Designing a new digital social contract for our technological future…High technology presents a paradox. In just a few decades, it has transformed the world, making almost limitless quantities of information instantly available to billions of people and reshaping businesses, institutions, and even entire economies. But it also has come to rule our lives, addicting many of us to the march of megapixels across electronic screens both large and small.

Despite its undeniable value, technology is exacerbating deep social and political divisions in many societies. Elections influenced by fake news and unscrupulous hidden actors, the cyber-hacking of trusted national institutions, the vacuuming of private information by Silicon Valley behemoths, ongoing threats to vital infrastructure from terrorist groups and even foreign governments—all these concerns are now part of the daily news cycle and are certain to become increasingly serious into the future.

In this new world of endless technology, how can individuals, institutions, and governments harness its positive contributions while protecting each of us, no matter who or where we are?

In this book, a former Facebook public policy adviser who went on to assist President Obama in the White House offers practical ideas for using technology to create an open and accessible world that protects all consumers and civilians. As a computer scientist turned policymaker, Dipayan Ghosh answers the biggest questions about technology facing the world today. Providing clear and understandable explanations for complex issues, Terms of Disservice will guide industry leaders, policymakers, and the general public as we think about how we ensure that the Internet works for everyone, not just Silicon Valley….(More)”.

The technology of witnessing brutality


Axios: “The ways Americans capture and share records of racist violence and police misconduct keep changing, but the pain of the underlying injustices they chronicle remains a stubborn constant.

Driving the news: After George Floyd’s death at the hands of Minneapolis police sparked wide protests, Minnesota Gov. Tim Walz said, “Thank God a young person had a camera to video it.”

Why it matters: From news photography to TV broadcasts to camcorders to smartphones, improvements in the technology of witness over the past century mean we’re more instantly and viscerally aware of each new injustice.

  • But unless our growing power to collect and distribute evidence of injustice can drive actual social change, the awareness these technologies provide just ends up fueling frustration and despair.

For decades, still news photography was the primary channel through which the public became aware of incidents of racial injustice.

  • The horrific 1930 photo of the lynching of J. Thomas Shipp and Abraham S. Smith, two black men in Marion, Indiana, brought the incident to national attention and inspired the song “Strange Fruit.” But the killers were never brought to justice.
  • Photos of the mutilated body of Emmett Till catalyzed a nationwide reaction to his 1955 lynching in Mississippi.

In the 1960s, television news footage brought scenes of police turning dogs and water cannons on peaceful civil rights protesters in Birmingham and Selma, Alabama, into viewers’ living rooms.

  • The TV coverage was moving in both senses of the word.

In 1991, a camcorder tape shot by a Los Angeles plumber named George Holliday captured images of cops brutally beating Rodney King.

  • In the pre-internet era, it was only after the King tape was broadcast on TV that Americans could see it for themselves.

Over the past decade, smartphones have enabled witnesses and protesters to capture and distribute photos and videos of injustice quickly — sometimes, as it’s happening.

  • This power helped catalyze the Black Lives Matter movement beginning in 2013 and has played a growing role in broader public awareness of police brutality.

Between the lines: For a brief moment mid-decade, some hoped that the combination of a public well-supplied with video recording devices and requirements that police wear bodycams would introduce a new level of accountability to law enforcement.

The bottom line: Smartphones and social media deliver direct accounts of grief- and rage-inducing stories…(More)”.

Considering the Source: Varieties of COVID-19 Information


Congressional Research Service: “In common parlance, the terms propaganda, misinformation, and disinformation are often used interchangeably, often with connotations of deliberate untruths of nefarious origin. In a national security context, however, these terms refer to categories of information that are created and disseminated with different intent and serve different strategic purposes. This primer examines these categories to create a framework for understanding the national security implications of information related to the Coronavirus Disease 2019 (COVID-19) pandemic….(More)”.

Misinformation During a Pandemic


Paper by Leonardo Bursztyn et al: “We study the effects of news coverage of the novel coronavirus by the two most widely viewed cable news shows in the United States – Hannity and Tucker Carlson Tonight, both on Fox News – on viewers’ behavior and downstream health outcomes. Carlson warned viewers about the threat posed by the coronavirus from early February, while Hannity originally dismissed the risks associated with the virus before gradually adjusting his position starting late February. We first validate these differences in content with independent coding of show transcripts. In line with the differences in content, we present novel survey evidence that Hannity’s viewers changed behavior in response to the virus later than other Fox News viewers, while Carlson’s viewers changed behavior earlier. We then turn to the effects on the pandemic itself, examining health outcomes across counties.

First, we document that greater viewership of Hannity relative to Tucker Carlson Tonight is strongly associated with a greater number of COVID-19 cases and deaths in the early stages of the pandemic. The relationship is stable across an expansive set of robustness tests. To better identify the effect of differential viewership of the two shows, we employ a novel instrumental variable strategy exploiting variation in when shows are broadcast in relation to local sunset times. These estimates also show that greater exposure to Hannity relative to Tucker Carlson Tonight is associated with a greater number of county-level cases and deaths. Furthermore, the results suggest that in mid-March, after Hannity’s shift in tone, the diverging trajectories on COVID-19 cases begin to revert. We provide additional evidence consistent with misinformation being an important mechanism driving the effects in the data. While our findings cannot yet speak to long-term effects, they indicate that provision of misinformation in the early stages of a pandemic can have important consequences for how a disease ultimately affects the population….(More)”.
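For readers unfamiliar with the method, the instrumental-variable logic the authors describe can be illustrated in miniature. The sketch below is not the paper’s code or data; it uses a purely synthetic dataset and the simplest one-instrument (Wald) estimator, which recovers a causal effect even when an unobserved confounder biases a naive comparison — the same idea, in spirit, as using local sunset times as an instrument for show viewership.

```python
def wald_iv(z, x, y):
    """One-instrument (Wald) IV estimator of the effect of x on y,
    using a binary instrument z: difference in mean outcomes divided
    by difference in mean exposures across instrument groups."""
    def mean(vals):
        return sum(vals) / len(vals)
    x1 = mean([xi for zi, xi in zip(z, x) if zi == 1])
    x0 = mean([xi for zi, xi in zip(z, x) if zi == 0])
    y1 = mean([yi for zi, yi in zip(z, y) if zi == 1])
    y0 = mean([yi for zi, yi in zip(z, y) if zi == 0])
    return (y1 - y0) / (x1 - x0)

# Synthetic illustration: u is an unobserved confounder, z the instrument.
u = [0, 1, 2, 3, 0, 1, 2, 3]
z = [0, 0, 0, 0, 1, 1, 1, 1]
x = [2 * zi + ui for zi, ui in zip(z, u)]      # exposure driven by instrument + confounder
y = [3 * xi + 5 * ui for xi, ui in zip(x, u)]  # true causal effect of x on y is 3

print(wald_iv(z, x, y))  # → 3.0
```

Because u is balanced across instrument groups, the instrument isolates the causal effect (3.0) that a naive regression of y on x would overstate.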

Researchers Develop Faster Way to Replace Bad Data With Accurate Information


NCSU Press Release: “Researchers from North Carolina State University and the Army Research Office have demonstrated a new model of how competing pieces of information spread in online social networks and the Internet of Things (IoT). The findings could be used to disseminate accurate information more quickly, displacing false information about anything from computer security to public health….

In their paper, the researchers show that a network’s size plays a significant role in how quickly “good” information can displace “bad” information. However, a large network is not necessarily better or worse than a small one. Instead, the speed at which good data travels is primarily affected by the network’s structure.

A highly interconnected network can disseminate new data very quickly. And the larger the network, the faster the new data will travel.

However, in networks that are connected primarily by a limited number of key nodes, those nodes serve as bottlenecks. As a result, the larger this type of network is, the slower the new data will travel.

The researchers also identified an algorithm that can be used to assess which point in a network would allow you to spread new data throughout the network most quickly.

“Practically speaking, this could be used to ensure that an IoT network purges old data as quickly as possible and is operating with new, accurate data,” Wenye Wang says.

“But these findings are also applicable to online social networks, and could be used to facilitate the spread of accurate information regarding subjects that affect the public,” says Jie Wang. “For example, we think it could be used to combat misinformation online.”…(More)”

Full paper: “Modeling and Analysis of Conflicting Information Propagation in a Finite Time Horizon”
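The release does not specify the researchers’ algorithm, but the idea of finding “which point in a network would allow you to spread new data most quickly” can be sketched with a simple stand-in heuristic: pick the node with the smallest total hop distance to all other nodes (closeness). The names and network below are illustrative only.

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop distances from source to every reachable node (breadth-first search)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def best_seed(adj):
    """Node whose total hop distance to all others is smallest — the point
    from which new data would reach the whole network in the fewest hops."""
    return min(adj, key=lambda n: sum(bfs_distances(adj, n).values()))

# A hub-and-spoke network: the hub is both the bottleneck the release
# describes and the best place to inject new, accurate data.
star = {"hub": ["a", "b", "c", "d"],
        "a": ["hub"], "b": ["hub"], "c": ["hub"], "d": ["hub"]}
print(best_seed(star))  # → hub
```

In the star network every spoke is two hops from every other spoke, so seeding the hub reaches all nodes in one hop — consistent with the release’s point that a few key nodes dominate how fast information displaces misinformation.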

The Law and Economics of Online Republication


Paper by Ronen Perry: “Jerry publishes unlawful content about Newman on Facebook, Elaine shares Jerry’s post, the share automatically turns into a tweet because her Facebook and Twitter accounts are linked, and George immediately retweets it. Should Elaine and George be liable for these republications? The question is neither theoretical nor idiosyncratic. On occasion, it reaches the headlines, as when Jennifer Lawrence’s representatives announced she would sue every person involved in the dissemination, through various online platforms, of her illegally obtained nude pictures. Yet this is only the tip of the iceberg. Numerous potentially offensive items are reposted daily, their exposure expands in widening circles, and they sometimes “go viral.”

This Article is the first to provide a law and economics analysis of the question of liability for online republication. Its main thesis is that liability for republication generates a specter of multiple defendants which might dilute the originator’s liability and undermine its deterrent effect. The Article concludes that, subject to several exceptions and methodological caveats, only the originator should be liable. This seems to be the American rule, as enunciated in Batzel v. Smith and Barrett v. Rosenthal. It stands in stark contrast to the prevalent rules in other Western jurisdictions and has been challenged by scholars on various grounds since its very inception.

The Article unfolds in three Parts. Part I presents the legal framework. It first discusses the rules applicable to republication of self-created content, focusing on the emergence of the single publication rule and its natural extension to online republication. It then turns to republication of third-party content. American law makes a clear-cut distinction between offline republication which gives rise to a new cause of action against the republisher (subject to a few limited exceptions), and online republication which enjoys an almost absolute immunity under § 230 of the Communications Decency Act. Other Western jurisdictions employ more generous republisher liability regimes, which usually require endorsement, a knowing expansion of exposure or repetition.

Part II offers an economic justification for the American model. Law and economics literature has shown that attributing liability for constant indivisible harm to multiple injurers, where each could have single-handedly prevented that harm (“alternative care” settings), leads to dilution of liability. Online republication scenarios often involve multiple tortfeasors. However, they differ from previously analyzed phenomena because they are not alternative care situations, and because the harm—increased by the conduct of each tortfeasor—is not constant and indivisible. Part II argues that neither feature precludes the dilution argument. It explains that the impact of the multiplicity of injurers in the online republication context on liability and deterrence provides a general justification for the American rule. This rule’s relatively low administrative costs afford additional support.

Part III considers the possible limits of the theoretical argument. It maintains that exceptions to the exclusive originator liability rule should be recognized when the originator is unidentifiable or judgment-proof, and when either the republisher’s identity or the republication’s audience was unforeseeable. It also explains that the rule does not preclude liability for positive endorsement with a substantial addition, which constitutes a new original publication, or for the dissemination of illegally obtained content, which is an independent wrong. Lastly, Part III addresses possible challenges to the main argument’s underlying assumptions, namely that liability dilution is a real risk and that it is undesirable….(More)”.

Automation in Moderation


Article by Hannah Bloch-Wehba: “This Article assesses recent efforts to compel or encourage online platforms to use automated means to prevent the dissemination of unlawful online content before it is ever seen or distributed. As lawmakers in Europe and around the world closely scrutinize platforms’ “content moderation” practices, automation and artificial intelligence appear increasingly attractive options for ridding the Internet of many kinds of harmful online content, including defamation, copyright infringement, and terrorist speech. Proponents of these initiatives suggest that requiring platforms to screen user content using automation will promote healthier online discourse and will aid efforts to limit Big Tech’s power.

In fact, however, the regulations that incentivize platforms to use automation in content moderation come with unappreciated costs for civil liberties and unexpected benefits for platforms. The new automation techniques exacerbate existing risks to free speech and user privacy and create ripe new sources of information for surveillance, aggravating threats to free expression, associational rights, religious freedoms, and equality. Automation also worsens transparency and accountability deficits. Far from curtailing private power, the new regulations endorse and expand platform authority to police online speech, with little in the way of oversight and few countervailing checks. New regulations of online intermediaries should therefore incorporate checks on the use of automation to avoid exacerbating these dynamics. Carefully drawn transparency obligations, algorithmic accountability mechanisms, and procedural safeguards can help to ameliorate the effects of these regulations on users and competition…(More)”.

Beyond Takedown: Expanding the Toolkit for Responding to Online Hate


Paper by Molly K. Land and Rebecca J. Hamilton: “The current preoccupation with ‘fake news’ has spurred a renewed emphasis in popular discourse on the potential harms of speech. In the world of international law, however, ‘fake news’ is far from new. Propaganda of various sorts is a well-worn tactic of governments, and in its most insidious form, it has played an instrumental role in inciting and enabling some of the worst atrocities of our time. Yet as familiar as propaganda might be in theory, it is raising new issues as it has migrated to the digital realm. Technological developments have largely outpaced existing legal and political tools for responding to the use of mass communications devices to instigate or perpetrate human rights violations.

This chapter evaluates the current practices of social media companies for responding to online hate, arguing that they are inevitably both overbroad and under-inclusive. Using the example of the role played by Facebook in the recent genocide against the minority Muslim Rohingya population in Myanmar, the chapter illustrates the failure of platform hate speech policies to address pervasive and coordinated online speech, often state-sponsored or state-aligned, denigrating a particular group that is used to justify or foster impunity for violence against that group. Addressing this “conditioning speech” requires a more tailored response that includes remedies other than content removal and account suspensions. The chapter concludes by surveying a range of innovative responses to harmful online content that would give social media platforms the flexibility to intervene earlier, but with a much lighter touch….(More)”.