The Techlash and Tech Crisis Communication


Book by Nirit Weiss-Blatt: “This book provides an in-depth analysis of the evolution of tech journalism. The emerging tech-backlash is a story of pendulum swings: We are currently in tech-dystopianism after a long period spent in tech-utopianism. Tech companies were used to ‘cheerleading’ coverage of product launches. This long tech-press honeymoon ended, and was replaced by a new era of mounting criticism focused on tech’s negative impact on society. When and why did tech coverage shift? How did tech companies respond to the rise of tech criticism?

The book depicts three main eras: Pre-Techlash, Techlash, and Post-Techlash. The reader is taken on a journey from computer magazines, through tech blogs, to the upsurge of tech investigative reporting. It illuminates the profound changes in the power dynamics between the media and the tech giants they cover.

The interplay between tech journalism and tech PR has been underexplored. Through analyses of both tech media and the corporates’ crisis responses, this book examines the roots and characteristics of the Techlash, and answers the question ‘How did we get here?’. Insightful observations by tech journalists and tech public relations professionals are added to the research data, and together they tell the story of the TECHLASH. It includes theoretical and practical implications for both tech enthusiasts and critics….(More)”.

Far-right news sources on Facebook more engaging


Study by Laura Edelson, Minh-Kha Nguyen, Ian Goldstein, Oana Goga, Tobias Lauinger, and Damon McCoy: Facebook has become a major way people find news and information in an increasingly politically polarized nation. We analyzed how users interacted with different types of posts promoted as news in the lead-up to and aftermath of the U.S. 2020 elections. We found that politically extreme sources tend to generate more interactions from users. In particular, content from sources rated as far-right by independent news rating services consistently received the highest engagement per follower of any partisan group. Additionally, frequent purveyors of far-right misinformation had on average 65% more engagement per follower than other far-right pages. We found:

  • Sources of news and information rated as far-right generate the highest average number of interactions per follower with their posts, followed by sources from the far-left, and then news sources closer to the center of the political spectrum.
  • Looking at the far-right, misinformation sources far outperform non-misinformation sources. Far-right sources designated as spreaders of misinformation had an average of 426 interactions per thousand followers per week, while non-misinformation sources had an average of 259 weekly interactions per thousand followers.
  • Engagement with posts from far-right and far-left news sources peaked around Election Day and again on January 6, the day of the certification of the electoral count and the U.S. Capitol riot. For posts from all other political leanings of news sources, the increase in engagement was much less intense.
  • Center and left partisan categories incur a misinformation penalty, while right-leaning sources do not. Center sources of misinformation, for example, performed about 70% worse than their non-misinformation counterparts. (Note: center sources of misinformation tend to be sites presenting as health news that have no obvious ideological orientation.)…(More)”.
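The study’s headline comparisons rest on a simple normalization: interactions divided by follower count, reported per thousand followers per week. A minimal sketch of that calculation follows; it is our own illustration rather than the authors’ code, and the field names and example figures are assumptions drawn from the summary above.

```python
# Minimal sketch (not the study's code) of the engagement metric used above:
# interactions per 1,000 followers per week, averaged over a group of pages.
from dataclasses import dataclass

@dataclass
class PageWeek:
    page: str             # Facebook page name (illustrative)
    partisanship: str     # e.g. "far_right", "center", "far_left"
    misinformation: bool  # flagged by independent news-rating services
    interactions: int     # reactions, comments, and shares that week
    followers: int        # follower count

def per_thousand_followers(week: PageWeek) -> float:
    """Weekly interactions per 1,000 followers for one page."""
    return 1000 * week.interactions / week.followers

def group_average(weeks: list[PageWeek]) -> float:
    """Average weekly engagement rate across a group of page-weeks."""
    return sum(per_thousand_followers(w) for w in weeks) / len(weeks)

# The reported far-right averages (426 vs. 259 per 1,000 followers) imply that
# misinformation sources out-engage other far-right sources by roughly 65%:
print(round((426 / 259 - 1) * 100))  # -> 64, i.e. ~65% more engagement
```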

Liability of online platforms


European Parliament Think Tank: “Given the central role that online platforms (OPs) play in the digital economy, questions arise about their responsibility in relation to illegal/harmful content or products hosted in the frame of their operation. Against this background, this study reviews the main legal/regulatory challenges associated with OP operations and analyses the incentives for OPs, their users and third parties to detect and remove illegal/harmful and dangerous material, content and/or products. To create a functional classification which can be used for regulatory purposes, it discusses the notion of OPs and attempts to categorise them under multiple criteria. The study then maps and critically assesses the whole range of OP liabilities, taking hard and soft law, self-regulation and national legislation into consideration, whenever relevant. Finally, the study puts forward policy options for an efficient EU liability regime: (i) maintaining the status quo; (ii) awareness-raising and media literacy; (iii) promoting self-regulation; (iv) establishing co-regulation mechanisms and tools; (v) adopting statutory legislation; (vi) modifying OPs’ secondary liability by employing two different models – (a) by clarifying the conditions for liability exemptions provided by the e-Commerce Directive or (b) by establishing a harmonised regime of liability….(More)”.

A New Way to Inoculate People Against Misinformation


Article by Jon Roozenbeek, Melisa Basol, and Sander van der Linden: “From setting mobile phone towers on fire to refusing critical vaccinations, we know the proliferation of misinformation online can have massive, real-world consequences.

For those who want to avert those consequences, it makes sense to try and correct misinformation. But as we now know, misinformation—both intentional and unintentional—is difficult to fight once it’s out in the digital wild. The pace at which unverified (and often false) information travels makes any attempt to catch up to, retrieve, and correct it an ambitious endeavour. We also know that viral information tends to stick, that repeated misinformation is more likely to be judged as true, and that people often continue to believe falsehoods even after they have been debunked.

Instead of fighting misinformation after it’s already spread, some researchers have shifted their strategy: they’re trying to prevent it from going viral in the first place, an approach known as “prebunking.” Prebunking attempts to explain how people can resist persuasion by misinformation. Grounded in inoculation theory, the approach uses the analogy of biological immunization. Just as weakened exposure to a pathogen triggers antibody production, inoculation theory posits that pre-emptively exposing people to a weakened persuasive argument builds people’s resistance against future manipulation.

But while inoculation is a promising approach, it has its limitations. Traditional inoculation messages are issue-specific and have often remained confined to the particular context you want to inoculate people against. For example, an inoculation message might forewarn people that false information encouraging them to drink bleach as a cure for the coronavirus is circulating. Although that may help stop bleach drinking, this messaging doesn’t pre-empt misinformation about other fake cures. As a result, prebunking approaches haven’t easily adapted to the changing misinformation landscape, making them difficult to scale.

However, our research suggests that there may be another way to inoculate people that preserves the benefits of prebunking: it may be possible to build resistance against misinformation in general, rather than fighting it one piece at a time….(More)”.

The Janus Face of the Liberal International Information Order: When Global Institutions Are Self-Undermining


Paper by Henry Farrell and Abraham L. Newman: “Scholars and policymakers long believed that norms of global information openness and private-sector governance helped to sustain and promote liberalism. These norms are being increasingly contested within liberal democracies. In this article, we argue that a key source of debate over the Liberal International Information Order (LIIO), a sub-order of the Liberal International Order (LIO), is generated internally by “self-undermining feedback effects,” that is, mechanisms through which institutional arrangements undermine their own political conditions of survival over time. Empirically, we demonstrate how global governance of the Internet, transnational disinformation campaigns, and domestic information governance interact to sow the seeds of this contention. In particular, illiberal states converted norms of openness into a vector of attack, unsettling political bargains in liberal states concerning the LIIO. More generally, we set out a broader research agenda to show how the international relations discipline might better understand institutional change as well as the informational aspects of the current crisis in the LIO….(More)”

Twitter’s misinformation problem is much bigger than Trump. The crowd may help solve it.


Elizabeth Dwoskin at the Washington Post: “A pilot program called Birdwatch lets selected users write corrections and fact checks on potentially misleading tweets…

The presidential election is over, but the fight against misinformation continues.

The latest volley in that effort comes from Twitter, which on Monday announced Birdwatch, a pilot project that uses crowdsourcing techniques to combat falsehoods and misleading statements on its service.

The pilot, which is open to only about 1,000 select users who can apply to be contributors, will allow people to write notes with corrections and accurate information directly into misleading tweets — a method that has the potential to get quality information to people more quickly than traditional fact-checking. Fact checks that are rated by other contributors as high quality may get bumped up or rewarded with greater visibility.

Birdwatch represents Twitter’s most experimental response to one of the biggest lessons that social media companies drew from the historic events of 2020: that their existing efforts to combat misinformation — including labeling, fact-checking and sometimes removing content — were not enough to prevent falsehoods about a stolen election or the coronavirus from reaching and influencing broad swaths of the population. Researchers who studied enforcement actions by social media companies last year found that fact checks and labels are usually implemented too late, after a post or a tweet has gone viral.

The Birdwatch project — which for the duration of the pilot will function as a separate website — is novel in that it attempts to build new mechanisms into Twitter’s product that foreground fact-checking by its community of 187 million daily users worldwide. Rather than having to comb through replies to tweets to sift through what’s true or false — or having Twitter employees append to a tweet a label providing additional context — users will be able to click on a separate notes folder attached to a tweet where they can see the consensus-driven responses from the community. Twitter will have a team reviewing winning responses to prevent manipulation, though a major question is whether any part of the process will be automated and therefore more easily gamed….(More)”
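The consensus step described in the piece, in which notes rated by other contributors as high quality “may get bumped up or rewarded with greater visibility,” can be sketched roughly as follows. This is a simplified illustration under our own assumptions, not Twitter’s actual Birdwatch scoring: the rating threshold, the minimum number of ratings, and the field names are all invented for the example.

```python
# Rough sketch of consensus-driven note ranking (not Birdwatch's real algorithm):
# a note is surfaced only after enough contributors have rated it, and surfaced
# notes are ordered by their share of "helpful" ratings. Thresholds are arbitrary.
from dataclasses import dataclass

@dataclass
class Note:
    tweet_id: str
    text: str
    helpful_ratings: int = 0
    total_ratings: int = 0

    def helpfulness(self) -> float:
        return self.helpful_ratings / self.total_ratings if self.total_ratings else 0.0

MIN_RATINGS = 5        # assumed minimum number of contributor ratings
MIN_HELPFULNESS = 0.8  # assumed share of "helpful" votes for extra visibility

def visible_notes(notes: list[Note]) -> list[Note]:
    """Return notes that cleared the consensus bar, most helpful first."""
    eligible = [n for n in notes
                if n.total_ratings >= MIN_RATINGS
                and n.helpfulness() >= MIN_HELPFULNESS]
    return sorted(eligible, key=lambda n: n.helpfulness(), reverse=True)
```

A purely score-based rule like this is easy for coordinated raters to game, which is presumably why the pilot also keeps a human review team in the loop.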

From Journalistic Ethics To Fact-Checking Practices: Defining The Standards Of Content Governance In The Fight Against Disinformation


Paper by Paolo Cavaliere: “This article claims that the practices undertaken by digital platforms to counter disinformation, under the EU Action Plan against Disinformation and the Code of Practice, mark a shift in the governance of news media content. While professional journalism standards have long been used, both within and outside the industry, to assess the accuracy of news content and adjudicate on media conduct, the platforms are now resorting to different fact-checking routines to moderate and curate their content.
The article will demonstrate how fact-checking organisations have different working methods than news operators and ultimately understand and assess ‘accuracy’ in different ways. As a result, this new and enhanced role for platforms and fact-checkers as curators of content impacts on how content is distributed to the audience and, thus, on media freedom. Depending on how the fact-checking standards and working routines will consolidate in the near future, however, this trend offers an actual opportunity to improve the quality of news and the right to receive information…(More)”.

Rescuing Our Democracy by Rethinking New York Times Co. v. Sullivan


Paper by David Andrew Logan: “New York Times v. Sullivan (1964) is an iconic decision, foundational to modern First Amendment theory, and in a string of follow-on decisions the Court firmly grounded free speech theory and practice in the need to protect democratic discourse. To do this, the Court provided broad and deep protections to the publishers of falsehoods. This article recognizes that New York Times and its progeny made sense in the “public square” of an earlier era, but the justices could never have foreseen the dramatic changes in technology and the media environment in the years since, nor predicted that by making defamation cases virtually impossible to win they were harming, rather than helping, self-government. In part because of New York Times, the First Amendment has been weaponized, frustrating a basic requirement of a healthy democracy: the development of a set of broadly agreed-upon facts. Instead, we are subject to waves of falsehoods that swamp the ability of citizens to effectively self-govern. As a result, and despite its iconic status, New York Times needs to be reexamined and retooled to better serve our democracy….(More)”

When FOIA Goes to Court: 20 Years of Freedom of Information Act Litigation by News Organizations and Reporters


Report by The FOIA Project: “The news media are powerful players in the world of government transparency and public accountability. One important tool for ensuring public accountability is through invoking transparency mandates provided by the Freedom of Information Act (FOIA). In 2020, news organizations and individual reporters filed 122 different FOIA suits[1] to compel disclosure of federal government records—more than in any other year on record, according to federal court data back to 2001 analyzed by the FOIA Project…

In fact, the media alone have filed a total of 386 FOIA cases during the four years of the Trump Administration, from 2017 through 2020. This is greater than the total of 311 FOIA media cases filed during the sixteen years of the Bush and Obama Administrations combined. Moreover, many of these FOIA cases were the very first FOIA cases filed by members of the news media. Almost as many new FOIA litigators filed their first case in court in the past four years—178 from 2017 to 2020—as in the years 2001 to 2016, when 196 FOIA litigators filed their first case. Reporters made up the majority of these. During the past four years, more than four out of five first-time litigators were individual reporters. The ranks of FOIA litigators thus expanded considerably during the Trump Administration, with more reporters challenging agencies in court for failing to provide records they are seeking, either alone or with their news organizations.

Using the FOIA Project’s unique dataset of FOIA cases filed in federal court, this report provides unprecedented and valuable insight into the rapid growth of media lawsuits designed to make the government more transparent and accountable to the public. The complete, updated list of news media cases, along with the names of organizations and reporters who filed these suits, is available on the News Media List at FOIAProject.org. Figure 1 shows the total number of FOIA cases filed by the news media each year. Counts are available in Appendix Table 1 at the end of this report….(More)”.

Figure 1. Freedom of Information Act (FOIA) Cases Filed by News Organizations and Reporters in Federal Court, 2001–2020.

2020 was the year activists mastered hashtag flooding


Nicole Gallucci at Mashable: “A lone hashtag might not look very mighty, but when used en masse, the symbols can become incredibly powerful activism tools.

Over the past two decades — largely since product designer Chris Messina pitched hashtags to Twitter in 2007 — activists have learned to harness the symbols to form online communities, raise awareness on pressing issues, organize protests, shape digital narratives, and redirect social media discourse. 

On any given day, a series of hashtags is spotlighted in the “Trending” section of Twitter. The hashtags featured are those that have gained traction online and reflect topics being heavily discussed in the moment. More often than not, a trending hashtag’s popularity is organic, but a hashtag’s origin and initial purpose can become clouded when people partake in a clever tactic called hashtag flooding.

Hashtag flooding, or the act of hijacking a hashtag on social media platforms to change its meaning, has been around for years. But in 2020, particularly in the months leading up to the presidential election, activists and social media users looking to make their voices heard used the technique to drown out hateful narratives.

From K-pop fans flooding Donald Trump-related hashtags to members of the gay community putting their own spin on the #ProudBoys hashtag, the method of online communication dominated timelines this year and should be in every activist’s playbook….(More)”.