‘Belonging Is Stronger Than Facts’: The Age of Misinformation


Max Fisher at the New York Times: “There’s a decent chance you’ve had at least one of these rumors, all false, relayed to you as fact recently: that President Biden plans to force Americans to eat less meat; that Virginia is eliminating advanced math in schools to advance racial equality; and that border officials are mass-purchasing copies of Vice President Kamala Harris’s book to hand out to refugee children.

All were amplified by partisan actors. But you’re just as likely, if not more so, to have heard it relayed from someone you know. And you may have noticed that these cycles of falsehood-fueled outrage keep recurring.

We are in an era of endemic misinformation — and outright disinformation. Plenty of bad actors are helping the trend along. But the real drivers, some experts believe, are social and psychological forces that make people prone to sharing and believing misinformation in the first place. And those forces are on the rise.

“Why are misperceptions about contentious issues in politics and science seemingly so persistent and difficult to correct?” Brendan Nyhan, a Dartmouth College political scientist, posed in a new paper in Proceedings of the National Academy of Sciences.

It’s not for want of good information, which is ubiquitous. Exposure to good information does not reliably instill accurate beliefs anyway. Rather, Dr. Nyhan writes, a growing body of evidence suggests that the ultimate culprits are “cognitive and memory limitations, directional motivations to defend or support some group identity or existing belief, and messages from other people and political elites.”

Put more simply, people become more prone to misinformation when three things happen. First, and perhaps most important, is when conditions in society make people feel a greater need for what social scientists call ingrouping — a belief that their social identity is a source of strength and superiority, and that other groups can be blamed for their problems.

As much as we like to think of ourselves as rational beings who put truth-seeking above all else, we are social animals wired for survival. In times of perceived conflict or social change, we seek security in groups. And that makes us eager to consume information, true or not, that lets us see the world as a conflict pitting our righteous ingroup against a nefarious outgroup….(More)”.

Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation


Paper by Zach Bastick: “A growing literature is emerging on the believability and spread of disinformation, such as fake news, over social networks. However, little is known about the degree to which malicious actors can use social media to covertly affect behavior with disinformation. A lab-based randomized controlled experiment was conducted with 233 undergraduate students to investigate the behavioral effects of fake news. It was found that even short (under 5-min) exposure to fake news was able to significantly modify the unconscious behavior of individuals. This paper provides initial evidence that fake news can be used to covertly modify behavior; it argues that current approaches to mitigating fake news, and disinformation in general, are insufficient to protect social media users from this threat; and it highlights the implications of this for democracy. It raises the need for an urgent cross-sectoral effort to investigate, protect against, and mitigate the risks of covert, widespread and decentralized behavior modification over online social networks….(More)”

You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape


Book by Whitney Phillips and Ryan M. Milner: “Our media environment is in crisis. Polarization is rampant. Polluted information floods social media. Even our best efforts to help clean up can backfire, sending toxins roaring across the landscape. In You Are Here, Whitney Phillips and Ryan Milner offer strategies for navigating increasingly treacherous information flows. Using ecological metaphors, they emphasize how our individual me is entwined within a much larger we, and how everyone fits within an ever-shifting network map.

Phillips and Milner describe how our poisoned media landscape came into being, beginning with the Satanic Panics of the 1980s and 1990s—which, they say, exemplify “network climate change”—and proceeding through the emergence of trolling culture and the rise of the reactionary far right (as well as its amplification by journalists) during and after the 2016 election. They explore the history of conspiracy theories in the United States, focusing on those concerning the Deep State; explain why old media literacy solutions fail to solve new media literacy problems; and suggest how we can navigate the network crisis more thoughtfully, effectively, and ethically. We need a network ethics that looks beyond the messages and the messengers to investigate toxic information’s downstream effects….(More)”.

How We Built a Facebook Feed Viewer


Citizen Browser at The Markup: “Our interactive dashboard, Split Screen, gives readers a peek into the content Facebook delivered to people of different demographic backgrounds and voting preferences who participated in our Citizen Browser project. 

Using Citizen Browser, our custom Facebook inspector, we perform daily captures of Facebook data from paid panelists. These captures collect the content that was displayed on their Facebook feeds at the moment the app performed its automated capture. From Dec. 1, 2020, to March 2, 2021, 2,601 paid participants contributed their data to the project. 

To measure what Facebook’s recommendation algorithm displays to different groupings of people, we compare data captured from each grouping over a two-week period. We look at three different pairings:

  • Women vs. Men
  • Biden Voters vs. Trump Voters
  • Millennials vs. Boomers 

We labeled our panelists based on their self-disclosed political leanings, gender, and age. We describe each pairing in more detail in the Pairings section of this article. 

For each pair, we examine four types of content served by Facebook: news sources, posts with news links, hashtags, and group recommendations. We compare the percentage of each grouping that was served each piece of content to that of the other grouping in the pair.  
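
To make that comparison concrete, here is a minimal sketch (not The Markup’s actual code) of how a per-grouping percentage comparison could be computed, assuming daily captures are stored as rows pairing a panelist and their grouping with a content item they were served; all column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical capture data: one row per (panelist, content item) observation.
captures = pd.DataFrame([
    {"panelist_id": 1, "grouping": "Biden voters", "content": "example.com/story-a"},
    {"panelist_id": 2, "grouping": "Biden voters", "content": "example.com/story-b"},
    {"panelist_id": 3, "grouping": "Trump voters", "content": "example.com/story-a"},
    {"panelist_id": 4, "grouping": "Trump voters", "content": "example.com/story-a"},
])

# How many distinct panelists belong to each grouping.
panel_sizes = captures.groupby("grouping")["panelist_id"].nunique()

# For each content item, count the distinct panelists in each grouping who were served it.
seen = (
    captures.groupby(["content", "grouping"])["panelist_id"]
    .nunique()
    .unstack(fill_value=0)
)

# Convert counts to the percentage of each grouping's panel that was served the item,
# then take the difference between the two groupings in the pair.
pct_served = seen.div(panel_sizes, axis=1) * 100
pct_served["gap"] = pct_served["Biden voters"] - pct_served["Trump voters"]
print(pct_served)
```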

For more information on the data we collect, the panel’s demographic makeup, and the extensive redaction process we undertake to preserve privacy, see our methodology, How We Built a Facebook Inspector.

Our observations should not be taken as proof of Facebook’s choosing to target specific content at specific demographic groups. There are many factors that influence any given person’s feed that we do not account for, including users’ friends and social networks….(More)”.

Using FOIA logs to develop news stories


Yilun Cheng at MuckRock: “In the fiscal year 2020, federal agencies received a total of 790,772 Freedom of Information Act (FOIA) requests. There are also tens of thousands of state and local agencies taking in and processing public record requests on a daily basis. Since most agencies keep a log of requests received, FOIA-minded reporters can find interesting story ideas by asking for and digging through the history of what other people are looking to obtain.

Some FOIA logs are posted on the websites of agencies that proactively release these records. Those that are not can be obtained through a FOIA request. There are a number of online resources that collect and store these documents, including MuckRock, the Black Vault, Government Attic and FOIA Land.

Sorting through a FOIA log can be challenging since formats differ from agency to agency. A well-maintained log might include comprehensive information on the names of the requesters, the records being asked for, the dates of the requests’ receipt and the agency’s responses, as shown, for example, in a log released by the U.S. Department of Health and Human Services: https://www.documentcloud.org/documents/20508483/annotations/2024702

But other departments –– the Cook County Department of Public Health, for instance –– might only send over a three-column spreadsheet with no descriptions of the nature of the requests: https://www.documentcloud.org/documents/20491259/annotations/2024703

As a result, learning how to negotiate with agencies and interpreting the content in their FOIA logs are crucial for journalists trying to understand the public record landscape. While some reporters only use FOIA logs to keep tabs on their competitors’ reporting interests, the potential of these documents goes far beyond this. Below are some tips for getting story inspiration from FOIA logs….(More)”.
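
For reporters who want to skim a large log programmatically rather than by hand, something like the following minimal sketch can surface recurring subjects and repeat requesters. The column names are hypothetical and would have to be adapted to each agency's format.

```python
import pandas as pd

# Load a hypothetical FOIA log export; real logs vary widely in layout,
# so the column names below would need to be adjusted per agency.
log = pd.read_csv("foia_log.csv")
log = log.rename(columns={
    "Requester Name": "requester",
    "Request Description": "description",
    "Date Received": "date_received",
})
log["date_received"] = pd.to_datetime(log["date_received"], errors="coerce")

# Crude keyword counts over request descriptions to spot recurring subjects.
for word in ["contract", "email", "complaint", "inspection", "budget"]:
    hits = log["description"].str.contains(word, case=False, na=False).sum()
    print(f"{word}: {hits} requests")

# Repeat requesters can hint at stories other newsrooms are already chasing.
print(log["requester"].value_counts().head(10))
```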

Negligence, Not Politics, Drives Most Misinformation Sharing


John Timmer at Wired: “…a small international team of researchers… decided to take a look at how a group of US residents decided on which news to share. Their results suggest that some of the standard factors that people point to when explaining the tsunami of misinformation—inability to evaluate information and partisan biases—aren’t having as much influence as most of us think. Instead, a lot of the blame gets directed at people just not paying careful attention.

The researchers ran a number of fairly similar experiments to get at the details of misinformation sharing. This involved panels of US-based participants recruited either through Mechanical Turk or via a survey population that provided a more representative sample of the US. Each panel had several hundred to over 1,000 individuals, and the results were consistent across different experiments, so there was a degree of reproducibility to the data.

To do the experiments, the researchers gathered a set of headlines and lead sentences from news stories that had been shared on social media. The set was evenly mixed between headlines that were clearly true and clearly false, and each of these categories was split again between those headlines that favored Democrats and those that favored Republicans.

One thing that was clear is that people are generally capable of judging the accuracy of the headlines. There was a 56 percentage point gap between how often an accurate headline was rated as true and how often a false headline was. People aren’t perfect—they still got things wrong fairly often—but they’re clearly quite a bit better at this than they’re given credit for.

The second thing is that ideology doesn’t really seem to be a major factor in driving judgements on whether a headline was accurate. People were more likely to rate headlines that agreed with their politics as true, but the difference here was only 10 percentage points. That’s significant (both societally and statistically), but it’s certainly not a large enough gap to explain the flood of misinformation.

But when the same people were asked about whether they’d share these same stories, politics played a big role, and the truth receded. The difference in intention to share between true and false headlines was only 6 percentage points. Meanwhile, whether or not a headline agreed with a person’s politics made a 20 percentage point difference. Putting it in concrete terms, the authors look at the false headline “Over 500 ‘Migrant Caravaners’ Arrested With Suicide Vests.” Only 16 percent of conservatives in the survey population rated it as true. But over half of them were amenable to sharing it on social media….(More)”.

The Techlash and Tech Crisis Communication


Book by Nirit Weiss-Blatt: “This book provides an in-depth analysis of the evolution of tech journalism. The emerging tech-backlash is a story of pendulum swings: We are currently in tech-dystopianism after a long period spent in tech-utopianism. Tech companies were used to ‘cheerleading’ coverage of product launches. This long tech-press honeymoon ended, and was replaced by a new era of mounting criticism focused on tech’s negative impact on society. When and why did tech coverage shift? How did tech companies respond to the rise of tech criticism?

The book depicts three main eras: Pre-Techlash, Techlash, and Post-Techlash. The reader is taken on a journey from computer magazines, through tech blogs to the upsurge of tech investigative reporting. It illuminates the profound changes in the power dynamics between the media and the tech giants it covers.

The interplay between tech journalism and tech PR has been underexplored. Through analyses of both tech media and the corporates’ crisis responses, this book examines the roots and characteristics of the Techlash, and provides explanations for ‘How did we get here?’. Insightful observations by tech journalists and tech public relations professionals are added to the research data, and together they tell the story of the Techlash. It includes theoretical and practical implications for both tech enthusiasts and critics….(More)”.

Far-right news sources on Facebook more engaging


Study by Laura Edelson, Minh-Kha Nguyen, Ian Goldstein, Oana Goga, Tobias Lauinger, and Damon McCoy: “Facebook has become a major way people find news and information in an increasingly politically polarized nation. We analyzed how users interacted with different types of posts promoted as news in the lead-up to and aftermath of the U.S. 2020 elections. We found that politically extreme sources tend to generate more interactions from users. In particular, content from sources rated as far-right by independent news rating services consistently received the highest engagement per follower of any partisan group. Additionally, frequent purveyors of far-right misinformation had on average 65% more engagement per follower than other far-right pages. We found:

  • Sources of news and information rated as far-right generate the highest average number of interactions per follower with their posts, followed by sources from the far-left, and then news sources closer to the center of the political spectrum.
  • Looking at the far-right, misinformation sources far outperform non-misinformation sources. Far-right sources designated as spreaders of misinformation had an average of 426 interactions per thousand followers per week, while non-misinformation sources had an average of 259 weekly interactions per thousand followers.
  • Engagement with posts from far-right and far-left news sources peaked around Election Day and again on January 6, the day of the certification of the electoral count and the U.S. Capitol riot. For posts from all other political leanings of news sources, the increase in engagement was much less intense.
  • Center and left partisan categories incur a misinformation penalty, while right-leaning sources do not. Center sources of misinformation, for example, performed about 70% worse than their non-misinformation counterparts. (Note: center sources of misinformation tend to be sites presenting as health news that have no obvious ideological orientation.)…(More)”.
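
As a rough illustration of the study's core metric, the sketch below (not the researchers' code) computes weekly interactions per thousand followers from page-level totals; the pages, follower counts, and interaction counts are invented for the example.

```python
import pandas as pd

# Hypothetical page-level data: one week of total interactions and follower counts.
pages = pd.DataFrame([
    {"page": "far-right misinformation page", "weekly_interactions": 8520, "followers": 20000},
    {"page": "far-right non-misinformation page", "weekly_interactions": 5180, "followers": 20000},
    {"page": "center news page", "weekly_interactions": 1400, "followers": 20000},
])

# The study's engagement metric: interactions per thousand followers per week.
pages["per_1k_followers_per_week"] = (
    pages["weekly_interactions"] / (pages["followers"] / 1000)
)

# 8,520 interactions on a 20,000-follower page works out to 426 per thousand
# followers, the same scale as the averages reported in the bullet points above.
print(pages[["page", "per_1k_followers_per_week"]])
```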

Liability of online platforms


European Parliament Think Tank: “Given the central role that online platforms (OPs) play in the digital economy, questions arise about their responsibility in relation to illegal/harmful content or products hosted in the frame of their operation. Against this background, this study reviews the main legal/regulatory challenges associated with OP operations and analyses the incentives for OPs, their users and third parties to detect and remove illegal/harmful and dangerous material, content and/or products. To create a functional classification which can be used for regulatory purposes, it discusses the notion of OPs and attempts to categorise them under multiple criteria. The study then maps and critically assesses the whole range of OP liabilities, taking hard and soft law, self-regulation and national legislation into consideration, whenever relevant. Finally, the study puts forward policy options for an efficient EU liability regime: (i) maintaining the status quo; (ii) awareness-raising and media literacy; (iii) promoting self-regulation; (iv) establishing co-regulation mechanisms and tools; (v) adopting statutory legislation; (vi) modifying OPs’ secondary liability by employing two different models – (a) by clarifying the conditions for liability exemptions provided by the e-Commerce Directive or (b) by establishing a harmonised regime of liability….(More)”.

A New Way to Inoculate People Against Misinformation


Article by Jon Roozenbeek, Melisa Basol, and Sander van der Linden: “From setting mobile phone towers on fire to refusing critical vaccinations, we know the proliferation of misinformation online can have massive, real-world consequences.

For those who want to avert those consequences, it makes sense to try and correct misinformation. But as we now know, misinformation—both intentional and unintentional—is difficult to fight once it’s out in the digital wild. The pace at which unverified (and often false) information travels makes any attempt to catch up to, retrieve, and correct it an ambitious endeavour. We also know that viral information tends to stick, that repeated misinformation is more likely to be judged as true, and that people often continue to believe falsehoods even after they have been debunked.

Instead of fighting misinformation after it’s already spread, some researchers have shifted their strategy: they’re trying to prevent it from going viral in the first place, an approach known as “prebunking.” Prebunking attempts to explain how people can resist persuasion by misinformation. Grounded in inoculation theory, the approach uses the analogy of biological immunization. Just as weakened exposure to a pathogen triggers antibody production, inoculation theory posits that pre-emptively exposing people to a weakened persuasive argument builds people’s resistance against future manipulation.

But while inoculation is a promising approach, it has its limitations. Traditional inoculation messages are issue-specific, and have often remained confined to the particular context that you want to inoculate people against. For example, an inoculation message might forewarn people that false information is circulating encouraging people to drink bleach as a cure for the coronavirus. Although that may help stop bleach drinking, this messaging doesn’t pre-empt misinformation about other fake cures. As a result, prebunking approaches haven’t easily adapted to the changing misinformation landscape, making them difficult to scale.

However, our research suggests that there may be another way to inoculate people that preserves the benefits of prebunking: it may be possible to build resistance against misinformation in general, rather than fighting it one piece at a time….(More)”.