Using FOIA logs to develop news stories


Yilun Cheng at MuckRock: “In the fiscal year 2020, federal agencies received a total of 790,772 Freedom of Information Act (FOIA) requests. There are also tens of thousands of state and local agencies taking in and processing public record requests on a daily basis. Since most agencies keep a log of requests received, FOIA-minded reporters can find interesting story ideas by asking for and digging through the history of what other people are looking to obtain.

Some FOIA logs are posted on the websites of agencies that proactively release these records. Those that are not can be obtained through a FOIA request. There are a number of online resources that collect and store these documents, including MuckRock, the Black Vault, Government Attic, and FOIA Land.

Sorting through a FOIA log can be challenging since formats differ from agency to agency. A more well-maintained log might include comprehensive information on the names of the requesters, the records being asked for, the dates of the requests’ receipt and the agency’s responses, as shown, for example, in a log released by the U.S. Department of Health and Human Services (https://www.documentcloud.org/documents/20508483/annotations/2024702).

But other departments, such as the Cook County Department of Public Health, might only send over a three-column spreadsheet with no description of the nature of the requests (https://www.documentcloud.org/documents/20491259/annotations/2024703).
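Because every agency structures its log differently, reporters who work with these files often map them onto a common schema before comparing them. Below is a minimal sketch of that idea using only the standard library; the column aliases and the `normalize_log` helper are hypothetical illustrations, not part of any agency's actual format.

```python
import csv
import io

# Hypothetical column aliases: real FOIA logs use many different headers,
# so map whatever an agency provides onto one shared schema.
ALIASES = {
    "request number": "request_id", "tracking number": "request_id",
    "requester": "requester", "requester name": "requester",
    "date received": "date_received", "received": "date_received",
    "description": "subject", "subject": "subject",
    "records requested": "subject",
}

def normalize_log(raw_csv: str) -> list[dict]:
    """Read one agency's FOIA log and map its columns onto the shared schema."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        out = {}
        for col, value in row.items():
            key = ALIASES.get(col.strip().lower())
            if key:  # silently drop columns we don't recognize
                out[key] = value.strip()
        rows.append(out)
    return rows

# A minimal three-column log, like the bare-bones spreadsheet described above:
sample = "Tracking Number,Requester,Date Received\n21-001,J. Doe,2021-01-04\n"
print(normalize_log(sample))
# → [{'request_id': '21-001', 'requester': 'J. Doe', 'date_received': '2021-01-04'}]
```

Once logs from several agencies share one schema, it becomes straightforward to search them in bulk for recurring topics or frequent requesters.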

As a result, learning how to negotiate with agencies and interpret the content of their FOIA logs is crucial for journalists trying to understand the public record landscape. While some reporters use FOIA logs only to keep tabs on their competitors’ reporting interests, the potential of these documents goes far beyond this. Below are some tips for getting story inspiration from FOIA logs….(More)”.

Negligence, Not Politics, Drives Most Misinformation Sharing


John Timmer at Wired: “…a small international team of researchers… decided to take a look at how a group of US residents decided on which news to share. Their results suggest that some of the standard factors that people point to when explaining the tsunami of misinformation—inability to evaluate information and partisan biases—aren’t having as much influence as most of us think. Instead, a lot of the blame gets directed at people just not paying careful attention.

The researchers ran a number of fairly similar experiments to get at the details of misinformation sharing. This involved panels of US-based participants recruited either through Mechanical Turk or via a survey population that provided a more representative sample of the US. Each panel had several hundred to over 1,000 individuals, and the results were consistent across different experiments, so there was a degree of reproducibility to the data.

To do the experiments, the researchers gathered a set of headlines and lead sentences from news stories that had been shared on social media. The set was evenly mixed between headlines that were clearly true and clearly false, and each of these categories was split again between those headlines that favored Democrats and those that favored Republicans.

One thing that was clear is that people are generally capable of judging the accuracy of the headlines. There was a 56 percentage point gap between how often an accurate headline was rated as true and how often a false headline was. People aren’t perfect—they still got things wrong fairly often—but they’re clearly quite a bit better at this than they’re given credit for.

The second thing is that ideology doesn’t really seem to be a major factor in driving judgments on whether a headline was accurate. People were more likely to rate headlines that agreed with their politics as true, but the difference here was only 10 percentage points. That’s significant (both societally and statistically), but it’s certainly not a large enough gap to explain the flood of misinformation.

But when the same people were asked about whether they’d share these same stories, politics played a big role, and the truth receded. The difference in intention to share between true and false headlines was only 6 percentage points. Meanwhile, whether a headline agreed with a person’s politics made a 20 percentage point difference. Putting it in concrete terms, the authors look at the false headline “Over 500 ‘Migrant Caravaners’ Arrested With Suicide Vests.” Only 16 percent of conservatives in the survey population rated it as true. But over half of them were amenable to sharing it on social media….(More)”.

The Techlash and Tech Crisis Communication


Book by Nirit Weiss-Blatt: “This book provides an in-depth analysis of the evolution of tech journalism. The emerging tech-backlash is a story of pendulum swings: We are currently in tech-dystopianism after a long period spent in tech-utopianism. Tech companies were used to ‘cheerleading’ coverage of product launches. This long tech-press honeymoon ended, and was replaced by a new era of mounting criticism focused on tech’s negative impact on society. When and why did tech coverage shift? How did tech companies respond to the rise of tech criticism?

The book depicts three main eras: Pre-Techlash, Techlash, and Post-Techlash. The reader is taken on a journey from computer magazines, through tech blogs to the upsurge of tech investigative reporting. It illuminates the profound changes in the power dynamics between the media and the tech giants it covers.

The interplay between tech journalism and tech PR has been underexplored. Through analyses of both tech media and the corporations’ crisis responses, this book examines the roots and characteristics of the Techlash and offers explanations for ‘How did we get here?’. Insightful observations from tech journalists and tech public relations professionals are added to the research data, and together they tell the story of the Techlash. It includes theoretical and practical implications for both tech enthusiasts and critics….(More)”.

Far-right news sources on Facebook more engaging


Study by Laura Edelson, Minh-Kha Nguyen, Ian Goldstein, Oana Goga, Tobias Lauinger, and Damon McCoy: “Facebook has become a major way people find news and information in an increasingly politically polarized nation. We analyzed how users interacted with different types of posts promoted as news in the lead-up to and aftermath of the U.S. 2020 elections. We found that politically extreme sources tend to generate more interactions from users. In particular, content from sources rated as far-right by independent news rating services consistently received the highest engagement per follower of any partisan group. Additionally, frequent purveyors of far-right misinformation had on average 65% more engagement per follower than other far-right pages. We found:

  • Sources of news and information rated as far-right generate the highest average number of interactions per follower with their posts, followed by sources from the far-left, and then news sources closer to the center of the political spectrum.
  • Looking at the far-right, misinformation sources far outperform non-misinformation sources. Far-right sources designated as spreaders of misinformation had an average of 426 interactions per thousand followers per week, while non-misinformation sources had an average of 259 weekly interactions per thousand followers.
  • Engagement with posts from far-right and far-left news sources peaked around Election Day and again on January 6, the day of the certification of the electoral count and the U.S. Capitol riot. For posts from all other political leanings of news sources, the increase in engagement was much less intense.
  • Center and left partisan categories incur a misinformation penalty, while right-leaning sources do not. Center sources of misinformation, for example, performed about 70% worse than their non-misinformation counterparts. (Note: center sources of misinformation tend to be sites presenting as health news that have no obvious ideological orientation.)…(More)”.
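The study's headline "65% more engagement" figure follows directly from the per-follower rates reported in the bullets above; a quick arithmetic check (using only numbers quoted from the study) confirms the two statements are consistent:

```python
# Weekly interactions per thousand followers, as reported in the study:
misinfo_rate = 426      # far-right sources designated as misinformation spreaders
non_misinfo_rate = 259  # other far-right sources

# Relative engagement advantage of misinformation sources:
advantage = (misinfo_rate - non_misinfo_rate) / non_misinfo_rate
print(f"{advantage:.0%}")  # → 64%, i.e. roughly the 65% advantage reported
```

The small discrepancy (64% vs. 65%) is presumably rounding in the published per-thousand-follower rates.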

Liability of online platforms


European Parliament Think Tank: “Given the central role that online platforms (OPs) play in the digital economy, questions arise about their responsibility in relation to illegal/harmful content or products hosted in the frame of their operation. Against this background, this study reviews the main legal/regulatory challenges associated with OP operations and analyses the incentives for OPs, their users and third parties to detect and remove illegal/harmful and dangerous material, content and/or products. To create a functional classification which can be used for regulatory purposes, it discusses the notion of OPs and attempts to categorise them under multiple criteria. The study then maps and critically assesses the whole range of OP liabilities, taking hard and soft law, self-regulation and national legislation into consideration, whenever relevant. Finally, the study puts forward policy options for an efficient EU liability regime: (i) maintaining the status quo; (ii) awareness-raising and media literacy; (iii) promoting self-regulation; (iv) establishing co-regulation mechanisms and tools; (v) adopting statutory legislation; (vi) modifying OPs’ secondary liability by employing two different models – (a) by clarifying the conditions for liability exemptions provided by the e-Commerce Directive or (b) by establishing a harmonised regime of liability….(More)”.

A New Way to Inoculate People Against Misinformation


Article by Jon Roozenbeek, Melisa Basol, and Sander van der Linden: “From setting mobile phone towers on fire to refusing critical vaccinations, we know the proliferation of misinformation online can have massive, real-world consequences.

For those who want to avert those consequences, it makes sense to try and correct misinformation. But as we now know, misinformation—both intentional and unintentional—is difficult to fight once it’s out in the digital wild. The pace at which unverified (and often false) information travels makes any attempt to catch up to, retrieve, and correct it an ambitious endeavour. We also know that viral information tends to stick, that repeated misinformation is more likely to be judged as true, and that people often continue to believe falsehoods even after they have been debunked.

Instead of fighting misinformation after it’s already spread, some researchers have shifted their strategy: they’re trying to prevent it from going viral in the first place, an approach known as “prebunking.” Prebunking attempts to explain how people can resist persuasion by misinformation. Grounded in inoculation theory, the approach uses the analogy of biological immunization. Just as weakened exposure to a pathogen triggers antibody production, inoculation theory posits that pre-emptively exposing people to a weakened persuasive argument builds people’s resistance against future manipulation.

But while inoculation is a promising approach, it has its limitations. Traditional inoculation messages are issue-specific and have often remained confined to the particular context that you want to inoculate people against. For example, an inoculation message might forewarn people that false information is circulating that encourages drinking bleach as a cure for the coronavirus. Although that may help stop bleach drinking, this messaging doesn’t pre-empt misinformation about other fake cures. As a result, prebunking approaches haven’t easily adapted to the changing misinformation landscape, making them difficult to scale.

However, our research suggests that there may be another way to inoculate people that preserves the benefits of prebunking: it may be possible to build resistance against misinformation in general, rather than fighting it one piece at a time….(More)”.

The Janus Face of the Liberal International Information Order: When Global Institutions Are Self-Undermining


Paper by Henry Farrell and Abraham L. Newman: “Scholars and policymakers long believed that norms of global information openness and private-sector governance helped to sustain and promote liberalism. These norms are being increasingly contested within liberal democracies. In this article, we argue that a key source of debate over the Liberal International Information Order (LIIO), a sub-order of the Liberal International Order (LIO), is generated internally by “self-undermining feedback effects,” that is, mechanisms through which institutional arrangements undermine their own political conditions of survival over time. Empirically, we demonstrate how global governance of the Internet, transnational disinformation campaigns, and domestic information governance interact to sow the seeds of this contention. In particular, illiberal states converted norms of openness into a vector of attack, unsettling political bargains in liberal states concerning the LIIO. More generally, we set out a broader research agenda to show how the international relations discipline might better understand institutional change as well as the informational aspects of the current crisis in the LIO….(More)”

Twitter’s misinformation problem is much bigger than Trump. The crowd may help solve it.


Elizabeth Dwoskin at the Washington Post: “A pilot program called Birdwatch lets selected users write corrections and fact checks on potentially misleading tweets…

The presidential election is over, but the fight against misinformation continues.

The latest volley in that effort comes from Twitter, which on Monday announced Birdwatch, a pilot project that uses crowdsourcing techniques to combat falsehoods and misleading statements on its service.

The pilot, which is open to only about 1,000 select users who can apply to be contributors, will allow people to write notes with corrections and accurate information directly into misleading tweets — a method that has the potential to get quality information to people more quickly than traditional fact-checking. Fact checks that are rated by other contributors as high quality may get bumped up or rewarded with greater visibility.

Birdwatch represents Twitter’s most experimental response to one of the biggest lessons that social media companies drew from the historic events of 2020: that their existing efforts to combat misinformation — including labeling, fact-checking and sometimes removing content — were not enough to prevent falsehoods about a stolen election or the coronavirus from reaching and influencing broad swaths of the population. Researchers who studied enforcement actions by social media companies last year found that fact checks and labels are usually implemented too late, after a post or a tweet has gone viral.

The Birdwatch project — which for the duration of the pilot will function as a separate website — is novel in that it attempts to build new mechanisms into Twitter’s product that foreground fact-checking by its community of 187 million daily users worldwide. Rather than having to comb through replies to tweets to sift through what’s true or false — or having Twitter employees append to a tweet a label providing additional context — users will be able to click on a separate notes folder attached to a tweet where they can see the consensus-driven responses from the community. Twitter will have a team reviewing winning responses to prevent manipulation, though a major question is whether any part of the process will be automated and therefore more easily gamed….(More)”
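The consensus mechanism described above (contributor notes that surface only once enough raters judge them helpful) can be illustrated with a toy sketch. The field names, the minimum-ratings rule, and the 0.8 threshold below are all assumptions for illustration; Twitter's actual ranking algorithm is not public in this form.

```python
from dataclasses import dataclass

@dataclass
class Note:
    """One contributor-written fact-check note attached to a tweet."""
    text: str
    helpful: int = 0      # ratings from other contributors
    not_helpful: int = 0

    @property
    def score(self) -> float:
        total = self.helpful + self.not_helpful
        # Require a minimum number of ratings before a note can surface,
        # which makes the system slightly harder to game with a single vote.
        if total < 5:
            return 0.0
        return self.helpful / total

def top_notes(notes: list[Note], threshold: float = 0.8) -> list[Note]:
    """Return notes whose helpfulness ratio clears the consensus threshold."""
    return sorted(
        (n for n in notes if n.score >= threshold),
        key=lambda n: n.score,
        reverse=True,
    )

notes = [
    Note("Misleading: the cited source is a satire site", helpful=9, not_helpful=1),
    Note("fake!!", helpful=1),  # too few ratings to surface yet
]
for note in top_notes(notes):
    print(note.text)
# → Misleading: the cited source is a satire site
```

Even this toy version shows why the review team matters: any purely ratio-based rule can be gamed by coordinated raters, which is the manipulation risk the article raises.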

From Journalistic Ethics To Fact-Checking Practices: Defining The Standards Of Content Governance In The Fight Against Disinformation


Paper by Paolo Cavaliere: “This article claims that the practices undertaken by digital platforms to counter disinformation, under the EU Action Plan against Disinformation and the Code of Practice, mark a shift in the governance of news media content. While professional journalism standards have long been used, both within and outside the industry, to assess the accuracy of news content and adjudicate on media conduct, the platforms are now resorting to different fact-checking routines to moderate and curate their content.
The article will demonstrate how fact-checking organisations have different working methods than news operators and ultimately understand and assess ‘accuracy’ in different ways. As a result, this new and enhanced role for platforms and fact-checkers as curators of content impacts on how content is distributed to the audience and, thus, on media freedom. Depending on how the fact-checking standards and working routines will consolidate in the near future, however, this trend offers an actual opportunity to improve the quality of news and the right to receive information…(More)”.

Rescuing Our Democracy by Rethinking New York Times Co. v. Sullivan


Paper by David Andrew Logan: “New York Times v. Sullivan (1964) is an iconic decision, foundational to modern First Amendment theory, and in a string of follow-on decisions the Court firmly grounded free speech theory and practice in the need to protect democratic discourse. To do this the Court provided broad and deep protections to the publishers of falsehoods. This article recognizes that New York Times and its progeny made sense in the “public square” of an earlier era, but the justices could never have foreseen the dramatic changes in technology and the media environment in the years since, nor predict that by making defamation cases virtually impossible to win they were harming, rather than helping self-government. In part because of New York Times, the First Amendment has been weaponized, frustrating a basic requirement of a healthy democracy: the development of a set of broadly agreed-upon facts. Instead, we are subject to waves of falsehoods that swamp the ability of citizens to effectively self-govern. As a result, and despite its iconic status, New York Times needs to be reexamined and retooled to better serve our democracy….(More)”