European Parliament Think Tank: “Given the central role that online platforms (OPs) play in the digital economy, questions arise about their responsibility in relation to illegal/harmful content or products hosted in the frame of their operation. Against this background, this study reviews the main legal/regulatory challenges associated with OP operations and analyses the incentives for OPs, their users and third parties to detect and remove illegal/harmful and dangerous material, content and/or products. To create a functional classification which can be used for regulatory purposes, it discusses the notion of OPs and attempts to categorise them under multiple criteria. The study then maps and critically assesses the whole range of OP liabilities, taking hard and soft law, self-regulation and national legislation into consideration, whenever relevant. Finally, the study puts forward policy options for an efficient EU liability regime: (i) maintaining the status quo; (ii) awareness-raising and media literacy; (iii) promoting self-regulation; (iv) establishing co-regulation mechanisms and tools; (v) adopting statutory legislation; (vi) modifying OPs’ secondary liability by employing two different models – (a) by clarifying the conditions for liability exemptions provided by the e-Commerce Directive or (b) by establishing a harmonised regime of liability….(More)”.
Article by Jon Roozenbeek, Melisa Basol, and Sander van der Linden: “From setting mobile phone towers on fire to refusing critical vaccinations, we know the proliferation of misinformation online can have massive, real-world consequences.
For those who want to avert those consequences, it makes sense to try and correct misinformation. But as we now know, misinformation—both intentional and unintentional—is difficult to fight once it’s out in the digital wild. The pace at which unverified (and often false) information travels makes any attempt to catch up to, retrieve, and correct it an ambitious endeavour. We also know that viral information tends to stick, that repeated misinformation is more likely to be judged as true, and that people often continue to believe falsehoods even after they have been debunked.
Instead of fighting misinformation after it’s already spread, some researchers have shifted their strategy: they’re trying to prevent it from going viral in the first place, an approach known as “prebunking.” Prebunking attempts to explain how people can resist persuasion by misinformation. Grounded in inoculation theory, the approach uses the analogy of biological immunization. Just as weakened exposure to a pathogen triggers antibody production, inoculation theory posits that pre-emptively exposing people to a weakened persuasive argument builds people’s resistance against future manipulation.
But while inoculation is a promising approach, it has its limitations. Traditional inoculation messages are issue-specific and often remain confined to the particular claim they are designed to counter. For example, an inoculation message might forewarn people that false information is circulating which encourages drinking bleach as a cure for the coronavirus. Although that may help stop bleach drinking, this messaging doesn’t pre-empt misinformation about other fake cures. As a result, prebunking approaches haven’t easily adapted to the changing misinformation landscape, making them difficult to scale.
However, our research suggests that there may be another way to inoculate people that preserves the benefits of prebunking: it may be possible to build resistance against misinformation in general, rather than fighting it one piece at a time….(More)”.
Paper by Henry Farrell and Abraham L. Newman: “Scholars and policymakers long believed that norms of global information openness and private-sector governance helped to sustain and promote liberalism. These norms are being increasingly contested within liberal democracies. In this article, we argue that a key source of debate over the Liberal International Information Order (LIIO), a sub-order of the Liberal International Order (LIO), is generated internally by “self-undermining feedback effects,” that is, mechanisms through which institutional arrangements undermine their own political conditions of survival over time. Empirically, we demonstrate how global governance of the Internet, transnational disinformation campaigns, and domestic information governance interact to sow the seeds of this contention. In particular, illiberal states converted norms of openness into a vector of attack, unsettling political bargains in liberal states concerning the LIIO. More generally, we set out a broader research agenda to show how the international relations discipline might better understand institutional change as well as the informational aspects of the current crisis in the LIO….(More)”
Elizabeth Dwoskin at the Washington Post: “A pilot program called Birdwatch lets selected users write corrections and fact checks on potentially misleading tweets…
The presidential election is over, but the fight against misinformation continues.
The latest volley in that effort comes from Twitter, which on Monday announced Birdwatch, a pilot project that uses crowdsourcing techniques to combat falsehoods and misleading statements on its service.
The pilot, which is open to only about 1,000 select users who can apply to be contributors, will allow people to write notes with corrections and accurate information directly into misleading tweets — a method that has the potential to get quality information to people more quickly than traditional fact-checking. Fact checks that are rated by other contributors as high quality may get bumped up or rewarded with greater visibility.
Birdwatch represents Twitter’s most experimental response to one of the biggest lessons that social media companies drew from the historic events of 2020: that their existing efforts to combat misinformation — including labeling, fact-checking and sometimes removing content — were not enough to prevent falsehoods about a stolen election or the coronavirus from reaching and influencing broad swaths of the population. Researchers who studied enforcement actions by social media companies last year found that fact checks and labels are usually implemented too late, after a post or a tweet has gone viral.
The Birdwatch project — which for the duration of the pilot will function as a separate website — is novel in that it attempts to build new mechanisms into Twitter’s product that foreground fact-checking by its community of 187 million daily users worldwide. Rather than having to comb through replies to tweets to sift through what’s true or false — or having Twitter employees append to a tweet a label providing additional context — users will be able to click on a separate notes folder attached to a tweet where they can see the consensus-driven responses from the community. Twitter will have a team reviewing winning responses to prevent manipulation, though a major question is whether any part of the process will be automated and therefore more easily gamed….(More)”
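The consensus mechanism described above — contributor notes gaining visibility when other contributors rate them as helpful — can be illustrated with a minimal sketch. This is a hypothetical illustration for exposition only, not Birdwatch’s actual algorithm: the `Note` structure, the `min_ratings` threshold, and the ratio-based score are all assumptions, and a real system would need manipulation-resistant scoring (e.g. rater reputation) of the kind Twitter says its review team is meant to guard.

```python
from dataclasses import dataclass

@dataclass
class Note:
    """A contributor-written fact-check note attached to a tweet (hypothetical model)."""
    text: str
    helpful: int = 0      # "helpful" ratings from other contributors
    not_helpful: int = 0  # "not helpful" ratings

    def score(self) -> float:
        # Naive proportion of helpful ratings; assumed scoring rule,
        # not the production algorithm.
        total = self.helpful + self.not_helpful
        return self.helpful / total if total else 0.0

def rank_notes(notes: list[Note], min_ratings: int = 5) -> list[Note]:
    """Surface only notes with enough ratings, highest-scored first."""
    eligible = [n for n in notes if n.helpful + n.not_helpful >= min_ratings]
    return sorted(eligible, key=lambda n: n.score(), reverse=True)

notes = [
    Note("Claim is false; see official results.", helpful=8, not_helpful=2),
    Note("Misleading context omitted.", helpful=3, not_helpful=1),  # too few ratings yet
    Note("Source is satire.", helpful=5, not_helpful=5),
]
ranked = rank_notes(notes)  # best-rated eligible note first
```

Even this toy version makes the article’s open question concrete: a purely automated threshold like `min_ratings` is exactly the kind of rule that coordinated raters could game, which is why human review of “winning” responses matters.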
Paper by Paolo Cavaliere: “This article claims that the practices undertaken by digital platforms to counter disinformation, under the EU Action Plan against Disinformation and the Code of Practice, mark a shift in the governance of news media content. While professional journalism standards have long been used, both within and outside the industry, to assess the accuracy of news content and adjudicate on media conduct, the platforms are now resorting to different fact-checking routines to moderate and curate their content.
The article will demonstrate how fact-checking organisations have different working methods than news operators and ultimately understand and assess ‘accuracy’ in different ways. As a result, this new and enhanced role for platforms and fact-checkers as curators of content impacts on how content is distributed to the audience and, thus, on media freedom. Depending on how the fact-checking standards and working routines will consolidate in the near future, however, this trend offers an actual opportunity to improve the quality of news and the right to receive information…(More)”.
Paper by David Andrew Logan: “New York Times v. Sullivan (1964) is an iconic decision, foundational to modern First Amendment theory, and in a string of follow-on decisions the Court firmly grounded free speech theory and practice in the need to protect democratic discourse. To do this the Court provided broad and deep protections to the publishers of falsehoods. This article recognizes that New York Times and its progeny made sense in the “public square” of an earlier era, but the justices could never have foreseen the dramatic changes in technology and the media environment in the years since, nor predict that by making defamation cases virtually impossible to win they were harming, rather than helping self-government. In part because of New York Times, the First Amendment has been weaponized, frustrating a basic requirement of a healthy democracy: the development of a set of broadly agreed-upon facts. Instead, we are subject to waves of falsehoods that swamp the ability of citizens to effectively self-govern. As a result, and despite its iconic status, New York Times needs to be reexamined and retooled to better serve our democracy….(More)”
Report by The FOIA Project: “The news media are powerful players in the world of government transparency and public accountability. One important tool for ensuring public accountability is through invoking transparency mandates provided by the Freedom of Information Act (FOIA). In 2020, news organizations and individual reporters filed 122 different FOIA suits to compel disclosure of federal government records—more than any year on record according to federal court data back to 2001 analyzed by the FOIA Project.
In fact, the media alone have filed a total of 386 FOIA cases during the four years of the Trump Administration, from 2017 through 2020. This is greater than the total of 311 FOIA media cases filed during the sixteen years of the Bush and Obama Administrations combined. Moreover, many of these FOIA cases were the very first FOIA cases filed by members of the news media. Almost as many new FOIA litigators filed their first case in court in the past four years—178 from 2017 to 2020—as in the years 2001 to 2016, when 196 FOIA litigators filed their first case. Reporters made up the majority of these. During the past four years, more than four out of five first-time litigators were individual reporters. The ranks of FOIA litigators thus expanded considerably during the Trump Administration, with more reporters challenging agencies in court for failing to provide records they are seeking, either alone or with their news organizations.
Using the FOIA Project’s unique dataset of FOIA cases filed in federal court, this report provides unprecedented and valuable insight into the rapid growth of media lawsuits designed to make the government more transparent and accountable to the public. The complete, updated list of news media cases, along with the names of organizations and reporters who filed these suits, is available on the News Media List at FOIAProject.org. Figure 1 shows the total number of FOIA cases filed by the news media each year. Counts are available in Appendix Table 1 at the end of this report….(More)”.
Nicole Gallucci at Mashable: “A lone hashtag might not look very mighty, but when used en masse, the symbols can become incredibly powerful activism tools.
Over the past two decades — largely since product designer Chris Messina pitched hashtags to Twitter in 2007 — activists have learned to harness the symbols to form online communities, raise awareness on pressing issues, organize protests, shape digital narratives, and redirect social media discourse.
On any given day, a series of hashtags are spotlighted in the “Trending” section of Twitter. The hashtags featured are those that have gained traction online and reflect topics being heavily discussed in the moment. More often than not, a trending hashtag’s popularity is organic, but a hashtag’s origin and initial purpose can become clouded when people partake in a clever tactic called hashtag flooding.
Hashtag flooding, or the act of hijacking a hashtag on social media platforms to change its meaning, has been around for years. But in 2020, particularly in the months leading up to the presidential election, activists and social media users looking to make their voices heard used the technique to drown out hateful narratives.
From K-pop fans flooding Donald Trump-related hashtags to members of the gay community putting their own spin on the #ProudBoys hashtag, the method of online communication dominated timelines this year and should be in every activist’s playbook….(More)”.
Report by the Reuters Institute for the Study of Journalism: “…Trust in news has eroded worldwide. According to the Reuters Institute’s Digital News Report 2020, fewer than four in ten people (38%) across 40 markets say they typically trust most news (Newman et al. 2020). While trust has fallen by double digit margins in recent years in many places, including Brazil and the United Kingdom (Fletcher 2020), in other countries more stable overall trends conceal stark and growing partisan divides (see, for example, Jurkowitz et al. 2020).
Why is trust eroding, how does it play out across different contexts and different groups, what are the implications, and what might be done about it? These are the organising questions behind the Trust in News Project. This report is the first of many we will publish from the project over the next three years. Because trust is a relationship between trustors and trustees, we anticipate focusing primarily on audiences and the way they think about trust, but we begin the project by taking stock of how those who study journalism and those who practice it think about the subject. We want to be informed by their experiences and for our research to engage with how professional journalists and the news media approach trust so that it can be more useful in their work. Combining an extensive review of existing research on trust in news (including nearly 200 interdisciplinary publications) and original interviews on the subject (including 82 with journalists and other practitioners across several countries), we summarise some of what is known and unknown about trust, what is contributing to these trends, and how media organisations are seeking to address them in increasingly competitive digital environments.
Trust is not an abstract concern but part of the social foundations of journalism as a profession, news as an institution, and the media as a business. It is both important and dangerous, both for the public and for the news media – important for the public because being able to trust news helps people navigate and engage with the world, but dangerous because not everything is equally trustworthy; and important for the news media because the profession relies on it, but dangerous because it can be elusive and hard to regain when lost.
So if ‘trust is the new currency for success’, as the World Association of News Media has stated (Tjaardstra 2017), then how is it earned and what can this currency buy? For those who want to regain or retain it, it is not enough to do things that merely look good or feel good. Those things actually have to work or they risk making no difference, or worse, being counter-productive. Even when they do work, many of the choices involved in seeking to increase trust in accurate and reliable news may come with trade-offs. Our aim in the project is to gather actionable evidence to help journalists and news media make informed decisions about how best to address concerns around eroding trust….(More)”.
Book by Petros Iosifidis and Nicholas Nicoli: “Digital Democracy, Social Media and Disinformation discusses some of the political, regulatory and technological issues which arise from the increased power of internet intermediaries (such as Facebook, Twitter and YouTube) and the impact of the spread of digital disinformation, especially in the midst of a health pandemic.
The volume provides a detailed account of the main areas surrounding digital democracy, disinformation and fake news, freedom of expression and post-truth politics. It addresses the major theoretical and regulatory concepts of digital democracy and the ‘network society’ before offering potential socio-political and technological solutions to the fight against disinformation and fake news. These solutions include self-regulation, rebuttals and myth-busting, news literacy, policy recommendations, awareness and communication strategies and the potential of recent technologies such as the blockchain and public interest algorithms to counter disinformation.
After addressing what has currently been done to combat disinformation and fake news, the volume argues that digital disinformation needs to be identified as a multifaceted problem, one that requires multiple approaches to resolve. Governments, regulators, think tanks, the academy and technology providers need to take more steps to shape the next internet with as little digital disinformation as possible, and the volume supports this argument through regional analysis. In this context, two cases concerning Russia and Ukraine are presented regarding disinformation and the ways it was handled….(More)”