Social-media reform is flying blind


Paper by Chris Bail: “As Russia continues its ruthless war in Ukraine, pundits are speculating what social-media platforms might have done years ago to undermine propaganda well before the attack. Amid accusations that social media fuels political violence — and even genocide — it is easy to forget that Facebook evolved from a site for university students to rate each other’s physical attractiveness. Instagram was founded to facilitate alcohol-based gatherings. TikTok and YouTube were built to share funny videos.

The world’s social-media platforms are now among the most important forums for discussing urgent social problems, such as Russia’s invasion of Ukraine, COVID-19 and climate change. Techno-idealists continue to promise that these platforms will bring the world together — despite mounting evidence that they are pulling us apart.

Efforts to regulate social media have largely stalled, perhaps because no one knows what something better would look like. If we could hit ‘reset’ and redesign our platforms from scratch, could we make them strengthen civil society?

Researchers have a hard time studying such questions. Most corporations want to ensure studies serve their business model and avoid controversy. They don’t share much data. And getting answers requires not just making observations, but doing experiments.

In 2017, I co-founded the Polarization Lab at Duke University in Durham, North Carolina. We have created a social-media platform for scientific research. On it, we can turn features on and off, and introduce new ones, to identify those that improve social cohesion. We have recruited thousands of people to interact with each other on these platforms, alongside bots that can simulate social-media users.

We hope our effort will help to evaluate some of the most basic premises of social media. For example, tech leaders have long measured success by the number of connections people have. Anthropologist Robin Dunbar has suggested that humans struggle to maintain meaningful relationships with more than 150 people. Experiments could encourage some social-media users to create deeper connections with a small group of users while allowing others to connect with anyone. Researchers could investigate the optimal number of connections in different situations, to work out how to optimize breadth of relationships without sacrificing depth.

A related question is whether social-media platforms should be customized for different societies or groups. Although today’s platforms seem to have largely negative effects on politics in the United States and Western Europe, the opposite might be true in emerging democracies (P. Lorenz-Spreen et al. Preprint at https://doi.org/hmq2; 2021). One study suggested that Facebook could reduce ethnic tensions in Bosnia–Herzegovina (N. Asimovic et al. Proc. Natl Acad. Sci. USA 118, e2022819118; 2021), and social media has helped Ukraine to rally support around the world for its resistance….(More)”.

The need to represent: How AI can help counter gender disparity in the news


Blog by Sabrina Argoub: “For the first in our new series of JournalismAI Community Workshops, we decided to look at three recent projects that demonstrate how AI can help raise awareness of the misrepresentation of women in the news.

The Political Misogynistic Discourse Monitor is a web application and API that journalists from AzMina, La Nación, CLIP, and DataCrítica developed to uncover hate speech against women on Twitter.

When Women Make Headlines is an analysis by The Pudding of the (mis)representation of women in news headlines, and how it has changed over time. 

In the AIJO project, journalists from eight different organisations worked together to identify and mitigate biases in gender representation in news. 

We invited Bàrbara Libório of AzMina, Sahiti Sarva of The Pudding, and Delfina Arambillet of La Nación to walk us through their projects and share insights on what they learned and how they taught the machine to recognise what constitutes bias and hate speech….(More)”.

Controversy Mapping: A Field Guide


Book by Tommaso Venturini and Anders Kristian Munk: “As disputes concerning the environment, the economy, and pandemics occupy public debate, we need to learn to navigate matters of public concern when facts are in doubt and expertise is contested.

Controversy Mapping is the first book to introduce readers to the observation and representation of contested issues on digital media. Drawing on actor-network theory and digital methods, Venturini and Munk outline the conceptual underpinnings and the many tools and techniques of controversy mapping. They review its history in science and technology studies, discuss its methodological potential, and unfold its political implications. Through a range of cases and examples, they demonstrate how to chart actors and issues using digital fieldwork and computational techniques. A preface by Richard Rogers and an interview with Bruno Latour are also included.

A crucial field guide and hands-on companion for the digital age, Controversy Mapping is an indispensable resource for students and scholars of media and communication, as well as activists, journalists, citizens, and decision makers…(More)”.

How to avoid sharing bad information about Russia’s invasion of Ukraine


Abby Ohlheiser at MIT Technology Review: “The fast-paced online coverage of the Russian invasion of Ukraine on Wednesday followed a pattern that’s become familiar in other recent crises that have unfolded around the world. Photos, videos, and other information are posted and reshared across platforms much faster than they can be verified.

The result is that falsehoods are mistaken for truth and amplified, even by well-intentioned people. This can help bad actors to terrorize innocent civilians or advance disturbing ideologies, causing real harm.

Disinformation has been a prominent and explicit part of the Russian government’s campaign to justify the invasion. Russia falsely claimed that Ukrainian forces in Donbas, a region in southeastern Ukraine that harbors a large number of pro-Russian separatists, were planning violent attacks, engaging in antagonistic shelling, and committing genocide. Fake videos of those nonexistent attacks became part of a domestic propaganda campaign. (The US government, meanwhile, has been working to debunk and “prebunk” these lies.)

Meanwhile, even people who are not part of such government campaigns may intentionally share bad, misleading, or false information about the invasion to promote ideological narratives, or simply to harvest clicks, with little care about the harm they’re causing. In other cases, honest mistakes made amid the fog of war take off and go viral….

Your attention matters …

First, realize that what you do online makes a difference. “People often think that because they’re not influencers, they’re not politicians, they’re not journalists, that what they do [online] doesn’t matter,” Whitney Phillips, an assistant professor of communication and rhetorical studies at Syracuse University, told me in 2020. But it does matter. Sharing dubious information with even a small circle of friends and family can lead to its wider dissemination.

… and so do your angry quote tweets and duets.

While an urgent news story is developing, well-meaning people may quote, tweet, share, or duet with a post on social media to challenge and condemn it. Twitter and Facebook have introduced new rules, moderation tactics, and fact-checking provisions to try to combat misinformation. But interacting with misinformation at all risks amplifying the content you’re trying to minimize, because it signals to the platform that you find it interesting. Instead of engaging with a post you know to be wrong, try flagging it for review by the platform where you saw it.

Stop.

Mike Caulfield, a digital literacy expert, developed a method for evaluating online information that he calls SIFT: “Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original context.” When it comes to news about Ukraine, he says, the emphasis should be on “Stop”—that is, pause before you react to or share what you’re seeing….(More)”.

Russian disinformation frenzy seeds groundwork for Ukraine invasion


Zachary Basu and Sara Fischer at Axios: “Russia is testing its agility at weaponizing state media to win backing at home, in occupied territories in eastern Ukraine and with sympathizers abroad for a war of aggression.

The big picture: State media has pivoted from accusing the West of hysterical warnings about a non-existent invasion to pumping out minute-by-minute coverage of the tensions.

Zoom in: NewsGuard, a firm that tracks online misinformation, identified three of the most common false narratives being propagated by Russian state media outlets like RT, Sputnik News, and TASS:

  1. The West staged a coup in 2014 to overthrow the Ukrainian government
  2. Ukrainian politics is dominated by Nazi ideology
  3. Ethnic Russians in Ukraine’s Donbas region have been subjected to genocide

Between the lines: Social media platforms have been on high alert for Russian disinformation that would violate their policies but have less control over private messaging, where some propaganda efforts have moved to avoid detection.

  • A Twitter spokesperson notes: “As we do around major global events, our safety and integrity teams are monitoring for potential risks associated with conflicts to protect the health of the platform.”
  • YouTube’s threat analysis group and trust and safety teams have also been closely monitoring the situation in Ukraine. The platform’s policies ban misleading titles, thumbnails or descriptions that trick users into believing the content is something it is not….(More)”.

EU and US legislation seek to open up digital platform data


Article by Brandie Nonnecke and Camille Carlton: “Despite the potential societal benefits of granting independent researchers access to digital platform data, such as promotion of transparency and accountability, online platform companies have few legal obligations to do so and potentially stronger business incentives not to. Without legally binding mechanisms that provide greater clarity on what and how data can be shared with independent researchers in privacy-preserving ways, platforms are unlikely to share the breadth of data necessary for robust scientific inquiry and public oversight.

Here, we discuss two notable legislative efforts aimed at opening up platform data: the Digital Services Act (DSA), recently approved by the European Parliament, and the Platform Accountability and Transparency Act (PATA), recently proposed by several US senators. Although these efforts could support researchers’ access to data, they could also fall short in many ways, highlighting the complex challenges in mandating data access for independent research and oversight.

As large platforms take on increasingly influential roles in our online social, economic, and political interactions, there is a growing demand for transparency and accountability through mandated data disclosures. Research insights from platform data can help, for example, to understand unintended harms of platform use on vulnerable populations, such as children and marginalized communities; identify coordinated foreign influence campaigns targeting elections; and support public health initiatives, such as documenting the spread of anti-vaccine mis- and disinformation…(More)”.

Metrics at Work: Journalism and the Contested Meaning of Algorithms


Book by Angèle Christin: “When the news moved online, journalists suddenly learned what their audiences actually liked, through algorithmic technologies that scrutinize web traffic and activity. Has this advent of audience metrics changed journalists’ work practices and professional identities? In Metrics at Work, Angèle Christin documents the ways that journalists grapple with audience data in the form of clicks, and analyzes how new forms of clickbait journalism travel across national borders.

Drawing on four years of fieldwork in web newsrooms in the United States and France, including more than one hundred interviews with journalists, Christin reveals many similarities among the media groups examined—their editorial goals, technological tools, and even office furniture. Yet she uncovers crucial and paradoxical differences in how American and French journalists understand audience analytics and how these affect the news produced in each country. American journalists routinely disregard traffic numbers and primarily rely on the opinion of their peers to define journalistic quality. Meanwhile, French journalists fixate on internet traffic and view these numbers as a sign of their resonance in the public sphere. Christin offers cultural and historical explanations for these disparities, arguing that distinct journalistic traditions structure how journalists make sense of digital measurements in the two countries.

Contrary to the popular belief that analytics and algorithms are globally homogenizing forces, Metrics at Work shows that computational technologies can have surprisingly divergent ramifications for work and organizations worldwide…(More)”.

Why people believe misinformation and resist correction


TechPolicyPress: “…In Nature, a team of nine researchers from the fields of psychology and mass media & communication has published a review of available research on the factors that lead people to “form or endorse misinformed views, and the psychological barriers” to changing their minds….

The authors summarize what is known about a variety of drivers of false beliefs, noting that they “generally arise through the same mechanisms that establish accurate beliefs” and the human weakness for trusting the “gut”. For a variety of reasons, people develop shortcuts when processing information, often defaulting to conclusions rather than evaluating new information critically. A complex set of variables related to information sources, emotional factors and a variety of other cues can lead to the formation of false beliefs. And, people often share information with little focus on its veracity, but rather to accomplish other goals, from self-promotion to signaling group membership to simply sating a desire to ‘watch the world burn’.

Source: Nature Reviews Psychology, Volume 1, January 2022

Barriers to belief revision are also complex, since “the original information is not simply erased or replaced” once corrective information is introduced. There is evidence that misinformation can be “reactivated and retrieved” even after an individual receives accurate information that contradicts it. A variety of factors affect whether correct information can win out. One theory looks at how information is integrated in a person’s “memory network”. Another complementary theory looks at “selective retrieval” and is backed up by neuro-imaging evidence…(More)”.

The Crowdsourced Panopticon


Book by Jeremy Weissman: “Behind the omnipresent screens of our laptops and smartphones, a digitally networked public has quickly grown larger than the population of any nation on Earth. On the flipside, in front of the ubiquitous recording devices that saturate our lives, individuals are hyper-exposed through a worldwide online broadcast that encourages the public to watch, judge, rate, and rank people’s lives. The interplay of these two forces – the invisibility of the anonymous crowd and the exposure of the individual before that crowd – is a central focus of this book. Informed by critiques of conformity and mass media by some of the greatest philosophers of the past two centuries, as well as by a wide range of historical and empirical studies, Weissman helps shed light on what may happen when our lives are increasingly broadcast online for everyone all the time, to be judged by the global community…(More)”.

Updated Selected Readings on Inaccurate Data, Half-Truths, Disinformation, and Mob Violence


By Fiona Cece, Uma Kalkar, Stefaan Verhulst, and Andrew J. Zahuranec

As part of an ongoing effort to contribute to current topics in data, technology, and governance, The GovLab’s Selected Readings series provides an annotated and curated collection of recommended works on themes such as open data, data collaboration, and civic technology.

In this edition, we reflect on the one-year anniversary of the January 6, 2021 Capitol Hill Insurrection and its implications for how disinformation and data misuse can support malicious objectives. This selection builds on the previous edition, published last year, on misinformation’s effect on violence and riots. Readings are listed in alphabetical order. New additions are highlighted in green.

The mob attack on the US Congress was alarming and the result of various efforts to undermine trust in, and the legitimacy of, longstanding democratic processes and institutions. The use of inaccurate data, half-truths, and disinformation to spread hate and division is considered a key driver behind last year’s attack. Altering data to support conspiracy theories, or undermining the credibility of trusted data sources so that alternative narratives can flourish, has consequences if left unchallenged, including the increased acceptance and use of violence both offline and online.

The January 6th insurrection was unfortunately not a unique event, nor was it contained to the United States. While efforts to bring perpetrators of the attack to justice have been fruitful, much work remains to be done to address the willful dissemination of disinformation online. Below, we provide a curation of findings and readings that illustrate the global danger of inaccurate data, half-truths, and disinformation. In addition, The GovLab, in partnership with the OECD, has explored data-actionable questions around how disinformation can spread across and affect society, and ways to mitigate it. Learn more at disinformation.the100questions.org.

To suggest additional readings on this or any other topic, please email info@thelivinglib.org. All our Selected Readings can be found here.

Readings and Annotations

Al-Zaman, Md. Sayeed. “Digital Disinformation and Communalism in Bangladesh.” China Media Research 15, no. 2 (2019): 68–76.

  • Md. Sayeed Al-Zaman, Lecturer at Jahangirnagar University in Bangladesh, discusses how the country’s increasing number of “netizens” are being manipulated by online disinformation and inciting violence along religious lines. Social media helps quickly spread Anti-Hindu and Buddhist rhetoric, inflaming religious divisions between these groups and Bangladesh’s Muslim majority, impeding possibilities for “peaceful coexistence.”
  • The sheer volume of online information makes fact-checking difficult, and alluring stories that feed on people’s fears and anxieties are highly likely to be disseminated, spreading rumors across Bangladesh. Moreover, disruptors and politicians wield religion to target citizens’ emotionality and incite violence.
  • Al-Zaman recounts two instances of digital disinformation and communalism. First, in 2016, following a Facebook post supposedly criticizing Islam, riots destroyed 17 temples and 100 houses in Nasirnagar and led to protests in neighboring villages. While the exact source of the disinformation post was never confirmed, a man was beaten and jailed for it despite a lack of robust evidence of his wrongdoing. Second, in 2012, after a Facebook post circulating an image of someone desecrating the Quran tagged a Buddhist youth in the picture, 12 Buddhist monasteries and 100 houses in Ramu were destroyed. Organized through social media, a mob of over 6,000 people, including local Muslim community leaders, attacked the town of Ramu. Later investigation found that the image had been doctored and spread by an Islamic extremist group member in a coordinated attack, manipulating Islamic religious sentiment via fake news to target Buddhist minorities.

Banaji, Shakuntala, and Ram Bhat. “WhatsApp Vigilantes: An exploration of citizen reception and circulation of WhatsApp misinformation linked to mob violence in India.” London School of Economics and Political Science, 2019.

  • London School of Economics and Political Science Associate Professor Shakuntala Banaji and Researcher Ram Bhat articulate how groups facing discrimination (Dalits, Muslims, Christians, and Adivasis) have been targeted by peer-to-peer communications spreading allegations of bovine-related offenses, child-snatching, and organ harvesting, culminating in violence against these groups with fatal consequences.
  • WhatsApp messages work in tandem with ideas, tropes, messages, and stereotypes already in the public domain, providing “verification” of fake news.
  • WhatsApp use is gendered, and users are predisposed to believe and spread misinformation, particularly when it targets a group towards which they already hold negative and discriminatory feelings.
  • Among most WhatsApp users, civic trust is based on ideological, family, and community ties.
  • Restricting sharing, tracking, and reporting of misinformation using “beacon” features and imposing penalties on groups can serve to mitigate the harmful effects of fake news.

Funke, Daniel, and Susan Benkelman. “Misinformation is inciting violence around the world. And tech platforms don’t seem to have a plan to stop it.” Poynter, April 4, 2019.

  • Misinformation leading to violence has been on the rise worldwide. PolitiFact writer Daniel Funke and Susan Benkelman, former Director of Accountability Journalism at the American Press Institute, point to mob violence against Roma in France after rumors of kidnapping attempts circulated on Facebook and Snapchat; the immolation of two men in Puebla, Mexico, following fake news spread on WhatsApp about a gang of organ harvesters on the prowl; and false kidnapping claims sent through WhatsApp fueling lynch mobs in India.
  • Slow (re)action to fake news allows mis/disinformation to prey on vulnerable people and infiltrate society. Examples covered in the article discuss how fake news preys on older Americans who lack strong digital literacy. Virulent online rumors have made it difficult for citizens to separate fact from fiction during the Indian general election. Foreign adversaries like Russia are bribing Facebook users for their accounts in order to spread false political news in Ukraine.
  • The article notes that increases in violence caused by disinformation are doubly enabled by “a lack of proper law enforcement” and inaction by technology companies. Facebook, YouTube, and WhatsApp have no coordinated, comprehensive plans to fight fake news and attempt to shift responsibility to “fact-checking partners.” Troublingly, it appears that some platforms deliberately delay the removal of mis/disinformation to attract more engagement. These companies seem to remove misleading information only once they face intense pressure from policymakers.

Kyaw, Nyi Nyi. “Facebooking in Myanmar: From Hate Speech to Fake News to Partisan Political Communication.” ISEAS — Yusof Ishak Institute, no. 36 (2019): 1–10.

  • In the past decade, the number of plugged-in Myanmar citizens has skyrocketed to 39% of the population. All of these 21 million internet users are active on Facebook, where much political rhetoric occurs. Widespread fake news disseminated through Facebook has led to an increase in anti-Muslim sentiment and the spread of misleading, inflammatory headlines.
  • Attempts to curtail fake news on Facebook are difficult. In Myanmar, a developing country where “the rule of law is weak,” monitoring and regulation of social media are not easily enforceable. Criticism from the Myanmar government, foreign governments, and civil society organizations resulted in Facebook banning and suspending fake news accounts and pages and employing stricter, more invasive monitoring of citizens’ Facebook use — usually without their knowledge. However, despite Facebook’s key role in agitating and spreading fake news, no political or oversight bodies have “explicitly held the company accountable.”
  • Nyi Nyi Kyaw, Visiting Fellow at the Yusof Ishak Institute in Singapore, notes a cyber law initiative set in motion by the Myanmar government to strengthen social media monitoring methods but is wary of Myanmar’s “human and technological capacity” to enforce these regulations.

Lewandowsky, Stephan, and Sander van der Linden. “Countering Misinformation and Fake News Through Inoculation and Prebunking.” European Review of Social Psychology 32, no. 2 (2020): 348–384.

  • Researchers Stephan Lewandowsky and Sander van der Linden present a review of common instances of misinformation and conventional tools to combat it. They note the staying power and spread of sensational sound bites, especially in the political arena, and their real-life consequences for problems such as anti-vaccination campaigns, ethnically charged violence in Myanmar, and mob lynchings in India spurred by WhatsApp rumors.
  • To proactively stop misinformation, the authors introduce the psychological theory of “inoculation,” which forewarns people that they have been exposed to misinformation and alerts them to the ways by which they could be misled to make them more resilient to false information. The paper highlights numerous successes of inoculation in combating misinformation and presents it as a strategy to prevent disinformation-fueled violence.
  • The authors then discuss best strategies to deploy fake news inoculation and generate “herd” cognitive immunity in the face of microtargeting and filter bubbles online.

Osmundsen, Mathias, Alexander Bor, Peter Bjerregaard Vahlstrup, Anja Bechmann, and Michael Bang Petersen. “Partisan polarization is the primary psychological motivation behind ‘fake news’ sharing on Twitter.” American Political Science Review 115, no. 3 (2020): 999–1015.

  • Mathias Osmundsen and colleagues explore the proliferation of fake news on digital platforms. Are those who share fake news “ignorant and lazy,” malicious actors, or playing political games online? Through a psychological mapping of over 2,000 Twitter users across 500,000 stories, the authors find that disruption and polarization fuel fake news dissemination more so than ignorance.
  • Given the increasingly polarized American landscape, spreading fake news can help spread “partisan feelings,” increase interparty social and political cohesion, and call supporters to incendiary and violent action. Thus, those who share misinformation prioritize its usefulness in reaching end goals over its accuracy and veracity.
  • Overall, the authors find that those with low political awareness and media literacy are the least likely to share fake news. While older individuals were more likely to share fake news, the inability to distinguish real from fake information was not a major motivator of the spread of misinformation.
  • For the most part, those who share fake news are knowledgeable about the political sphere and online spaces. They are primarily motivated to ‘troll’ or create online disruption, or to further their partisan stance. In the United States, right-leaning individuals are more likely to follow fake news because they “must turn to more extreme news sources” to find information aligned with their politics, while left-leaning people can find more credible sources from liberal and centrist outlets.

Piazza, James A. “Fake news: the effects of social media disinformation on domestic terrorism.” Dynamics of Asymmetric Conflict (2021): 1-23.

  • James A. Piazza of Pennsylvania State University examines the role of online misinformation in driving distrust, political extremism, and political violence. He reviews the ongoing literature on the role of online misinformation and disinformation in driving these and other adverse outcomes.
  • Using data on incidents of terrorism from the Global Terrorism Database and three independent measures of disinformation derived from the Digital Society Project, Piazza finds “disinformation propagated through online social media outlets is statistically associated with increases in domestic terrorism in affected countries. The impact of disinformation on terrorism is mediated, significantly and substantially, through increased political polarization.”
  • Piazza notes that his results support other literature that shows the real-world effects of online disinformation. He emphasizes the need for further research and investigation to better understand the issue.

Posetti, Julie, Nermine Aboulez, Kalina Bontcheva, Jackie Harrison, and Silvio Waisbord. “Online violence Against Women Journalists: A Global Snapshot of Incidence and Impacts.” United Nations Educational, Scientific and Cultural Organization, 2020.

  • The survey focuses on the incidence, impacts, and responses to online violence against women journalists resulting from “coordinated disinformation campaigns leveraging misogyny and other forms of hate speech.” There were 901 respondents, hailing from 125 countries and a range of ethnicities.
  • 73% of the women journalists surveyed reported facing online violence and harassment in the course of their work, suggesting escalating gendered violence against women in online media.
  • The impact of COVID-19 and populist politics is evident in the gender-based harassment and disinformation campaigns, the sources of which were traced to political actors (37%) and anonymous or troll accounts (57%).
  • Investigative reporting on gender issues, politics and elections, immigration and human rights abuses, or fake news itself seems to attract online retaliation and targeted disinformation campaigns against the reporters.

Rajeshwari, Rema. “Mob Lynching and Social Media.” Yale Journal of International Affairs, June 1, 2019.

  • District Police Chief of Jogulamba Gadwal, India, and Yale World Fellow (’17) Rema Rajeshwari writes about how misinformation and disinformation are a growing problem and security threat in India. The fake news phenomenon has spread hatred, fueled sectarian tensions, and continues to erode social trust.
  • One example of this can be found in Jogulamba Gadwal, where videos and rumors were spread throughout social media about how the Parthis, a stigmatized tribal group, were committing acts of violence in the village. This led to a series of mob attacks and killings — “thirty-three people were killed in sixty-nine mob attacks since January 2018 due to rumors” — that could be traced to rumors spread on social media.
  • More importantly, however, Rajeshwari elaborates on how self-regulation and local campaigns can be used as an effective intervention for mis/dis-information. As a police officer, Rajeshwari fought a battle that was both online and on the ground, including the formation of a group of “tech-savvy” cops who could monitor local social media content and flag inaccurate and/or malicious posts, and mobilizing local WhatsApp groups alongside village headmen who could encourage community members to not forward fake messages. These interventions effectively combined local traditions and technology to achieve an “early warning-focused deterrence.”

Taylor, Luke. “Covid-19 Misinformation Sparks Threats and Violence against Doctors in Latin America.” BMJ (2020): m3088.

  • Journalist Luke Taylor details the many incidents in which disinformation campaigns across Latin America have resulted in the mistreatment of health care workers during the coronavirus pandemic. Examining case studies from Mexico and Colombia, Taylor finds that these mis/disinformation campaigns have resulted in health workers receiving death threats and being subjected to acts of aggression.
  • One instance of this link between disinformation and aggression is found in Mexico, which saw 47 reported cases of aggression towards health workers, as well as 265 reported complaints against health workers. The National Council to Prevent Discrimination noted these acts were the result of a loss of trust in government and government institutions, further exacerbated by conspiracy theories that circulated on WhatsApp and other social media channels.
  • Another example of false narratives can be seen in Colombia, where a politician theorized that a “covid cartel” of doctors was admitting COVID-19 patients to ICUs in order to receive payments (e.g., a cash payment of ~17,000 Colombian pesos for every dead patient with a covid-19 diagnosis). This false narrative of doctors being incentivized to increase beds for COVID-19 patients quickly spread across social media platforms, leading many of those who were ill to avoid seeking care. The rumor also led to doctors in Colombia receiving death threats and acts of intimidation.

“The Danger of Fake News in Inflaming or Suppressing Social Conflict.” Center for Information Technology and Society — University of California Santa Barbara, n.d.

  • The article provides case studies of how fake news can be used to intensify social conflict for political gains (e.g., by distracting citizens from having a conversation about critical issues and undermining the democratic process).
  • The cases elaborated upon are 1) Pizzagate: a fake news story that linked human trafficking to a presidential candidate and a political party, and ultimately led to a shooting; 2) Russia’s Internet Research Agency: Russian agents created social media accounts to spread fake news that favored Donald Trump during the 2016 election, and even instigated online protests about social issues (e.g., a BLM protest); and 3) Cambridge Analytica: a British company that used social media data obtained without users’ authorization for sensationalistic and inflammatory targeted US political advertisements.
  • Notably, it points out that fake news undermines a citizen’s ability to participate in the democratic process and make accurate decisions in important elections.

Tworek, Heidi. “Disinformation: It’s History.” Center for International Governance Innovation, July 14, 2021.

  • While some public narratives frame online disinformation and its influence on real-world violence as unprecedented and unparalleled, Professor Heidi Tworek of the University of British Columbia points out that “assumptions about the history of disinformation” have influenced (and continue to influence) policymaking to combat fake news. She argues that today’s events are rooted in tactics similar to those of the past, such as when Finnish policymakers invested in a national communications strategy to fight foreign disinformation coming from Russia and the Soviet Union.
  • She emphasizes the power of learning from historical events to guide modern methods of fighting political misinformation. Connecting today’s concerns about election fraud, foreign interference, and conspiracy theories to those of the past, such as Cold War-era Soviet and American practices of “funding magazines [and] spreading rumors” to further anti-opposition sentiment and hatred, reinforces that disinformation is a long-standing problem.

Ward, Megan, and Jessica Beyer. “Vulnerable Landscapes: Case Studies of Violence and Disinformation.” Wilson Center, August 2019.

  • This article discusses instances where disinformation inflamed existing social, political, and ideological cleavages and ultimately caused violence. Specifically, it elaborates on instances from the US-Mexico border, India, Sri Lanka, and three Latin American elections.
  • Though the cases are meant to be illustrative and highlight the spread of disinformation globally, the violence in these cases was shown to be affected by the distinct social fabric of each place. Their findings lend credence to the idea that disinformation helped spark violence in places that were already vulnerable and tense.
  • Indeed, now that disinformation can be distributed so quickly via social media, amid declining trust in public institutions, low levels of media literacy, meager action by social media companies, and government actors who exploit disinformation for political gain, such cases have risen globally. It is the interaction of factors such as distrust in traditional media and public institutions, lack of content moderation on social media, and ethnic divides that renders societies vulnerable and susceptible to violence.
  • One example of this is at the US-Mexico border, where disinformation campaigns have built on pre-existing xenophobia and led to instances of mob violence and mass shootings. Inflamed by disinformation campaigns claiming that migrant caravans contain criminals (invasion narratives are often used to describe the caravans), the armed group United Constitutional Patriots (UCP) impersonated law enforcement and detained migrants at the US border, often turning them over to border officials. The FBI has since arrested members of the UCP for impersonating law enforcement.

We welcome other sources we may have missed — please share any suggested additions with us at datastewards [at] thegovlab.org or The GovLab on Twitter.