The Russian invasion shows how digital technologies have become involved in all aspects of war

Article by Katharina Niemeyer, Dominique Trudel, Heidi J.S. Tworek, Maria Silina and Svitlana Matviyenko: “Since Russia invaded Ukraine, we keep hearing that this war is like no other: because Ukrainians have cellphones and access to social media platforms, the traditional control of information and propaganda cannot work, and people are able to see through the fog of war.

As communications scholars and historians, we believe it is important to add nuance to such claims. The question is not so much what is “new” in this war, but rather how to understand its specific media dynamics. One important facet of this war is the interplay between old and new media — the many loops that run from Twitter to television to TikTok, and back and forth.

We have moved away from a relatively static communication model, where journalists report on the news within predetermined constraints and formats, to intense fragmentation and even participation. Information about the war becomes content, and users contribute to its circulation by sharing and commenting online…(More)”.

Social-media reform is flying blind

Paper by Chris Bail: “As Russia continues its ruthless war in Ukraine, pundits are speculating what social-media platforms might have done years ago to undermine propaganda well before the attack. Amid accusations that social media fuels political violence — and even genocide — it is easy to forget that Facebook evolved from a site for university students to rate each other’s physical attractiveness. Instagram was founded to facilitate alcohol-based gatherings. TikTok and YouTube were built to share funny videos.

The world’s social-media platforms are now among the most important forums for discussing urgent social problems, such as Russia’s invasion of Ukraine, COVID-19 and climate change. Techno-idealists continue to promise that these platforms will bring the world together — despite mounting evidence that they are pulling us apart.

Efforts to regulate social media have largely stalled, perhaps because no one knows what something better would look like. If we could hit ‘reset’ and redesign our platforms from scratch, could we make them strengthen civil society?

Researchers have a hard time studying such questions. Most corporations want to ensure studies serve their business model and avoid controversy. They don’t share much data. And getting answers requires not just making observations, but doing experiments.

In 2017, I co-founded the Polarization Lab at Duke University in Durham, North Carolina. We have created a social-media platform for scientific research. On it, we can turn features on and off, and introduce new ones, to identify those that improve social cohesion. We have recruited thousands of people to interact with each other on these platforms, alongside bots that can simulate social-media users.

We hope our effort will help to evaluate some of the most basic premises of social media. For example, tech leaders have long measured success by the number of connections people have. Anthropologist Robin Dunbar has suggested that humans struggle to maintain meaningful relationships with more than 150 people. Experiments could encourage some social-media users to create deeper connections with a small group of users while allowing others to connect with anyone. Researchers could investigate the optimal number of connections in different situations, to work out how to optimize breadth of relationships without sacrificing depth.

A related question is whether social-media platforms should be customized for different societies or groups. Although today’s platforms seem to have largely negative effects on politics in the United States and Western Europe, the opposite might be true in emerging democracies (P. Lorenz-Spreen et al., preprint, 2021). One study suggested that Facebook could reduce ethnic tensions in Bosnia–Herzegovina (N. Asimovic et al. Proc. Natl Acad. Sci. USA 118, e2022819118; 2021), and social media has helped Ukraine to rally support around the world for its resistance….(More)”.

The need to represent: How AI can help counter gender disparity in the news

Blog by Sabrina Argoub: “For the first in our new series of JournalismAI Community Workshops, we decided to look at three recent projects that demonstrate how AI can help raise awareness on issues with misrepresentation of women in the news. 

The Political Misogynistic Discourse Monitor is a web application and API that journalists from AzMina, La Nación, CLIP, and DataCrítica developed to uncover hate speech against women on Twitter.

When Women Make Headlines is an analysis by The Pudding of the (mis)representation of women in news headlines, and how it has changed over time. 

In the AIJO project, journalists from eight different organisations worked together to identify and mitigate biases in gender representation in news. 

We invited Bàrbara Libório of AzMina, Sahiti Sarva of The Pudding, and Delfina Arambillet of La Nación to walk us through their projects and share insights on what they learned and how they taught the machine to recognise what constitutes bias and hate speech….(More)”.

Controversy Mapping: A Field Guide

Book by Tommaso Venturini, and Anders Kristian Munk: “As disputes concerning the environment, the economy, and pandemics occupy public debate, we need to learn to navigate matters of public concern when facts are in doubt and expertise is contested.

Controversy Mapping is the first book to introduce readers to the observation and representation of contested issues on digital media. Drawing on actor-network theory and digital methods, Venturini and Munk outline the conceptual underpinnings and the many tools and techniques of controversy mapping. They review its history in science and technology studies, discuss its methodological potential, and unfold its political implications. Through a range of cases and examples, they demonstrate how to chart actors and issues using digital fieldwork and computational techniques. A preface by Richard Rogers and an interview with Bruno Latour are also included.

A crucial field guide and hands-on companion for the digital age, Controversy Mapping is an indispensable resource for students and scholars of media and communication, as well as activists, journalists, citizens, and decision makers…(More)”.

How to avoid sharing bad information about Russia’s invasion of Ukraine

Abby Ohlheiser at MIT Technology Review: “The fast-paced online coverage of the Russian invasion of Ukraine on Wednesday followed a pattern that’s become familiar in other recent crises that have unfolded around the world. Photos, videos, and other information are posted and reshared across platforms much faster than they can be verified.

The result is that falsehoods are mistaken for truth and amplified, even by well-intentioned people. This can help bad actors to terrorize innocent civilians or advance disturbing ideologies, causing real harm.

Disinformation has been a prominent and explicit part of the Russian government’s campaign to justify the invasion. Russia falsely claimed that Ukrainian forces in Donbas, a region in eastern Ukraine that harbors a large number of pro-Russian separatists, were planning violent attacks, engaging in antagonistic shelling, and committing genocide. Fake videos of those nonexistent attacks became part of a domestic propaganda campaign. (The US government, meanwhile, has been working to debunk and “prebunk” these lies.)

Meanwhile, even people who are not part of such government campaigns may intentionally share bad, misleading, or false information about the invasion to promote ideological narratives, or simply to harvest clicks, with little care about the harm they’re causing. In other cases, honest mistakes made amid the fog of war take off and go viral….

Your attention matters …

First, realize that what you do online makes a difference. “People often think that because they’re not influencers, they’re not politicians, they’re not journalists, that what they do [online] doesn’t matter,” Whitney Phillips, an assistant professor of communication and rhetorical studies at Syracuse University, told me in 2020. But it does matter. Sharing dubious information with even a small circle of friends and family can lead to its wider dissemination.

… and so do your angry quote tweets and duets.

While an urgent news story is developing, well-meaning people may quote-tweet, share, or duet with a post on social media to challenge and condemn it. Twitter and Facebook have introduced new rules, moderation tactics, and fact-checking provisions to try to combat misinformation. But interacting with misinformation at all risks amplifying the content you’re trying to minimize, because it signals to the platform that you find it interesting. Instead of engaging with a post you know to be wrong, try flagging it for review by the platform where you saw it.


Mike Caulfield, a digital literacy expert, developed a method for evaluating online information that he calls SIFT: “Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original context.” When it comes to news about Ukraine, he says, the emphasis should be on “Stop”—that is, pause before you react to or share what you’re seeing….(More)”.

Russian disinformation frenzy seeds groundwork for Ukraine invasion

Zachary Basu and Sara Fischer at Axios: “Russia is testing its agility at weaponizing state media to win backing at home, in occupied territories in eastern Ukraine and with sympathizers abroad for a war of aggression.

The big picture: State media has pivoted from accusing the West of hysterically warning about a non-existent invasion to pumping out minute-by-minute coverage of the tensions.

Zoom in: NewsGuard, a firm that tracks online misinformation, identified three of the most common false narratives being propagated by Russian state media outlets such as RT, Sputnik News, and TASS:

  1. The West staged a coup in 2014 to overthrow the Ukrainian government
  2. Ukrainian politics is dominated by Nazi ideology
  3. Ethnic Russians in Ukraine’s Donbas region have been subjected to genocide

Between the lines: Social media platforms have been on high alert for Russian disinformation that would violate their policies but have less control over private messaging, where some propaganda efforts have moved to avoid detection.

  • A Twitter spokesperson notes: “As we do around major global events, our safety and integrity teams are monitoring for potential risks associated with conflicts to protect the health of the platform.”
  • YouTube’s threat analysis group and trust and safety teams have also been closely monitoring the situation in Ukraine. The platform’s policies ban misleading titles, thumbnails or descriptions that trick users into believing the content is something it is not….(More)”.

EU and US legislation seek to open up digital platform data

Article by Brandie Nonnecke and Camille Carlton: “Despite the potential societal benefits of granting independent researchers access to digital platform data, such as promotion of transparency and accountability, online platform companies have few legal obligations to do so and potentially stronger business incentives not to. Without legally binding mechanisms that provide greater clarity on what and how data can be shared with independent researchers in privacy-preserving ways, platforms are unlikely to share the breadth of data necessary for robust scientific inquiry and public oversight.

Here, we discuss two notable legislative efforts aimed at opening up platform data: the Digital Services Act (DSA), recently approved by the European Parliament, and the Platform Accountability and Transparency Act (PATA), recently proposed by several US senators. Although these bills could support researchers’ access to data, they could also fall short in many ways, highlighting the complex challenges in mandating data access for independent research and oversight.

As large platforms take on increasingly influential roles in our online social, economic, and political interactions, there is a growing demand for transparency and accountability through mandated data disclosures. Research insights from platform data can help, for example, to understand unintended harms of platform use on vulnerable populations, such as children and marginalized communities; identify coordinated foreign influence campaigns targeting elections; and support public health initiatives, such as documenting the spread of anti-vaccine mis- and disinformation…(More)”.

Metrics at Work: Journalism and the Contested Meaning of Algorithms

Book by Angèle Christin: “When the news moved online, journalists suddenly learned what their audiences actually liked, through algorithmic technologies that scrutinize web traffic and activity. Has this advent of audience metrics changed journalists’ work practices and professional identities? In Metrics at Work, Angèle Christin documents the ways that journalists grapple with audience data in the form of clicks, and analyzes how new forms of clickbait journalism travel across national borders.

Drawing on four years of fieldwork in web newsrooms in the United States and France, including more than one hundred interviews with journalists, Christin reveals many similarities among the media groups examined—their editorial goals, technological tools, and even office furniture. Yet she uncovers crucial and paradoxical differences in how American and French journalists understand audience analytics and how these affect the news produced in each country. American journalists routinely disregard traffic numbers and primarily rely on the opinion of their peers to define journalistic quality. Meanwhile, French journalists fixate on internet traffic and view these numbers as a sign of their resonance in the public sphere. Christin offers cultural and historical explanations for these disparities, arguing that distinct journalistic traditions structure how journalists make sense of digital measurements in the two countries.

Contrary to the popular belief that analytics and algorithms are globally homogenizing forces, Metrics at Work shows that computational technologies can have surprisingly divergent ramifications for work and organizations worldwide…(More)”.

Why people believe misinformation and resist correction

TechPolicyPress: “…In Nature, a team of nine researchers from the fields of psychology, mass media & communication have published a review of available research on the factors that lead people to “form or endorse misinformed views, and the psychological barriers” to changing their minds….

The authors summarize what is known about a variety of drivers of false beliefs, noting that they “generally arise through the same mechanisms that establish accurate beliefs” and the human weakness for trusting the “gut”. For a variety of reasons, people develop shortcuts when processing information, often defaulting to conclusions rather than evaluating new information critically. A complex set of variables related to information sources, emotional factors and a variety of other cues can lead to the formation of false beliefs. And people often share information with little focus on its veracity, but rather to accomplish other goals: from self-promotion to signaling group membership to simply sating a desire to ‘watch the world burn’.

Source: Nature Reviews Psychology, Volume 1, January 2022

Barriers to belief revision are also complex, since “the original information is not simply erased or replaced” once corrective information is introduced. There is evidence that misinformation can be “reactivated and retrieved” even after an individual receives accurate information that contradicts it. A variety of factors affect whether correct information can win out. One theory looks at how information is integrated in a person’s “memory network”. Another complementary theory looks at “selective retrieval” and is backed up by neuro-imaging evidence…(More)”.

The Crowdsourced Panopticon

Book by Jeremy Weissman: “Behind the omnipresent screens of our laptops and smartphones, a digitally networked public has quickly grown larger than the population of any nation on Earth. On the flipside, in front of the ubiquitous recording devices that saturate our lives, individuals are hyper-exposed through a worldwide online broadcast that encourages the public to watch, judge, rate, and rank people’s lives. The interplay of these two forces – the invisibility of the anonymous crowd and the exposure of the individual before that crowd – is a central focus of this book. Informed by critiques of conformity and mass media by some of the greatest philosophers of the past two centuries, as well as by a wide range of historical and empirical studies, Weissman helps shed light on what may happen when our lives are increasingly broadcast online for everyone all the time, to be judged by the global community…(More)”.