Lexota: Laws on Expression Online Tracker and Analysis


Press Release: “Today, Global Partners Digital (GPD), the Centre for Human Rights at the University of Pretoria (CHR), Article 19 West Africa, the Collaboration on International ICT Policy in East and Southern Africa (CIPESA) and PROTEGE QV jointly launch LEXOTA—Laws on Expression Online: Tracker and Analysis, a new interactive tool to help human rights defenders track and analyse government responses to online disinformation across Sub-Saharan Africa. 

Expanding on work started in 2020, LEXOTA offers a comprehensive overview of laws, policies and other government actions on disinformation in every country in Sub-Saharan Africa. The tool is powered by multilingual data and context-sensitive insight from civil society organisations and uses a detailed framework to assess whether government responses to disinformation are human rights-respecting. A dynamic comparison feature empowers users to examine the regulatory approaches of different countries and to compare how different policy responses measure up against human rights standards, providing them with insights into trends across the region as well as the option to examine country-specific analyses. 

In recent years, governments in Sub-Saharan Africa have increasingly responded to disinformation through content-based restrictions and regulations, which often pose significant risks to individuals’ right to freedom of expression. LEXOTA was developed to support those working to defend internet freedom and freedom of expression across the region, by making data on these government actions accessible and comparable…(More)”.

The European Data Protection Supervisor (EDPS) launches pilot phase of two social media platforms


Press Release: “The European Data Protection Supervisor (EDPS) launches today the public pilot phase of two social media platforms: EU Voice and EU Video.

EU institutions, bodies, offices and agencies (EUIs) participating in the pilot phase of these platforms will be able to interact with the public by sharing short texts, images and videos on EU Voice; and by sharing, uploading, and commenting on videos and podcasts on EU Video.

The two platforms are part of decentralised, free and open-source social media networks that connect users in a privacy-oriented environment, based on Mastodon and PeerTube software. By launching the pilot phase of EU Voice and EU Video, the EDPS aims to contribute to the European Union’s strategy for data and digital sovereignty to foster Europe’s independence in the digital world.

Wojciech Wiewiórowski, EDPS, said: “With the pilot launch of EU Voice and EU Video, we aim to offer alternative social media platforms that prioritise individuals and their rights to privacy and data protection. In concrete terms this means, for example, that EU Voice and EU Video do not rely on transfers of personal data to countries outside the European Union and the European Economic Area; there are no advertisements on the platforms; and there is no profiling of individuals that may use the platforms. These measures, amongst others, give individuals the choice on and control over how their personal data is used.”

The EDPS and the European Commission’s Directorate General for Informatics (DIGIT) have collaborated closely throughout the development of EU Voice and EU Video. In line with the goals of the Commission’s Open Source Software Strategy 2020 – 2023, DIGIT’s technical assistance to the EDPS proves the importance of inter-institutional cooperation on open source as an enabler of privacy rights and data protection, therefore contributing to the EU’s technological sovereignty.

The launch of the pilot phase of EU Voice and EU Video will help the EDPS to test the platforms in practice by collecting feedback from participating EUIs. The EDPS hopes that this first step will mark a continuity in the use of privacy-compliant social media platforms…(More)”.

Shadowbanning Is Big Tech’s Big Problem


Essay by Gabriel Nicholas: “Sometimes, it feels like everyone on the internet thinks they’ve been shadowbanned. Republican politicians have been accusing Twitter of shadowbanning—that is, quietly suppressing their activity on the site—since at least 2018, when for a brief period, the service stopped autofilling the usernames of Representatives Jim Jordan, Mark Meadows, and Matt Gaetz, as well as other prominent Republicans, in its search bar. Black Lives Matter activists have been accusing TikTok of shadowbanning since 2020, when, at the height of the George Floyd protests, it sharply reduced how frequently their videos appeared on users’ “For You” pages. …When the word shadowban first appeared in the web-forum backwaters of the early 2000s, it meant something more specific. It was a way for online-community moderators to deal with trolls, shitposters, spam bots, and anyone else they deemed harmful: by making their posts invisible to everyone but the posters themselves. But throughout the 2010s, as the social web grew into the world’s primary means of sharing information and as content moderation became infinitely more complicated, the word became more common, and much more muddled. Today, people use shadowban to refer to the wide range of ways platforms may remove or reduce the visibility of their content without telling them….

According to new research I conducted at the Center for Democracy and Technology (CDT), nearly one in 10 U.S. social-media users believes they have been shadowbanned, and most often they believe it is for their political beliefs or their views on social issues. In two dozen interviews I held with people who thought they had been shadowbanned or worked with people who thought they had, I repeatedly heard users say that shadowbanning made them feel not just isolated from online discourse, but targeted, by a sort of mysterious cabal, for breaking a rule they didn’t know existed. It’s not hard to imagine what happens when social-media users believe they are victims of conspiracy…(More)”.

Internet ‘algospeak’ is changing our language in real time, from ‘nip nops’ to ‘le dollar bean’


Article by Taylor Lorenz: “Algospeak” is becoming increasingly common across the Internet as people seek to bypass content moderation filters on social media platforms such as TikTok, YouTube, Instagram and Twitch.

Algospeak refers to code words or turns of phrase users have adopted in an effort to create a brand-safe lexicon that will avoid getting their posts removed or down-ranked by content moderation systems. For instance, in many online videos, it’s common to say “unalive” rather than “dead,” “SA” instead of “sexual assault,” or “spicy eggplant” instead of “vibrator.”

As the pandemic pushed more people to communicate and express themselves online, algorithmic content moderation systems have had an unprecedented impact on the words we choose, particularly on TikTok, and given rise to a new form of internet-driven Aesopian language.

Unlike other mainstream social platforms, the primary way content is distributed on TikTok is through an algorithmically curated “For You” page; having followers doesn’t guarantee people will see your content. This shift has led average users to tailor their videos primarily toward the algorithm, rather than a following, which means abiding by content moderation rules is more crucial than ever.

When the pandemic broke out, people on TikTok and other apps began referring to it as the “Backstreet Boys reunion tour” or calling it the “panini” or “panda express” as platforms down-ranked videos mentioning the pandemic by name in an effort to combat misinformation. When young people began to discuss struggling with mental health, they talked about “becoming unalive” in order to have frank conversations about suicide without algorithmic punishment. Sex workers, who have long been censored by moderation systems, refer to themselves on TikTok as “accountants” and use the corn emoji as a substitute for the word “porn.”

As discussions of major events are filtered through algorithmic content delivery systems, more users are bending their language. Recently, in discussing the invasion of Ukraine, people on YouTube and TikTok have used the sunflower emoji to signify the country. When encouraging fans to follow them elsewhere, users will say “blink in lio” for “link in bio.”

Euphemisms are especially common in radicalized or harmful communities. Pro-anorexia eating disorder communities have long adopted variations on moderated words to evade restrictions. One paper from the School of Interactive Computing at the Georgia Institute of Technology found that the complexity of such variants even increased over time. Last year, anti-vaccine groups on Facebook began changing their names to “dance party” or “dinner party,” and anti-vaccine influencers on Instagram used similar code words, referring to vaccinated people as “swimmers.”

Tailoring language to avoid scrutiny predates the Internet. Many religions have avoided uttering the devil’s name lest they summon him, while people living in repressive regimes developed code words to discuss taboo topics…(More)”.

The Russian invasion shows how digital technologies have become involved in all aspects of war


Article by Katharina Niemeyer, Dominique Trudel, Heidi J.S. Tworek, Maria Silina and Svitlana Matviyenko: “Since Russia invaded Ukraine, we keep hearing that this war is like no other; because Ukrainians have cellphones and access to social media platforms, the traditional control of information and propaganda cannot work and people are able to see through the fog of war.

As communications scholars and historians, we think it is important to add nuance to such claims. The question is not so much what is “new” in this war, but rather how to understand its specific media dynamics. One important facet of this war is the interplay between old and new media — the many loops that go from Twitter to television to TikTok, and back and forth.

We have moved away from a relatively static communication model, where journalists report on the news within predetermined constraints and formats, to intense fragmentation and even participation. Information about the war becomes content, and users contribute to its circulation by sharing and commenting online…(More)”.

Social-media reform is flying blind


Paper by Chris Bail: “As Russia continues its ruthless war in Ukraine, pundits are speculating what social-media platforms might have done years ago to undermine propaganda well before the attack. Amid accusations that social media fuels political violence — and even genocide — it is easy to forget that Facebook evolved from a site for university students to rate each other’s physical attractiveness. Instagram was founded to facilitate alcohol-based gatherings. TikTok and YouTube were built to share funny videos.

The world’s social-media platforms are now among the most important forums for discussing urgent social problems, such as Russia’s invasion of Ukraine, COVID-19 and climate change. Techno-idealists continue to promise that these platforms will bring the world together — despite mounting evidence that they are pulling us apart.

Efforts to regulate social media have largely stalled, perhaps because no one knows what something better would look like. If we could hit ‘reset’ and redesign our platforms from scratch, could we make them strengthen civil society?

Researchers have a hard time studying such questions. Most corporations want to ensure studies serve their business model and avoid controversy. They don’t share much data. And getting answers requires not just making observations, but doing experiments.

In 2017, I co-founded the Polarization Lab at Duke University in Durham, North Carolina. We have created a social-media platform for scientific research. On it, we can turn features on and off, and introduce new ones, to identify those that improve social cohesion. We have recruited thousands of people to interact with each other on these platforms, alongside bots that can simulate social-media users.

We hope our effort will help to evaluate some of the most basic premises of social media. For example, tech leaders have long measured success by the number of connections people have. Anthropologist Robin Dunbar has suggested that humans struggle to maintain meaningful relationships with more than 150 people. Experiments could encourage some social-media users to create deeper connections with a small group of users while allowing others to connect with anyone. Researchers could investigate the optimal number of connections in different situations, to work out how to optimize breadth of relationships without sacrificing depth.

A related question is whether social-media platforms should be customized for different societies or groups. Although today’s platforms seem to have largely negative effects on politics in the US and Western Europe, the opposite might be true in emerging democracies (P. Lorenz-Spreen et al. Preprint at https://doi.org/hmq2; 2021). One study suggested that Facebook could reduce ethnic tensions in Bosnia–Herzegovina (N. Asimovic et al. Proc. Natl Acad. Sci. USA 118, e2022819118; 2021), and social media has helped Ukraine to rally support around the world for its resistance….(More)”.

The need to represent: How AI can help counter gender disparity in the news


Blog by Sabrina Argoub: “For the first in our new series of JournalismAI Community Workshops, we decided to look at three recent projects that demonstrate how AI can help raise awareness on issues with misrepresentation of women in the news. 

The Political Misogynistic Discourse Monitor is a web application and API that journalists from AzMina, La Nación, CLIP, and DataCrítica developed to uncover hate speech against women on Twitter.

When Women Make Headlines is an analysis by The Pudding of the (mis)representation of women in news headlines, and how it has changed over time. 

In the AIJO project, journalists from eight different organisations worked together to identify and mitigate biases in gender representation in news. 

We invited Bàrbara Libório of AzMina, Sahiti Sarva of The Pudding, and Delfina Arambillet of La Nación to walk us through their projects and share insights on what they learned and how they taught the machine to recognise what constitutes bias and hate speech….(More)”.

Controversy Mapping: A Field Guide


Book by Tommaso Venturini and Anders Kristian Munk: “As disputes concerning the environment, the economy, and pandemics occupy public debate, we need to learn to navigate matters of public concern when facts are in doubt and expertise is contested.

Controversy Mapping is the first book to introduce readers to the observation and representation of contested issues on digital media. Drawing on actor-network theory and digital methods, Venturini and Munk outline the conceptual underpinnings and the many tools and techniques of controversy mapping. They review its history in science and technology studies, discuss its methodological potential, and unfold its political implications. Through a range of cases and examples, they demonstrate how to chart actors and issues using digital fieldwork and computational techniques. A preface by Richard Rogers and an interview with Bruno Latour are also included.

A crucial field guide and hands-on companion for the digital age, Controversy Mapping is an indispensable resource for students and scholars of media and communication, as well as activists, journalists, citizens, and decision makers…(More)”.

How to avoid sharing bad information about Russia’s invasion of Ukraine


Abby Ohlheiser at MIT Technology Review: “The fast-paced online coverage of the Russian invasion of Ukraine on Wednesday followed a pattern that’s become familiar in other recent crises that have unfolded around the world. Photos, videos, and other information are posted and reshared across platforms much faster than they can be verified.

The result is that falsehoods are mistaken for truth and amplified, even by well-intentioned people. This can help bad actors to terrorize innocent civilians or advance disturbing ideologies, causing real harm.

Disinformation has been a prominent and explicit part of the Russian government’s campaign to justify the invasion. Russia falsely claimed that Ukrainian forces in Donbas, a region in the southeastern part of the country that harbors a large number of pro-Russian separatists, were planning violent attacks, engaging in antagonistic shelling, and committing genocide. Fake videos of those nonexistent attacks became part of a domestic propaganda campaign. (The US government, meanwhile, has been working to debunk and “prebunk” these lies.)

Meanwhile, even people who are not part of such government campaigns may intentionally share bad, misleading, or false information about the invasion to promote ideological narratives, or simply to harvest clicks, with little care about the harm they’re causing. In other cases, honest mistakes made amid the fog of war take off and go viral….

Your attention matters …

First, realize that what you do online makes a difference. “People often think that because they’re not influencers, they’re not politicians, they’re not journalists, that what they do [online] doesn’t matter,” Whitney Phillips, an assistant professor of communication and rhetorical studies at Syracuse University, told me in 2020. But it does matter. Sharing dubious information with even a small circle of friends and family can lead to its wider dissemination.

… and so do your angry quote tweets and duets.

While an urgent news story is developing, well-meaning people may quote-tweet, share, or duet with a post on social media to challenge and condemn it. Twitter and Facebook have introduced new rules, moderation tactics, and fact-checking provisions to try to combat misinformation. But interacting with misinformation at all risks amplifying the content you’re trying to minimize, because it signals to the platform that you find it interesting. Instead of engaging with a post you know to be wrong, try flagging it for review by the platform where you saw it.

Stop.

Mike Caulfield, a digital literacy expert, developed a method for evaluating online information that he calls SIFT: “Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original context.” When it comes to news about Ukraine, he says, the emphasis should be on “Stop”—that is, pause before you react to or share what you’re seeing….(More)”.

Russian disinformation frenzy seeds groundwork for Ukraine invasion


Zachary Basu and Sara Fischer at Axios: “Russia is testing its agility at weaponizing state media to win backing at home, in occupied territories in eastern Ukraine and with sympathizers abroad for a war of aggression.

The big picture: State media has pivoted from accusing the West of hysterical warnings about a non-existent invasion to pumping out minute-by-minute coverage of the tensions.

Zoom in: NewsGuard, a misinformation tech firm, identified three of the most common false narratives being propagated by Russian state media like RT, Sputnik News, and TASS:

  1. The West staged a coup in 2014 to overthrow the Ukrainian government
  2. Ukrainian politics is dominated by Nazi ideology
  3. Ethnic Russians in Ukraine’s Donbas region have been subjected to genocide

Between the lines: Social media platforms have been on high alert for Russian disinformation that would violate their policies but have less control over private messaging, where some propaganda efforts have moved to avoid detection.

  • A Twitter spokesperson notes: “As we do around major global events, our safety and integrity teams are monitoring for potential risks associated with conflicts to protect the health of the platform.”
  • YouTube’s threat analysis group and trust and safety teams have also been closely monitoring the situation in Ukraine. The platform’s policies ban misleading titles, thumbnails or descriptions that trick users into believing the content is something it is not….(More)”.