Paper by Lindita Camaj, Jason Martin & Gerry Lanosga: “This study surveyed data journalists from 71 countries to compare how public transparency infrastructure influences data journalism practices around the world. Emphasizing cross-national differences in data access, results suggest that the technical and economic inequalities that affect the implementation of open data infrastructures can produce unequal data access and widen the gap in data journalism practices between information-rich and information-poor countries. Further, while journalists operating in open data infrastructures are more likely to exhibit a dependency on pre-processed public data, journalists operating in closed data infrastructures are more likely to use Access to Information legislation. We discuss the implications of our results for understanding the development of data journalism models in cross-national contexts…(More)”
Social Engineering: How Crowdmasters, Phreaks, Hackers, and Trolls Created a New Form of Manipulative Communication
Open Access book by Robert W. Gehl and Sean T. Lawson: “Manipulative communication—from early twentieth-century propaganda to today’s online con artistry—examined through the lens of social engineering. The United States is awash in manipulated information about everything from election results to the effectiveness of medical treatments. Corporate social media is an especially good channel for manipulative communication, with Facebook a particularly willing vehicle for it. In Social Engineering, Robert Gehl and Sean Lawson show that online misinformation has its roots in earlier techniques: mass social engineering of the early twentieth century and interpersonal hacker social engineering of the 1970s, converging today into what they call “masspersonal social engineering.” As Gehl and Lawson trace contemporary manipulative communication back to earlier forms of social engineering, possibilities for amelioration become clearer.
The authors show how specific manipulative communication practices are a mixture of information gathering, deception, and truth-indifferent statements, all with the instrumental goal of getting people to take actions the social engineer wants them to. Yet the term “fake news,” they claim, reduces everything to a true/false binary that fails to encompass the complexity of manipulative communication or to map onto many of its practices. They pay special attention to concepts and terms used by hacker social engineers, including the hacker concept of “bullshitting,” which the authors describe as a truth-indifferent mix of deception, accuracy, and sociability. They conclude with recommendations for how society can undermine masspersonal social engineering and move toward healthier democratic deliberation…(More)”.
The Power of Platforms: Shaping Media and Society
Book by Rasmus Kleis Nielsen and Sarah Anne Ganter: “More people today get news via Facebook and Google than from any news organization in history, and smaller platforms like Twitter serve news to more users than all but the biggest media companies. In The Power of Platforms, Rasmus Kleis Nielsen and Sarah Anne Ganter draw on original interviews and other qualitative evidence to analyze the “platform power” that a few technology companies have come to exercise in public life, the reservations publishers have about platforms, as well as the reasons why publishers often embrace them nonetheless.
Nielsen and Ganter trace how relations between publishers and platforms have evolved across the United States, France, Germany, and the United Kingdom. They identify the new, distinct relational and generative forms of power that platforms exercise as people increasingly rely on them to find and access news. Most of the news content we rely on is still produced by journalists working for news organizations, but Nielsen and Ganter chronicle rapid change in the ways in which we discover news, how it is distributed, where decisions are made about what to display (and what not to), and who profits from these flows of information. By examining the different ways publishers have responded to these changes and how various platform companies have in turn handled the increasingly important and controversial role they play in society, The Power of Platforms draws out the implications of a fundamental feature of the contemporary world that we all need to understand: previously powerful and relatively independent institutions like the news media are increasingly in a position similar to that of ordinary individual users, simultaneously empowered by and dependent upon a small number of centrally placed and powerful platforms…(More)”.
Lexota
Press Release: “Today, Global Partners Digital (GPD), the Centre for Human Rights at the University of Pretoria (CHR), Article 19 West Africa, the Collaboration on International ICT Policy in East and Southern Africa (CIPESA) and PROTEGE QV jointly launch LEXOTA—Laws on Expression Online: Tracker and Analysis, a new interactive tool to help human rights defenders track and analyse government responses to online disinformation across Sub-Saharan Africa.
Expanding on work started in 2020, LEXOTA offers a comprehensive overview of laws, policies and other government actions on disinformation in every country in Sub-Saharan Africa. The tool is powered by multilingual data and context-sensitive insight from civil society organisations and uses a detailed framework to assess whether government responses to disinformation are human rights-respecting. A dynamic comparison feature empowers users to examine the regulatory approaches of different countries and to compare how different policy responses measure up against human rights standards, providing them with insights into trends across the region as well as the option to examine country-specific analyses.
In recent years, governments in Sub-Saharan Africa have increasingly responded to disinformation through content-based restrictions and regulations, which often pose significant risks to individuals’ right to freedom of expression. LEXOTA was developed to support those working to defend internet freedom and freedom of expression across the region, by making data on these government actions accessible and comparable…(More)”.
The European Data Protection Supervisor (EDPS) launches pilot phase of two social media platforms
Press Release: “The European Data Protection Supervisor (EDPS) today launches the public pilot phase of two social media platforms: EU Voice and EU Video.
EU institutions, bodies, offices and agencies (EUIs) participating in the pilot phase of these platforms will be able to interact with the public by sharing short texts, images and videos on EU Voice; and by sharing, uploading, and commenting on videos and podcasts on EU Video.
The two platforms are part of decentralised, free and open-source social media networks that connect users in a privacy-oriented environment, based on Mastodon and PeerTube software. By launching the pilot phase of EU Voice and EU Video, the EDPS aims to contribute to the European Union’s strategy for data and digital sovereignty to foster Europe’s independence in the digital world.
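Because EU Voice is built on Mastodon, a standard Mastodon client flow should apply to it. Below is a minimal sketch of posting a status through Mastodon’s stock REST API, assuming the instance exposes it unchanged; the instance URL and token are placeholders, not EU Voice’s actual details:

```python
# Sketch: posting a status via the standard Mastodon REST API.
# INSTANCE and TOKEN are hypothetical placeholders.
import requests

INSTANCE = "https://social.example.eu"  # placeholder instance URL
TOKEN = "YOUR_ACCESS_TOKEN"             # obtained via the instance's OAuth flow

resp = requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={"status": "Hello from a Mastodon-compatible client."},
)
resp.raise_for_status()
print(resp.json()["url"])  # URL of the newly published post
```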
Wojciech Wiewiórowski, EDPS, said: “With the pilot launch of EU Voice and EU Video, we aim to offer alternative social media platforms that prioritise individuals and their rights to privacy and data protection. In concrete terms this means, for example, that EU Voice and EU Video do not rely on transfers of personal data to countries outside the European Union and the European Economic Area; there are no advertisements on the platforms; and there is no profiling of individuals that may use the platforms. These measures, amongst others, give individuals the choice on and control over how their personal data is used.”
The EDPS and the European Commission’s Directorate General for Informatics (DIGIT) have collaborated closely throughout the development of EU Voice and EU Video. In line with the goals of the Commission’s Open Source Software Strategy 2020–2023, DIGIT’s technical assistance to the EDPS demonstrates the importance of inter-institutional cooperation on open source as an enabler of privacy rights and data protection, thereby contributing to the EU’s technological sovereignty.
The launch of the pilot phase of EU Voice and EU Video will help the EDPS to test the platforms in practice by collecting feedback from participating EUIs. The EDPS hopes that this first step will pave the way for the continued use of privacy-compliant social media platforms…(More)”.
Shadowbanning Is Big Tech’s Big Problem
Essay by Gabriel Nicholas: “Sometimes, it feels like everyone on the internet thinks they’ve been shadowbanned. Republican politicians have been accusing Twitter of shadowbanning—that is, quietly suppressing their activity on the site—since at least 2018, when for a brief period, the service stopped autofilling the usernames of Representatives Jim Jordan, Mark Meadows, and Matt Gaetz, as well as other prominent Republicans, in its search bar. Black Lives Matter activists have been accusing TikTok of shadowbanning since 2020, when, at the height of the George Floyd protests, it sharply reduced how frequently their videos appeared on users’ “For You” pages. …When the word shadowban first appeared in the web-forum backwaters of the early 2000s, it meant something more specific. It was a way for online-community moderators to deal with trolls, shitposters, spam bots, and anyone else they deemed harmful: by making their posts invisible to everyone but the posters themselves. But throughout the 2010s, as the social web grew into the world’s primary means of sharing information and as content moderation became infinitely more complicated, the word became more common, and much more muddled. Today, people use shadowban to refer to the wide range of ways platforms may remove or reduce the visibility of their content without telling them….
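In its original, narrow sense, the mechanism is easy to state in code: the platform filters a shadowbanned author’s posts out of every feed except the author’s own. A minimal sketch, with a data model invented purely for illustration:

```python
# Sketch of a classic shadowban: posts by shadowbanned users stay
# visible to their authors but are hidden from everyone else.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

shadowbanned = {"spam_bot_42"}  # illustrative moderation list

posts = [
    Post("alice", "Hello, forum!"),
    Post("spam_bot_42", "BUY CHEAP WIDGETS"),
]

def visible_feed(viewer: str) -> list[Post]:
    """Return the feed as seen by `viewer`."""
    return [
        p for p in posts
        if p.author not in shadowbanned or p.author == viewer
    ]

print([p.text for p in visible_feed("alice")])        # spam is hidden
print([p.text for p in visible_feed("spam_bot_42")])  # poster sees own post
```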
According to new research I conducted at the Center for Democracy and Technology (CDT), nearly one in 10 U.S. social-media users believes they have been shadowbanned, and most often they believe it is for their political beliefs or their views on social issues. In two dozen interviews I held with people who thought they had been shadowbanned or worked with people who thought they had, I repeatedly heard users say that shadowbanning made them feel not just isolated from online discourse, but targeted, by a sort of mysterious cabal, for breaking a rule they didn’t know existed. It’s not hard to imagine what happens when social-media users believe they are victims of conspiracy…(More)”.
Internet ‘algospeak’ is changing our language in real time, from ‘nip nops’ to ‘le dollar bean’
Article by Taylor Lorenz: “Algospeak” is becoming increasingly common across the Internet as people seek to bypass content moderation filters on social media platforms such as TikTok, YouTube, Instagram and Twitch.
Algospeak refers to code words or turns of phrase users have adopted in an effort to create a brand-safe lexicon that will avoid getting their posts removed or down-ranked by content moderation systems. For instance, in many online videos, it’s common to say “unalive” rather than “dead,” “SA” instead of “sexual assault,” or “spicy eggplant” instead of “vibrator.”
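A toy illustration of why these substitutions work: a naive blocklist filter matches only the exact strings it knows, so a single-word swap passes untouched. The blocklist and examples below are invented for illustration:

```python
# Toy keyword filter of the kind algospeak is designed to evade.
BLOCKLIST = {"dead", "sexual assault", "vibrator"}  # illustrative terms only

def flagged(text: str) -> bool:
    """Flag text containing any exact blocklisted string."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

print(flagged("the character is dead"))     # True  -> post down-ranked
print(flagged("the character is unalive"))  # False -> post sails through
```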
As the pandemic pushed more people to communicate and express themselves online, algorithmic content moderation systems have had an unprecedented impact on the words we choose, particularly on TikTok, and given rise to a new form of internet-driven Aesopian language.
Unlike other mainstream social platforms, the primary way content is distributed on TikTok is through an algorithmically curated “For You” page; having followers doesn’t guarantee people will see your content. This shift has led average users to tailor their videos primarily toward the algorithm, rather than a following, which means abiding by content moderation rules is more crucial than ever.
When the pandemic broke out, people on TikTok and other apps began referring to it as the “Backstreet Boys reunion tour” or calling it the “panini” or “panda express” as platforms down-ranked videos mentioning the pandemic by name in an effort to combat misinformation. When young people began to discuss struggling with mental health, they talked about “becoming unalive” in order to have frank conversations about suicide without algorithmic punishment. Sex workers, who have long been censored by moderation systems, refer to themselves on TikTok as “accountants” and use the corn emoji as a substitute for the word “porn.”
As discussions of major events are filtered through algorithmic content delivery systems, more users are bending their language. Recently, in discussing the invasion of Ukraine, people on YouTube and TikTok have used the sunflower emoji to signify the country. When encouraging fans to follow them elsewhere, users will say “blink in lio” for “link in bio.”
Euphemisms are especially common in radicalized or harmful communities. Pro-anorexia eating disorder communities have long adopted variations on moderated words to evade restrictions. One paper from the School of Interactive Computing at the Georgia Institute of Technology found that the complexity of such variants even increased over time. Last year, anti-vaccine groups on Facebook began changing their names to “dance party” or “dinner party,” and anti-vaccine influencers on Instagram used similar code words, referring to vaccinated people as “swimmers.”
Tailoring language to avoid scrutiny predates the Internet. Many religions have avoided uttering the devil’s name lest they summon him, while people living in repressive regimes developed code words to discuss taboo topics…(More)”.
The Russian invasion shows how digital technologies have become involved in all aspects of war
Article by Katharina Niemeyer, Dominique Trudel, Heidi J.S. Tworek, Maria Silina and Svitlana Matviyenko: “Since Russia invaded Ukraine, we keep hearing that this war is like no other; because Ukrainians have cellphones and access to social media platforms, the traditional control of information and propaganda cannot work and people are able to see through the fog of war.
As communications scholars and historians, we think it is important to add nuance to such claims. The question is not so much what is “new” in this war as how to understand its specific media dynamics. One important facet of this war is the interplay between old and new media — the many loops that go from Twitter to television to TikTok, and back and forth.
We have moved away from a relatively static communication model, where journalists report on the news within predetermined constraints and formats, to intense fragmentation and even participation. Information about the war becomes content, and users contribute to its circulation by sharing and commenting online…(More)”.
Social-media reform is flying blind
Paper by Chris Bail: “As Russia continues its ruthless war in Ukraine, pundits are speculating about what social-media platforms might have done years ago to undermine propaganda well before the attack. Amid accusations that social media fuels political violence — and even genocide — it is easy to forget that Facebook evolved from a site for university students to rate each other’s physical attractiveness. Instagram was founded to facilitate alcohol-based gatherings. TikTok and YouTube were built to share funny videos.
The world’s social-media platforms are now among the most important forums for discussing urgent social problems, such as Russia’s invasion of Ukraine, COVID-19 and climate change. Techno-idealists continue to promise that these platforms will bring the world together — despite mounting evidence that they are pulling us apart.
Efforts to regulate social media have largely stalled, perhaps because no one knows what something better would look like. If we could hit ‘reset’ and redesign our platforms from scratch, could we make them strengthen civil society?
Researchers have a hard time studying such questions. Most corporations want to ensure studies serve their business model and avoid controversy. They don’t share much data. And getting answers requires not just making observations, but doing experiments.
In 2017, I co-founded the Polarization Lab at Duke University in Durham, North Carolina. We have created a social-media platform for scientific research. On it, we can turn features on and off, and introduce new ones, to identify those that improve social cohesion. We have recruited thousands of people to interact with each other on these platforms, alongside bots that can simulate social-media users.
We hope our effort will help to evaluate some of the most basic premises of social media. For example, tech leaders have long measured success by the number of connections people have. Anthropologist Robin Dunbar has suggested that humans struggle to maintain meaningful relationships with more than 150 people. Experiments could encourage some social-media users to create deeper connections with a small group of users while allowing others to connect with anyone. Researchers could investigate the optimal number of connections in different situations, to work out how to optimize breadth of relationships without sacrificing depth.
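As a rough sketch of how such a connection-cap experiment might be parameterized — the arm names and cap values below are illustrative assumptions, not the lab’s actual design:

```python
# Hypothetical assignment of users to connection-cap conditions for a
# Dunbar-style experiment; arms and caps are illustrative only.
import random

ARMS = {
    "capped_dunbar": 150,  # bounded network, per Dunbar's number
    "uncapped": None,      # connect with anyone
}

def assign(user_id: str) -> str:
    """Deterministically assign a user to an experimental arm."""
    rng = random.Random(user_id)  # seeded per user for stable assignment
    return rng.choice(sorted(ARMS))

def may_follow(user_id: str, current_connections: int) -> bool:
    """Enforce the arm's connection cap when a follow is attempted."""
    cap = ARMS[assign(user_id)]
    return cap is None or current_connections < cap

print(assign("user_123"), may_follow("user_123", 149))
```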
A related question is whether social-media platforms should be customized for different societies or groups. Although today’s platforms seem to have largely negative effects on US and Western-Europe politics, the opposite might be true in emerging democracies (P. Lorenz-Spreen et al. Preprint at https://doi.org/hmq2; 2021). One study suggested that Facebook could reduce ethnic tensions in Bosnia–Herzegovina (N. Asimovic et al. Proc. Natl Acad. Sci. USA 118, e2022819118; 2021), and social media has helped Ukraine to rally support around the world for its resistance….(More)”.
The need to represent: How AI can help counter gender disparity in the news
Blog by Sabrina Argoub: “For the first in our new series of JournalismAI Community Workshops, we decided to look at three recent projects that demonstrate how AI can help raise awareness of the misrepresentation of women in the news.
The Political Misogynistic Discourse Monitor is a web application and API that journalists from AzMina, La Nación, CLIP, and DataCrítica developed to uncover hate speech against women on Twitter.
When Women Make Headlines is an analysis by The Pudding of the (mis)representation of women in news headlines, and how it has changed over time.
In the AIJO project, journalists from eight different organisations worked together to identify and mitigate biases in gender representation in news.
We invited Bàrbara Libório of AzMina, Sahiti Sarva of The Pudding, and Delfina Arambillet of La Nación to walk us through their projects and share insights on what they learned and how they taught the machine to recognise what constitutes bias and hate speech….(More)”.
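For a sense of the kind of pipeline such projects build on, here is a minimal sketch of scoring tweets with an off-the-shelf text classifier. It uses a publicly available general-purpose toxicity model purely as a stand-in; the Political Misogynistic Discourse Monitor’s own model, labels, and threshold will differ:

```python
# Sketch: flagging tweets with a text classifier. "unitary/toxic-bert" is a
# general-purpose toxicity model used here only as a stand-in for a
# purpose-built misogyny classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

tweets = [
    "You make a fair point, thanks for sharing.",
    "An abusive example tweet would go here.",
]

for tweet in tweets:
    result = classifier(tweet)[0]  # e.g. {"label": "toxic", "score": 0.98}
    if result["label"] == "toxic" and result["score"] > 0.9:
        print(f"flagged ({result['score']:.2f}): {tweet}")
```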