How harmful is social media?


Gideon Lewis-Kraus in The New Yorker: “In April, the social psychologist Jonathan Haidt published an essay in The Atlantic in which he sought to explain, as the piece’s title had it, “Why the Past 10 Years of American Life Have Been Uniquely Stupid.” Anyone familiar with Haidt’s work in the past half decade could have anticipated his answer: social media. Although Haidt concedes that political polarization and factional enmity long predate the rise of the platforms, and that there are plenty of other factors involved, he believes that the tools of virality—Facebook’s Like and Share buttons, Twitter’s Retweet function—have algorithmically and irrevocably corroded public life. He has determined that a great historical discontinuity can be dated with some precision to the period between 2010 and 2014, when these features became widely available on phones….

After Haidt’s piece was published, the Google Doc—“Social Media and Political Dysfunction: A Collaborative Review”—was made available to the public. Comments piled up, and a new section was added, at the end, to include a miscellany of Twitter threads and Substack essays that appeared in response to Haidt’s interpretation of the evidence. Some colleagues and kibbitzers agreed with Haidt. But others, though they might have shared his basic intuition that something in our experience of social media was amiss, drew upon the same data set to reach less definitive conclusions, or even mildly contradictory ones. Even after the initial flurry of responses to Haidt’s article disappeared into social-media memory, the document, insofar as it captured the state of the social-media debate, remained a lively artifact.

Near the end of the collaborative project’s introduction, the authors warn, “We caution readers not to simply add up the number of studies on each side and declare one side the winner.” The document runs to more than a hundred and fifty pages, and for each question there are affirmative and dissenting studies, as well as some that indicate mixed results. According to one paper, “Political expressions on social media and the online forum were found to (a) reinforce the expressers’ partisan thought process and (b) harden their pre-existing political preferences,” but, according to another, which used data collected during the 2016 election, “Over the course of the campaign, we found media use and attitudes remained relatively stable. Our results also showed that Facebook news use was related to modest over-time spiral of depolarization. Furthermore, we found that people who use Facebook for news were more likely to view both pro- and counter-attitudinal news in each wave. Our results indicated that counter-attitudinal exposure increased over time, which resulted in depolarization.” If results like these seem incompatible, a perplexed reader is given recourse to a study that says, “Our findings indicate that political polarization on social media cannot be conceptualized as a unified phenomenon, as there are significant cross-platform differences.”…(More)”.

Impediment of Infodemic on Disaster Policy Efficacy: Insights from Location Big Data


Paper by Xiaobin Shen, Natasha Zhang Foutz, and Beibei Li: “Infodemics impede the efficacy of business and public policies, particularly in disastrous times when high-quality information is in the greatest demand. This research proposes a multi-faceted conceptual framework to characterize an infodemic and then empirically assesses its impact on the core mitigation policy of a recent prominent disaster, the COVID-19 pandemic. Analyzing a half million records of COVID-related news media and social media, as well as .2 billion records of location data, via a multitude of methodologies, including text mining and spatio-temporal analytics, we uncover a number of interesting findings. First, the volume of COVID information has an inverted-U-shaped impact on individuals’ compliance with the lockdown policy. That is, a smaller volume encourages policy compliance, whereas an overwhelming volume discourages compliance, revealing the negative ramifications of excessive information about a disaster. Second, novel information boosts policy compliance, signifying the value of offering original and distinctive, rather than redundant, information to the public during a disaster. Third, misinformation exhibits a U-shaped influence unexplored by the literature, deterring policy compliance until a larger amount surfaces, diminishing its informational value and escalating public uncertainty. Overall, these findings demonstrate the power of information technology, such as media analytics and location sensing, in disaster management. They also illuminate the significance of strategic information management during disasters and the imperative need for cohesive efforts across governments, media, technology platforms, and the general public to curb future infodemics…(More)”.
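The inverted-U and U-shaped effects described above are curvilinear relationships, which are typically tested by adding a quadratic term to a regression. The sketch below is a minimal illustration of that idea on simulated data; the variable names and numbers are hypothetical and are not drawn from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: daily volume of disaster-related news and a
# compliance measure (e.g., share of devices staying home).
rng = np.random.default_rng(0)
volume = rng.uniform(0, 10, size=500)
compliance = 0.3 + 0.08 * volume - 0.007 * volume**2 + rng.normal(0, 0.05, 500)
df = pd.DataFrame({"volume": volume, "compliance": compliance})

# An inverted-U shows up as a positive linear term together with a
# negative, statistically significant quadratic term.
model = smf.ols("compliance ~ volume + I(volume ** 2)", data=df).fit()
print(model.params)  # expect volume > 0 and I(volume ** 2) < 0
```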

Magic Numbers


Essay by Alana Mohamed: “…The willingness to believe in the “algorithm” as though it were a kind of god is not entirely surprising. New technologies have long been incorporated into spiritual practices, especially during times of mass crisis. In the mid-to-late 19th century, emergent technologies from the lightbulb to the telephone called the limitations of the physical world into question. New spiritual leaders, beliefs, and full-blown religions cropped up, inspired by the invisible electric currents powering scientific developments. If we could summon light and sound by unseen forces, what other invisible specters lurked beneath the surface of everyday life?

The casualties of the U.S. Civil War gave birth to new spiritual practices, including contacting the dead through spirit photography and the telegraph dial. Practices like table rapping used fairly low-tech objects — walls, tables — as conduits to the spirit realm, where ghosts would tap out responses. The rapping noise was reminiscent of Morse code, leading to comparisons with the telegraph. In fact, in 1854, a U.S. senator campaigned for a scientific commission that would establish a “spiritual telegraph” between our world and the spiritual world. (He was unsuccessful.)

William Mumler’s practice of spirit photography is perhaps better known. Mumler claimed that he could photograph a dead relative or loved one when photographing a living subject. His most famous photograph depicts the widowed Mary Todd Lincoln with the shadowy image of her deceased husband holding her shoulder. Though Mumler was widely debunked as a fraud, the practice itself continued, even earning a book written in its defense by Sir Arthur Conan Doyle.

Similar investigations into otherworldly communication and esoteric knowledge would be mainstreamed after World War I, bolstered by the creation of the radio and wireless telegraphy. Amid a boom in table rapping, spirit photography, and the host of usual suspects, Thomas Edison spoke openly about his hopes to create a machine, based on early gramophones, to communicate with the dead, specifically referencing the work of mediums and spiritualists. Radio, in particular, provided a new way to think about the physical and spiritual worlds, with its language of tuning in, channels, frequencies, and wavelengths still employed today…(More)”.

The Impact of Public Transparency Infrastructure on Data Journalism: A Comparative Analysis between Information-Rich and Information-Poor Countries


Paper by Lindita Camaj, Jason Martin & Gerry Lanosga: “This study surveyed data journalists from 71 countries to compare how public transparency infrastructure influences data journalism practices around the world. Emphasizing cross-national differences in data access, results suggest that technical and economic inequalities that affect the implementation of open data infrastructures can produce unequal data access and widen the gap in data journalism practices between information-rich and information-poor countries. Further, while journalists operating in open data infrastructures are more likely to exhibit a dependency on pre-processed public data, journalists operating in closed data infrastructures are more likely to use Access to Information legislation. We discuss the implications of our results for understanding the development of data journalism models in cross-national contexts…(More)”

Social Engineering: How Crowdmasters, Phreaks, Hackers, and Trolls Created a New Form of Manipulative Communication


Open Access book by Robert W. Gehl and Sean T. Lawson: “Manipulative communication—from early twentieth-century propaganda to today’s online con artistry—examined through the lens of social engineering. The United States is awash in manipulated information about everything from election results to the effectiveness of medical treatments. Corporate social media is an especially good channel for manipulative communication, with Facebook a particularly willing vehicle for it. In Social Engineering, Robert Gehl and Sean Lawson show that online misinformation has its roots in earlier techniques: mass social engineering of the early twentieth century and interpersonal hacker social engineering of the 1970s, converging today into what they call “masspersonal social engineering.” As Gehl and Lawson trace contemporary manipulative communication back to earlier forms of social engineering, possibilities for amelioration become clearer.

The authors show how specific manipulative communication practices are a mixture of information gathering, deception, and truth-indifferent statements, all with the instrumental goal of getting people to take actions the social engineer wants them to. Yet the term “fake news,” they claim, reduces everything to a true/false binary that fails to encompass the complexity of manipulative communication or to map onto many of its practices. They pay special attention to concepts and terms used by hacker social engineers, including the hacker concept of “bullshitting,” which the authors describe as a truth-indifferent mix of deception, accuracy, and sociability. They conclude with recommendations for how society can undermine masspersonal social engineering and move toward healthier democratic deliberation…(More)”.

The Power of Platforms: Shaping Media and Society


Book by Rasmus Kleis Nielsen and Sarah Anne Ganter: “More people today get news via Facebook and Google than from any news organization in history, and smaller platforms like Twitter serve news to more users than all but the biggest media companies. In The Power of Platforms, Rasmus Kleis Nielsen and Sarah Anne Ganter draw on original interviews and other qualitative evidence to analyze the “platform power” that a few technology companies have come to exercise in public life, the reservations publishers have about platforms, as well as the reasons why publishers often embrace them nonetheless.

Nielsen and Ganter trace how relations between publishers and platforms have evolved across the United States, France, Germany, and the United Kingdom. They identify the new, distinct relational and generative forms of power that platforms exercise as people increasingly rely on them to find and access news. Most of the news content we rely on is still produced by journalists working for news organizations, but Nielsen and Ganter chronicle rapid change in the ways in which we discover news, how it is distributed, where decisions are made on what to display (and what not), and in who profits from these flows of information. By examining the different ways publishers have responded to these changes and how various platform companies have in turn handled the increasingly important and controversial role they play in society, The Power of Platforms draws out the implications of a fundamental feature of the contemporary world that we all need to understand: previously powerful and relatively independent institutions like the news media are increasingly in a position similar to that of ordinary individual users, simultaneously empowered by and dependent upon a small number of centrally placed and powerful platforms…(More)”.

Lexota


Press Release: “Today, Global Partners Digital (GPD), the Centre for Human Rights at the University of Pretoria (CHR), Article 19 West Africa, the Collaboration on International ICT Policy in East and Southern Africa (CIPESA) and PROTEGE QV jointly launch LEXOTA—Laws on Expression Online: Tracker and Analysis, a new interactive tool to help human rights defenders track and analyse government responses to online disinformation across Sub-Saharan Africa. 

Expanding on work started in 2020, LEXOTA offers a comprehensive overview of laws, policies and other government actions on disinformation in every country in Sub-Saharan Africa. The tool is powered by multilingual data and context-sensitive insight from civil society organisations and uses a detailed framework to assess whether government responses to disinformation are human rights-respecting. A dynamic comparison feature empowers users to examine the regulatory approaches of different countries and to compare how different policy responses measure up against human rights standards, providing them with insights into trends across the region as well as the option to examine country-specific analyses. 

In recent years, governments in Sub-Saharan Africa have increasingly responded to disinformation through content-based restrictions and regulations, which often pose significant risks to individuals’ right to freedom of expression. LEXOTA was developed to support those working to defend internet freedom and freedom of expression across the region, by making data on these government actions accessible and comparable…(More)”.

The European Data Protection Supervisor (EDPS) launches pilot phase of two social media platforms


Press Release: “The European Data Protection Supervisor (EDPS) launches today the public pilot phase of two social media platforms: EU Voice and EU Video.

EU institutions, bodies, offices and agencies (EUIs) participating in the pilot phase of these platforms will be able to interact with the public by sharing short texts, images and videos on EU Voice; and by sharing, uploading, and commenting on videos and podcasts on EU Video.

The two platforms are part of decentralised, free and open-source social media networks that connect users in a privacy-oriented environment, based on Mastodon and PeerTube software. By launching the pilot phase of EU Voice and EU Video, the EDPS aims to contribute to the European Union’s strategy for data and digital sovereignty to foster Europe’s independence in the digital world.
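Because EU Voice runs standard Mastodon software, it exposes the usual Mastodon REST API. The sketch below shows how a client might publish a post to a Mastodon-compatible instance; the instance URL and access token are placeholders, and any real use would need credentials issued by the instance itself.

```python
import requests

# Placeholder values: substitute the actual EU Voice host and a token
# issued to your account by that instance.
INSTANCE = "https://example-mastodon-instance.eu"
TOKEN = "YOUR_ACCESS_TOKEN"

# Publish a public status via the standard Mastodon statuses endpoint.
resp = requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={"status": "Hello from a Mastodon-compatible client", "visibility": "public"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["url"])  # link to the newly created post
```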

Wojciech Wiewiórowski, EDPS, said: “With the pilot launch of EU Voice and EU Video, we aim to offer alternative social media platforms that prioritise individuals and their rights to privacy and data protection. In concrete terms this means, for example, that EU Voice and EU Video do not rely on transfers of personal data to countries outside the European Union and the European Economic Area; there are no advertisements on the platforms; and there is no profiling of individuals that may use the platforms. These measures, amongst others, give individuals the choice on and control over how their personal data is used.”

The EDPS and the European Commission’s Directorate General for Informatics (DIGIT) have collaborated closely throughout the development of EU Voice and EU Video. In line with the goals of the Commission’s Open Source Software Strategy 2020 – 2023, DIGIT’s technical assistance to the EDPS proves the importance of inter-institutional cooperation on open source as an enabler of privacy rights and data protection, therefore contributing to the EU’s technological sovereignty.

The launch of the pilot phase of EU Voice and EU Video will help the EDPS to test the platforms in practice by collecting feedback from participating EUIs. The EDPS hopes that this first step will mark a continuity in the use of privacy-compliant social media platforms…(More)”.

Shadowbanning Is Big Tech’s Big Problem


Essay by Gabriel Nicholas: “Sometimes, it feels like everyone on the internet thinks they’ve been shadowbanned. Republican politicians have been accusing Twitter of shadowbanning—that is, quietly suppressing their activity on the site—since at least 2018, when, for a brief period, the service stopped autofilling the usernames of Representatives Jim Jordan, Mark Meadows, and Matt Gaetz, as well as other prominent Republicans, in its search bar. Black Lives Matter activists have been accusing TikTok of shadowbanning since 2020, when, at the height of the George Floyd protests, it sharply reduced how frequently their videos appeared on users’ “For You” pages. …When the word shadowban first appeared in the web-forum backwaters of the early 2000s, it meant something more specific. It was a way for online-community moderators to deal with trolls, shitposters, spam bots, and anyone else they deemed harmful: by making their posts invisible to everyone but the posters themselves. But throughout the 2010s, as the social web grew into the world’s primary means of sharing information and as content moderation became infinitely more complicated, the word became more common, and much more muddled. Today, people use shadowban to refer to the wide range of ways platforms may remove or reduce the visibility of their content without telling them….
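The original forum-era mechanism the essay describes is simple to picture in code: the flagged account's posts stay visible to their author, so nothing appears to change, while every other viewer's feed silently omits them. The sketch below is a toy illustration of that visibility check; the data model and names are hypothetical, not any platform's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

# Accounts flagged by moderators; their posts are hidden from everyone
# except themselves, so the posters never notice.
shadowbanned = {"spam_bot_42"}

def visible_posts(posts: list[Post], viewer: str) -> list[Post]:
    return [
        p for p in posts
        if p.author not in shadowbanned or p.author == viewer
    ]

posts = [Post("alice", "hello"), Post("spam_bot_42", "buy now!!")]
print([p.text for p in visible_posts(posts, viewer="alice")])        # ['hello']
print([p.text for p in visible_posts(posts, viewer="spam_bot_42")])  # ['hello', 'buy now!!']
```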

According to new research I conducted at the Center for Democracy and Technology (CDT), nearly one in 10 U.S. social-media users believes they have been shadowbanned, and most often they believe it is for their political beliefs or their views on social issues. In two dozen interviews I held with people who thought they had been shadowbanned or worked with people who thought they had, I repeatedly heard users say that shadowbanning made them feel not just isolated from online discourse, but targeted, by a sort of mysterious cabal, for breaking a rule they didn’t know existed. It’s not hard to imagine what happens when social-media users believe they are victims of conspiracy…(More)”.

Internet ‘algospeak’ is changing our language in real time, from ‘nip nops’ to ‘le dollar bean’


Article by Taylor Lorenz: “Algospeak” is becoming increasingly common across the Internet as people seek to bypass content moderation filters on social media platforms such as TikTok, YouTube, Instagram and Twitch.

Algospeak refers to code words or turns of phrase users have adopted in an effort to create a brand-safe lexicon that will avoid getting their posts removed or down-ranked by content moderation systems. For instance, in many online videos, it’s common to say “unalive” rather than “dead,” “SA” instead of “sexual assault,” or “spicy eggplant” instead of “vibrator.”
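The filters these code words are designed to route around often boil down to matching posts against a list of flagged terms. The sketch below is a toy illustration of how a word-for-word substitution can slip the same meaning past such a filter; the blocklist and substitution table are purely illustrative, not any platform's actual moderation rules.

```python
import re

# Illustrative blocklist and algospeak substitutions; real moderation
# systems are far more sophisticated than simple keyword matching.
BLOCKLIST = {"dead", "sexual assault"}
ALGOSPEAK = {"dead": "unalive", "sexual assault": "SA"}

def is_flagged(text: str) -> bool:
    """Return True if the text contains any blocklisted term."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lowered) for term in BLOCKLIST)

original = "The character is dead by the end of the film."
rewritten = original
for term, code_word in ALGOSPEAK.items():
    rewritten = rewritten.replace(term, code_word)

print(is_flagged(original))   # True  -> likely removed or down-ranked
print(is_flagged(rewritten))  # False -> same meaning slips past the filter
```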

As the pandemic pushed more people to communicate and express themselves online, algorithmic content moderation systems have had an unprecedented impact on the words we choose, particularly on TikTok, and given rise to a new form of internet-driven Aesopian language.

Unlike other mainstream social platforms, the primary way content is distributed on TikTok is through an algorithmically curated “For You” page; having followers doesn’t guarantee people will see your content. This shift has led average users to tailor their videos primarily toward the algorithm, rather than a following, which means abiding by content moderation rules is more crucial than ever.

When the pandemic broke out, people on TikTok and other apps began referring to it as the “Backstreet Boys reunion tour” or calling it the “panini” or “panda express” as platforms down-ranked videos mentioning the pandemic by name in an effort to combat misinformation. When young people began to discuss struggling with mental health, they talked about “becoming unalive” in order to have frank conversations about suicide without algorithmic punishment. Sex workers, who have long been censored by moderation systems, refer to themselves on TikTok as “accountants” and use the corn emoji as a substitute for the word “porn.”

As discussions of major events are filtered through algorithmic content delivery systems, more users are bending their language. Recently, in discussing the invasion of Ukraine, people on YouTube and TikTok have used the sunflower emoji to signify the country. When encouraging fans to follow them elsewhere, users will say “blink in lio” for “link in bio.”

Euphemisms are especially common in radicalized or harmful communities. Pro-anorexia eating disorder communities have long adopted variations on moderated words to evade restrictions. One paper from the School of Interactive Computing at the Georgia Institute of Technology found that the complexity of such variants even increased over time. Last year, anti-vaccine groups on Facebook began changing their names to “dance party” or “dinner party,” and anti-vaccine influencers on Instagram used similar code words, referring to vaccinated people as “swimmers.”

Tailoring language to avoid scrutiny predates the Internet. Many religions have avoided uttering the devil’s name lest they summon him, while people living in repressive regimes developed code words to discuss taboo topics…(More)”.