Digital Literacy Doesn’t Stop the Spread of Misinformation


Article by David Rand and Nathaniel Sirlin: “There has been tremendous concern recently over misinformation on social media. It was a pervasive topic during the 2020 U.S. presidential election, continues to be an issue during the COVID-19 pandemic and plays an important part in Russian propaganda efforts in the war on Ukraine. This concern is plenty justified, as the consequences of believing false information are arguably shaping the future of nations and greatly affecting our individual and collective health.

One popular theory about why some people fall for misinformation they encounter online is that they lack digital literacy skills, a nebulous term that describes how a person navigates digital spaces. Someone lacking digital literacy skills, the thinking goes, may be more susceptible to believing—and sharing—false information. As a result, less digitally literate people may play a significant role in the spread of misinformation.

This argument makes intuitive sense. Yet very little research has actually investigated the link between digital literacy and susceptibility to believing false information. There’s even less understanding of the potential link between digital literacy and what people share on social media. As researchers who study the psychology of online misinformation, we wanted to explore these potential associations….

When we looked at the connection between digital literacy and the willingness to share false information with others through social media, however, the results were different. People who were more digitally literate were just as likely to say they’d share false articles as people who lacked digital literacy. Like the first finding, the (lack of) connection between digital literacy and sharing false news was not affected by political party affiliation or whether the topic was politics or the pandemic…(More)”

Meta launches Sphere, an AI knowledge tool based on open web content, used initially to verify citations on Wikipedia


Article by Ingrid Lunden: “Facebook may be infamous for helping to usher in the era of “fake news”, but it’s also tried to find a place for itself in the follow-up: the never-ending battle to combat it. In the latest development on that front, Facebook parent Meta today announced a new tool called Sphere, an AI system built around the concept of tapping the vast repository of information on the open web to provide a knowledge base for AI and other systems to work from. Sphere’s first application, Meta says, is Wikipedia, where it’s being used in a production phase (not on live entries) to automatically scan entries and identify when their citations are strongly or weakly supported.

The research team has open-sourced Sphere, which is currently based on 134 million public web pages. Here is how it works in action…(More)”.
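
As a rough illustration of the retrieve-then-verify pattern the article describes, here is a minimal, self-contained sketch; the toy corpus and the lexical-overlap scoring are stand-ins for illustration only, not Sphere’s actual retrieval index or verification model:

```python
# Illustrative sketch of retrieve-then-verify citation checking.
# The toy corpus and overlap scorer below are hypothetical stand-ins,
# not Sphere's actual retrieval index or verification model.
from dataclasses import dataclass

@dataclass
class Passage:
    url: str
    text: str

# A tiny stand-in for a web-scale corpus of public pages.
CORPUS = [
    Passage("https://example.org/a", "The Eiffel Tower was completed in 1889 in Paris."),
    Passage("https://example.org/b", "Paris is the capital and largest city of France."),
]

def overlap_score(claim: str, text: str) -> float:
    """Crude lexical-overlap proxy for how well a passage supports a claim."""
    c, t = set(claim.lower().split()), set(text.lower().split())
    return len(c & t) / max(len(c), 1)

def check_citation(claim: str, cited_url: str, threshold: float = 0.5) -> str:
    """Flag whether a claim's citation looks strongly or weakly supported."""
    best = max(CORPUS, key=lambda p: overlap_score(claim, p.text))
    score = overlap_score(claim, best.text)
    if score < threshold:
        return "weakly supported"
    return "strongly supported" if best.url == cited_url else f"better source found: {best.url}"

print(check_citation("The Eiffel Tower was completed in 1889.", "https://example.org/a"))
```

In a production system the overlap scorer would be replaced by dense retrieval over hundreds of millions of pages and a learned entailment model, but the control flow (retrieve candidate evidence, score support, flag weak citations) follows the same shape.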

Datafication of Public Opinion and the Public Sphere


Book by Slavko Splichal: “The book, anchored in stimulating debates about the Enlightenment ideas of publicness, analyses historical changes in the core phenomena of publicness: possibilities, conditions and obstacles to developing a public sphere in which the public reflexively creates, articulates and expresses public opinion. It is focused on the historical transformation from “public use of reason” through the identification of “public opinion” in opinion polls to contemporary opinion mining, in which the Enlightenment idea of public expression of opinion has been displaced by the technology of extracting opinions. It heralds a new critical impetus in the theory and research of publicness at a time when critical social thought is sharply criticising and even abandoning the notion of the public sphere, much like the notion of public opinion decades ago, due to its predominantly administrative use…(More)”.

Social Noise: What Is It, and Why Should We Care?


Article by Tara Zimmerman: “As social media, online relationships, and perceived social expectations on platforms such as Facebook play a greater role in people’s lives, a new phenomenon has emerged: social noise. Social noise is the influence of personal and relational factors on information received, which can confuse, distort, or even change the intended message. Influenced by social noise, people are likely to moderate their response to information based on cues regarding what behavior is acceptable or desirable within their social network. This may be done consciously or unconsciously as individuals strive to present themselves in ways that increase their social capital. For example, a user might like or share information posted by a friend or family member as a show of support despite having no strong feelings about the information itself. Similarly, someone might refrain from liking, sharing, or commenting on information they strongly agree with because they believe others in their social network would disapprove.

This study reveals that social media users’ awareness of observation by others does impact their information behavior. Efforts to craft a personal reputation, build or maintain relationships, pursue important commitments, and manage conflict all influence the observable information behavior of social media users. As a result, observable social media information behavior may not be an accurate reflection of an individual’s true thoughts and beliefs. This is particularly interesting in light of the role social media plays in the spread of mis- and disinformation…(More)”.

How China uses search engines to spread propaganda


Blog by Jessica Brandt and Valerie Wirtschafter: “Users come to search engines seeking honest answers to their queries. On a wide range of issues—from personal health, to finance, to news—search engines are often the first stop for those looking to get information online. But as authoritarian states like China increasingly use online platforms to disseminate narratives aimed at weakening their democratic competitors, these search engines represent a crucial battleground in their information war with rivals. For Beijing, search engines are a key, and underappreciated, vector for spreading propaganda to audiences around the world.

On a range of topics of geopolitical importance, Beijing has exploited search engine results to disseminate state-backed media that amplify the Chinese Communist Party’s propaganda. As we demonstrate in our recent report, published by the Brookings Institution in collaboration with the German Marshall Fund’s Alliance for Securing Democracy, users turning to search engines for information on Xinjiang, the site of the CCP’s egregious human rights abuses of the region’s Uyghur minority, or the origins of the coronavirus pandemic are surprisingly likely to encounter articles on these topics published by Chinese state-media outlets. By prominently surfacing this type of content, search engines may play a key role in Beijing’s effort to shape external perceptions, which makes it crucial that platforms—along with authoritative outlets that syndicate state-backed content without clear labeling—do more to address their role in spreading these narratives…(More)“.

The Truth in Fake News: How Disinformation Laws Are Reframing the Concepts of Truth and Accuracy on Digital Platforms


Paper by Paolo Cavaliere: “The European Union’s (EU) strategy to address the spread of disinformation, and most notably the Code of Practice on Disinformation and the forthcoming Digital Services Act, tasks digital platforms with a range of actions to minimise the distribution of issue-based and political adverts that are verifiably false or misleading. This article discusses the implications of the EU’s strategy, focusing on its categorical approach: specifically, what it means to conceptualise disinformation as a form of advertisement, and by what standards digital platforms are expected to assess the truthful or misleading nature of the content they distribute as a consequence of this categorisation. The analysis will show how the emerging EU anti-disinformation framework marks a departure from the European Court of Human Rights’ consolidated standards of review for public interest and commercial speech and the tests utilised to assess their accuracy….(More)”.

What Happened to Consensus Reality?


Essay by Jon Askonas: “Do you feel that people you love and respect are going insane? That formerly serious thinkers or commentators are increasingly unhinged, willing to subscribe to wild speculations or even conspiracy theories? Do you feel that, even if there’s some blame to go around, it’s the people on the other side of the aisle who have truly lost their minds? Do you wonder how they can possibly be so blind? Do you feel bewildered by how absurd everything has gotten? Do many of your compatriots seem in some sense unintelligible to you? Do you still consider them your compatriots?

If you feel this way, you are not alone.

We have come a long way from the optimism of the 1990s and 2000s about how the Internet would usher in a new golden era, expanding the domain of the information society to the whole world, with democracy sure to follow. Now we hear that the Internet foments misinformation and erodes democracy. Yet as dire as these warnings are, they are usually followed by suggestions that, with more scrutiny of tech CEOs, more aggressive content moderation, and more fact-checking, Americans might yet return to accepting the same model of reality. Last year, a New York Times article titled “How the Biden Administration Can Help Solve Our Reality Crisis” suggested creating a federal “reality czar.”

This is a fantasy. The breakup of consensus reality — a shared sense of facts, expectations, and concepts about the world — predates the rise of social media and is driven by much deeper economic and technological currents.

Postwar Americans enjoyed a world where the existence of an objective, knowable reality just seemed like common sense, where alternate facts belonged only to fringe realms of the deluded or deluding. But a shared sense of reality is not natural. It is the product of social institutions that were once so powerful they could hold together a shared picture of the world, but are now well along a path of decline. In the hope of maintaining their power, some have even begun to abandon the project of objectivity altogether.

Attempts to restore consensus reality by force — the current implicit project of the establishment — are doomed to failure. The only question now is how we will adapt our institutions to a life together where a shared picture of the world has been shattered.

This series aims to trace the forces that broke consensus reality. More than a history of the rise and fall of facts, these essays attempt to show a technological reordering of social reality unlike any before encountered, and an accompanying civilizational shift not seen in five hundred years…(More)”.

Meet the fact-checkers decoding Sri Lanka’s meltdown


Article by Nilesh Christopher: “On the evening of May 3, the atmosphere at Galle Face Green, an esplanade along the coastline of Sri Lanka’s capital city of Colombo, was carnivalesque. Parents strolled on sidewalks with toddlers hoisted on their shoulders. Teenagers wearing bandanas played the flute and blew plastic horns. People climbed atop makeshift podiums to address the crowds, greeted by scattered applause. 

The crowd of a few hundred was part of a series of protests that had been underway since mid-March, demanding the ouster of President Gotabaya Rajapaksa. For months, the country has been trapped in a brutal economic crisis: Sri Lanka is currently unable to pay for imports of essentials, such as food, medicines, and fuel. Populist tax cuts, an abrupt ban on fertilizer imports that decimated crop yields, and the collapse of tourism during the pandemic all helped to push the country into the worst economic crisis it has faced since gaining independence in 1948.

The island nation owes nearly $7 billion this year and has next to no foreign reserves left. “We don’t have any gas. We don’t have fuel and some food items. We lose power for three to four hours daily now,” Nalin Chamara, 42, a hotelier protesting with his family and children, told Rest of World. Meanwhile, the presidential family at one point controlled around 70% of the nation’s budget and ran it as a family business, spending billions of dollars of borrowed money on vanity projects, such as an extravagant airport and cricket stadium that now sit almost entirely unused.

Yudhanjaya Wijeratne walked among the Galle Face Green crowds, surveying the scene. He pointed out where demonstrators had jury-rigged their own electricity supply by welding solar panels atop an open truck and connecting them to a battery. The power generated was being used to charge over two dozen smartphones inside a big blue tent, which also contained a library housing 15,000 books. “This is what Sri Lankans will do if you let them build stuff. Fucking build infrastructure from scratch,” Wijeratne said. The protest featured a giant middle finger monument made of plastic bottles, directed at the Rajapaksas. “Our real educational export should be B.Sc. in protest,” Wijeratne said.

Wijeratne, 29 years old, is best known as the author of Numbercaste, a science fiction novel about a near-future world where people’s importance in society is decided based on the all-powerful Number, a credit score determined by their social circle and social network data. But he is also the chief executive of Watchdog, a research collective based in Colombo that uses fact-checking and open source intelligence (OSINT) methods to investigate Sri Lanka’s ongoing crisis. As part of its work, he and his 12-member team of coders, journalists, economists, and students track, time stamp, geolocate, and document videos of protests shared online.

Watchdog’s protest tracker has emerged as the most comprehensive online archive of the historic events unfolding in Sri Lanka. Its data set, which comprises 597 different protests and 49 conflicts, has been used by global news organizations to demonstrate the extent of public pushback.
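
To give a sense of what an entry in a tracker like this might store, here is a minimal sketch of a protest record; the field names, schema, and sample values are hypothetical illustrations, not Watchdog’s actual data model:

```python
# Hypothetical sketch of a single entry in an OSINT protest tracker of the kind
# described above; field names and values are illustrative, not Watchdog's schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ProtestRecord:
    event_id: str
    timestamp: datetime            # when the footage was recorded or posted
    latitude: float                # geolocated from landmarks visible in the video
    longitude: float
    location_name: str
    source_urls: list[str] = field(default_factory=list)  # videos/posts documenting the event
    verified: bool = False         # set once time stamp and location are independently confirmed

record = ProtestRecord(
    event_id="galle-face-2022-05-03",
    timestamp=datetime(2022, 5, 3, 18, 0),
    latitude=6.9240,
    longitude=79.8450,
    location_name="Galle Face Green, Colombo",
    source_urls=["https://example.com/protest-video"],  # placeholder URL
)
print(record)
```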

“[Our] core mission is simple,” Wijeratne told Rest of World. “We want to help people understand the infrastructure they use. The concrete, the laws, the policies, and the social contracts that they live under. We want to help people understand the causality of how they came to be and how they operate.”…(More)”.

How harmful is social media?


Gideon Lewis-Kraus in The New Yorker: “In April, the social psychologist Jonathan Haidt published an essay in The Atlantic in which he sought to explain, as the piece’s title had it, “Why the Past 10 Years of American Life Have Been Uniquely Stupid.” Anyone familiar with Haidt’s work in the past half decade could have anticipated his answer: social media. Although Haidt concedes that political polarization and factional enmity long predate the rise of the platforms, and that there are plenty of other factors involved, he believes that the tools of virality—Facebook’s Like and Share buttons, Twitter’s Retweet function—have algorithmically and irrevocably corroded public life. He has determined that a great historical discontinuity can be dated with some precision to the period between 2010 and 2014, when these features became widely available on phones….

After Haidt’s piece was published, the Google Doc—“Social Media and Political Dysfunction: A Collaborative Review”—was made available to the public. Comments piled up, and a new section was added, at the end, to include a miscellany of Twitter threads and Substack essays that appeared in response to Haidt’s interpretation of the evidence. Some colleagues and kibbitzers agreed with Haidt. But others, though they might have shared his basic intuition that something in our experience of social media was amiss, drew upon the same data set to reach less definitive conclusions, or even mildly contradictory ones. Even after the initial flurry of responses to Haidt’s article disappeared into social-media memory, the document, insofar as it captured the state of the social-media debate, remained a lively artifact.

Near the end of the collaborative project’s introduction, the authors warn, “We caution readers not to simply add up the number of studies on each side and declare one side the winner.” The document runs to more than a hundred and fifty pages, and for each question there are affirmative and dissenting studies, as well as some that indicate mixed results. According to one paper, “Political expressions on social media and the online forum were found to (a) reinforce the expressers’ partisan thought process and (b) harden their pre-existing political preferences,” but, according to another, which used data collected during the 2016 election, “Over the course of the campaign, we found media use and attitudes remained relatively stable. Our results also showed that Facebook news use was related to modest over-time spiral of depolarization. Furthermore, we found that people who use Facebook for news were more likely to view both pro- and counter-attitudinal news in each wave. Our results indicated that counter-attitudinal exposure increased over time, which resulted in depolarization.” If results like these seem incompatible, a perplexed reader is given recourse to a study that says, “Our findings indicate that political polarization on social media cannot be conceptualized as a unified phenomenon, as there are significant cross-platform differences.”…(More)”.

Impediment of Infodemic on Disaster Policy Efficacy: Insights from Location Big Data


Paper by Xiaobin Shen, Natasha Zhang Foutz, and Beibei Li: “Infodemics impede the efficacy of business and public policies, particularly in disastrous times when high-quality information is in the greatest demand. This research proposes a multi-faceted conceptual framework to characterize an infodemic and then empirically assesses its impact on the core mitigation policy of a recent prominent disaster, the COVID-19 pandemic. Analyzing half a million records of COVID-related news media and social media, as well as 0.2 billion records of location data, via a multitude of methodologies, including text mining and spatio-temporal analytics, we uncover a number of interesting findings. First, the volume of COVID information has an inverted-U-shaped impact on individuals’ compliance with the lockdown policy. That is, a smaller volume encourages policy compliance, whereas an overwhelming volume discourages compliance, revealing negative ramifications of excessive information about a disaster. Second, novel information boosts policy compliance, signifying the value of offering original and distinctive, instead of redundant, information to the public during a disaster. Third, misinformation exhibits a U-shaped influence unexplored by the literature, deterring policy compliance until a larger amount surfaces, diminishing informational value and escalating public uncertainty. Overall, these findings demonstrate the power of information technology, such as media analytics and location sensing, in disaster management. They also illuminate the significance of strategic information management during disasters and the imperative need for cohesive efforts across governments, media, technology platforms, and the general public to curb future infodemics…(More)”.
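
As a rough illustration of how an inverted-U relationship of this kind can be detected, here is a minimal sketch that fits a quadratic regression of compliance on information volume; the variable names and synthetic data are hypothetical, not the paper’s actual dataset or estimation strategy:

```python
# Illustrative sketch of detecting an inverted-U relationship between information
# volume and policy compliance. Synthetic data and variable names are hypothetical
# stand-ins, not the paper's actual dataset or methodology.
import numpy as np

rng = np.random.default_rng(0)
info_volume = rng.uniform(0, 10, 500)                           # daily information volume (arbitrary units)
compliance = 0.3 + 0.12 * info_volume - 0.012 * info_volume**2  # assumed inverted-U ground truth
compliance += rng.normal(0, 0.05, 500)                          # noise in observed compliance

# Fit compliance = b2*volume^2 + b1*volume + b0; a negative b2 indicates an inverted U.
b2, b1, b0 = np.polyfit(info_volume, compliance, deg=2)
print(f"quadratic term: {b2:.4f} (negative => inverted U)")
print(f"compliance peaks at volume ≈ {-b1 / (2 * b2):.2f}")
```

The turning point implied by the fitted coefficients is the volume beyond which additional information is associated with lower, rather than higher, compliance.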