Can Social Media Rhetoric Incite Hate Incidents? Evidence from Trump’s “Chinese Virus” Tweets


Paper by Andy Cao, Jason M. Lindo & Jiee Zhong: “We investigate whether Donald Trump’s “Chinese Virus” tweets contributed to the rise of anti-Asian incidents. We find that the number of incidents spiked following Trump’s initial “Chinese Virus” tweets and the subsequent dramatic rise in internet search activity for the phrase. Difference-in-differences and event-study analyses leveraging spatial variation indicate that this spike in anti-Asian incidents was significantly more pronounced in counties that supported Donald Trump in the 2016 presidential election relative to those that supported Hillary Clinton. We estimate that anti-Asian incidents spiked by 4000 percent in Trump-supporting counties, over and above the spike observed in Clinton-supporting counties…(More)”.

Is This the Beginning of the End of the Internet?


Article by Charlie Warzel: “…occasionally, something happens that is so blatantly and obviously misguided that trying to explain it rationally makes you sound ridiculous. Such is the case with the Fifth Circuit Court of Appeals’s recent ruling in NetChoice v. Paxton. Earlier this month, the court upheld a preposterous Texas law stating that online platforms with more than 50 million monthly active users in the United States no longer have First Amendment rights regarding their editorial decisions. Put another way, the law tells big social-media companies that they can’t moderate the content on their platforms. YouTube purging terrorist-recruitment videos? Illegal. Twitter removing a violent cell of neo-Nazis harassing people with death threats? Sorry, that’s censorship, according to Andy Oldham, a judge on the U.S. Court of Appeals for the Fifth Circuit and the former general counsel to Texas Governor Greg Abbott.

A state compelling social-media companies to host all user content without restrictions isn’t merely, as the First Amendment litigation lawyer Ken White put it on Twitter, “the most angrily incoherent First Amendment decision I think I’ve ever read.” It’s also the type of ruling that threatens to blow up the architecture of the internet. To understand why requires some expertise in First Amendment law and content-moderation policy, and a grounding in what makes the internet a truly transformational technology. So I called up some legal and tech-policy experts and asked them to explain the Fifth Circuit ruling—and its consequences—to me as if I were a precocious 5-year-old with a strange interest in jurisprudence…(More)”

Whataboutism


Essay by B.D. McClay: “Attention is finite, the record of how we spend it public, and it is easy enough to check if somebody who tweets every day about Ukraine has ever tweeted about Yemen. Many people are inclined to give somebody they trust a pass; behavior that might attract loud condemnation if done by a stranger might be ignored if done by a friend. Sometimes, such inconsistencies, added up, indicate that somebody is untrustworthy, that her commitments are insincere, and that there is something manipulative about her public persona. But most of the time, I would hazard, they indicate that people do not live their lives striving for perfect consistency….

The Internet, however, has only one currency, and that currency is attention. On the Internet, we endlessly raise awareness, we platform and deplatform, we signal-boost and call out, and we argue about where our attention should be directed, and how. What we pay attention to and the language in which we pay attention are the only realities worth considering, which is one reason why stories are so often framed by the idea that nobody is talking about a problem, when the problem is often quite endlessly talked about—just not solved. Why isn’t the media covering this story? is a common refrain that is just as often accompanied by a link to an article about the story, which is how the complainer learned about it in the first place.

Attention can be paid and registered in many forms, but you pay attention online by making it known that you are paying attention. Your own expenditure is worthless unless other people are paying attention to you. As they do in regard to the currency of the analog world, people feel as though they get to judge how other people pay attention. Even though most actions are undertaken with some idea of gaining attention, to do something out of a blatant desire to attract attention is gauche and discrediting. People whose job is to translate attention into real money—celebrities, “influencers,” and so on—are often left walking a thin and ridiculous line. They must draw attention to some larger event going on in the world lest they be judged selfish, but their attempts to do so mostly underscore that drawing attention to something means very little…(More)”.

Who Is Falling for Fake News?


Article by Angie Basiouny: “People who read fake news online aren’t doomed to fall into a deep echo chamber where the only sound they hear is their own ideology, according to a revealing new study from Wharton.

Surprisingly, readers who regularly browse fake news stories served up by social media algorithms are more likely to diversify their news diet by seeking out mainstream sources. These well-rounded news junkies make up more than 97% of online readers, compared with the scant 2.8% who consume online fake news exclusively.

“We find that these echo chambers that people worry about are very shallow. This idea that the internet is creating an echo chamber is just not holding out to be true,” said Senthil Veeraraghavan, a Wharton professor of operations, information and decisions.

Veeraraghavan is co-author of the paper, “Does Fake News Create Echo Chambers?” It was also written by Ken Moon, Wharton professor of operations, information and decisions, and Jiding Zhang, an assistant operations management professor at New York University Shanghai who earned her doctorate at Wharton.

The study, which examined the browsing activity of nearly 31,000 households during 2017, offers empirical evidence that goes against popular beliefs about echo chambers. While echo chambers certainly are dark and dangerous places, they aren’t metaphorical black holes that suck in every person who reads an article about, say, Obama birtherism theory or conspiracies about COVID-19 vaccines. The study found that households exposed to fake news actually increase their exposure to mainstream news by 9.1%.

“We were surprised, although we were very aware going in that there was much that we did not know,” Moon said. “One thing we wanted to see is how much fake news is out there. How do we figure out what’s fake and what’s not, and who is producing the fake news and why? The economic structure of that matters from a business perspective.”…(More)”

Digital Literacy Doesn’t Stop the Spread of Misinformation


Article by David Rand and Nathaniel Sirlin: “There has been tremendous concern recently over misinformation on social media. It was a pervasive topic during the 2020 U.S. presidential election, continues to be an issue during the COVID-19 pandemic and plays an important part in Russian propaganda efforts in the war on Ukraine. This concern is plenty justified, as the consequences of believing false information are arguably shaping the future of nations and greatly affecting our individual and collective health.

One popular theory about why some people fall for misinformation they encounter online is that they lack digital literacy skills, a nebulous term that describes how a person navigates digital spaces. Someone lacking digital literacy skills, the thinking goes, may be more susceptible to believing—and sharing—false information. As a result, less digitally literate people may play a significant role in the spread of misinformation.

This argument makes intuitive sense. Yet very little research has actually investigated the link between digital literacy and susceptibility to believing false information. There’s even less understanding of the potential link between digital literacy and what people share on social media. As researchers who study the psychology of online misinformation, we wanted to explore these potential associations….

When we looked at the connection between digital literacy and the willingness to share false information with others through social media, however, the results were different. People who were more digitally literate were just as likely to say they’d share false articles as people who lacked digital literacy. Like the first finding, the (lack of) connection between digital literacy and sharing false news was not affected by political party affiliation or whether the topic was politics or the pandemic…(More)”

Meta launches Sphere, an AI knowledge tool based on open web content, used initially to verify citations on Wikipedia


Article by Ingrid Lunden: “Facebook may be infamous for helping to usher in the era of “fake news”, but it’s also tried to find a place for itself in the follow-up: the never-ending battle to combat it. In the latest development on that front, Facebook parent Meta today announced a new tool called Sphere, an AI system built around the concept of tapping the vast repository of information on the open web to provide a knowledge base for AI and other systems to work with. Sphere’s first application, Meta says, is Wikipedia, where it’s being used in a production phase (not live entries) to automatically scan entries and identify when citations in its entries are strongly or weakly supported.

The research team has open sourced Sphere — which is currently based on 134 million public web pages. Here is how it works in action…(More)”.

Datafication of Public Opinion and the Public Sphere


Book by Slavko Splichal: “The book, anchored in stimulating debates about the Enlightenment ideas of publicness, analyses historical changes in the core phenomena of publicness: possibilities, conditions and obstacles to developing a public sphere in which the public reflexively creates, articulates and expresses public opinion. It is focused on the historical transformation from “public use of reason” through the identification of “public opinion” in opinion polls to contemporary opinion mining, in which the Enlightenment idea of public expression of opinion has been displaced by the technology of extracting opinions. It heralds a new critical impetus in theory and research of publicness at a time when critical social thought is sharply criticising and even abandoning the notion of the public sphere, much like the notion of public opinion decades ago, due to its predominantly administrative use…(More)”.

Social Noise: What Is It, and Why Should We Care?


Article by Tara Zimmerman: “As social media, online relationships, and perceived social expectations on platforms such as Facebook play a greater role in people’s lives, a new phenomenon has emerged: social noise. Social noise is the influence of personal and relational factors on information received, which can confuse, distort, or even change the intended message. Influenced by social noise, people are likely to moderate their response to information based on cues regarding what behavior is acceptable or desirable within their social network. This may be done consciously or unconsciously as individuals strive to present themselves in ways that increase their social capital. For example, this might take the form of liking or sharing information posted by a friend or family member as a show of support, despite having no strong feelings toward the information itself. Similarly, someone might refrain from liking, sharing, or commenting on information they strongly agree with because they believe others in their social network would disapprove.

This study reveals that social media users’ awareness of observation by others does impact their information behavior. Efforts to craft a personal reputation, build or maintain relationships, pursue important commitments, and manage conflict all influence the observable information behavior of social media users. As a result, observable social media information behavior may not be an accurate reflection of an individual’s true thoughts and beliefs. This is particularly interesting in light of the role social media plays in the spread of mis- and disinformation…(More)”.

How China uses search engines to spread propaganda


Blog by Jessica Brandt and Valerie Wirtschafter: “Users come to search engines seeking honest answers to their queries. On a wide range of issues—from personal health, to finance, to news—search engines are often the first stop for those looking to get information online. But as authoritarian states like China increasingly use online platforms to disseminate narratives aimed at weakening their democratic competitors, these search engines represent a crucial battleground in their information war with rivals. For Beijing, search engines represent a key—and underappreciated—vector to spread propaganda to audiences around the world.

On a range of topics of geopolitical importance, Beijing has exploited search engine results to disseminate state-backed media that amplify the Chinese Communist Party’s propaganda. As we demonstrate in our recent report, published by the Brookings Institution in collaboration with the German Marshall Fund’s Alliance for Securing Democracy, users turning to search engines for information on Xinjiang, the site of the CCP’s egregious human rights abuses of the region’s Uyghur minority, or the origins of the coronavirus pandemic are surprisingly likely to encounter articles on these topics published by Chinese state-media outlets. By prominently surfacing this type of content, search engines may play a key role in Beijing’s effort to shape external perceptions, which makes it crucial that platforms—along with authoritative outlets that syndicate state-backed content without clear labeling—do more to address their role in spreading these narratives…(More)“.

The Truth in Fake News: How Disinformation Laws Are Reframing the Concepts of Truth and Accuracy on Digital Platforms


Paper by Paolo Cavaliere: “The European Union’s (EU) strategy to address the spread of disinformation, and most notably the Code of Practice on Disinformation and the forthcoming Digital Services Act, tasks digital platforms with a range of actions to minimise the distribution of issue-based and political adverts that are verifiably false or misleading. This article discusses the implications of the EU’s approach with a focus on its categorical approach, specifically what it means to conceptualise disinformation as a form of advertisement and by what standards digital platforms are expected to assess the truthful or misleading nature of the content they distribute because of this categorisation. The analysis will show how the emerging EU anti-disinformation framework marks a departure from the European Court of Human Rights’ consolidated standards of review for public interest and commercial speech and the tests utilised to assess their accuracy….(More)”.