Custodians of the Internet


Book by Tarleton Gillespie on “Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media”: “Most users want their Twitter feed, Facebook page, and YouTube comments to be free of harassment and porn. Whether faced with “fake news” or livestreamed violence, “content moderators”—who censor or promote user-posted content—have never been more important. This is especially true when the tools that social media platforms use to curb trolling, ban hate speech, and censor pornography can also silence the speech you need to hear.

In this revealing and nuanced exploration, award-winning sociologist and cultural observer Tarleton Gillespie provides an overview of current social media practices and explains the underlying rationales for how, when, and why these policies are enforced. In doing so, Gillespie highlights that content moderation receives too little public scrutiny even as it shapes social norms and creates consequences for public discourse, cultural production, and the fabric of society. Based on interviews with content moderators, creators, and consumers, this accessible, timely book is a must-read for anyone who’s ever clicked “like” or “retweet.”…(More)”.

Off-Label: How tech platforms decide what counts as journalism


Essay by Emily Bell: “…But putting a stop to militarized fascist movements—and preventing another attack on a government building—will ultimately require more than content removal. Technology companies need to fundamentally recalibrate how they categorize, promote, and circulate everything under their banner, particularly news. They have to acknowledge their editorial responsibility.

The extraordinary power of tech platforms to decide what material is worth seeing—under the loosest possible definition of who counts as a “journalist”—has always been a source of tension with news publishers. These companies have now been put in the position of being held accountable for developing an information ecosystem based in fact. It’s unclear how much they are prepared to do, or whether they will ever really invest in pro-truth mechanisms on a global scale. But it is clear that, after the Capitol riot, there’s no going back to the way things used to be.

Between 2016 and 2020, Facebook, Twitter, and Google made dozens of announcements promising to increase the exposure of high-quality news and get rid of harmful misinformation. They claimed to be investing in content moderation and fact-checking; they assured us that they were creating helpful products like the Facebook News Tab. Yet the result of all these changes has been hard to examine, since the data is both scarce and incomplete. Gordon Crovitz—a former publisher of the Wall Street Journal and a cofounder of NewsGuard, which applies ratings to news sources based on their credibility—has been frustrated by the lack of transparency: “In Google, YouTube, Facebook, and Twitter we have institutions that we know all give quality ratings to news sources in different ways,” he told me. “But if you are a news organization and you want to know how you are rated, you can ask them how these systems are constructed, and they won’t tell you.” Consider the mystery behind blue-check certification on Twitter, or the absurdly wide scope of the “Media/News” category on Facebook. “The issue comes down to a fundamental failure to understand the core concepts of journalism,” Crovitz said.

Still, researchers have managed to put together a general picture of how technology companies handle various news sources. According to Jennifer Grygiel, an assistant professor of communications at Syracuse University, “we know that there is a taxonomy within these companies, because we have seen them dial up and dial down the exposure of quality news outlets.” Internally, platforms rank journalists and outlets and make certain designations, which are then used to develop algorithms for personalized news recommendations and news products….(More)”

It’s hard to be a moral person. Technology is making it harder.


Article by Sigal Samuel: “The idea of moral attention goes back at least as far as ancient Greece, where the Stoics wrote about the practice of attention (prosoché) as the cornerstone of a good spiritual life. In modern Western thought, though, ethicists didn’t focus too much on attention until a band of female philosophers came along, starting with Simone Weil.

Weil, an early 20th-century French philosopher and Christian mystic, wrote that “attention is the rarest and purest form of generosity.” She believed that to be able to properly pay attention to someone else — to become fully receptive to their situation in all its complexity — you need to first get your own self out of the way. She called this process “decreation,” and explained: “Attention consists of suspending our thought, leaving it detached, empty … ready to receive in its naked truth the object that is to penetrate it.”

Weil argued that plain old attention — the kind you use when reading novels, say, or birdwatching — is a precondition for moral attention, which is a precondition for empathy, which is a precondition for ethical action.

Later philosophers, like Iris Murdoch and Martha Nussbaum, picked up and developed Weil’s ideas. They garbed them in the language of Western philosophy; Murdoch, for example, appeals to Plato as she writes about the need for “unselfing.” But this central idea of “unselfing” or “decreation” is perhaps most reminiscent of Eastern traditions like Buddhism, which has long emphasized the importance of relinquishing our ego and training our attention so we can perceive and respond to others’ needs. It offers tools like mindfulness meditation for doing just that…(More)”

Why People Are So Awful Online


Roxane Gay at the New York Times: “When I joined Twitter 14 years ago, I was living in Michigan’s Upper Peninsula, attending graduate school. I lived in a town of around 4,000 people, with few Black people or other people of color, not many queer people and not many writers. Online is where I found a community beyond my graduate school peers. I followed and met other emerging writers, many of whom remain my truest friends. I got to share opinions, join in on memes, celebrate people’s personal joys, process the news with others and partake in the collective effervescence of watching awards shows with thousands of strangers.

Something fundamental has changed since then. I don’t enjoy most social media anymore. I’ve felt this way for a while, but I’m loath to admit it.

Increasingly, I’ve felt that online engagement is fueled by the hopelessness many people feel when we consider the state of the world and the challenges we deal with in our day-to-day lives. Online spaces offer the hopeful fiction of a tangible cause and effect — an injustice answered by an immediate consequence. On Twitter, we can wield a small measure of power, avenge wrongs, punish villains, exalt the pure of heart….

Lately, I’ve been thinking that what drives so much of the anger and antagonism online is our helplessness offline. Online we want to be good, to do good, but despite these lofty moral aspirations, there is little generosity or patience, let alone human kindness. There is a desperate yearning for emotional safety. There is a desperate hope that if we all become perfect enough and demand the same perfection from others, there will be no more harm or suffering.

It is infuriating. It is also entirely understandable. Some days, as I am reading the news, I feel as if I am drowning. I think most of us do. At least online, we can use our voices and know they can be heard by someone.

It’s no wonder that we seek control and justice online. It’s no wonder that the tenor of online engagement has devolved so precipitously. It’s no wonder that some of us have grown weary of it….(More)”

A New Tool Shows How Google Results Vary Around the World


Article by Tom Simonite: “Google’s claim to “organize the world’s information and make it universally accessible and useful” has earned it an aura of objectivity. Its dominance in search, and the disappearance of most competitors, make its lists of links appear still more canonical. An experimental new interface for Google Search aims to remove that mantle of neutrality.

Search Atlas makes it easy to see how Google offers different responses to the same query on versions of its search engine offered in different parts of the world. The research project reveals how Google’s service can reflect or amplify cultural differences or government preferences—such as whether Beijing’s Tiananmen Square should be seen first as a sunny tourist attraction or the site of a lethal military crackdown on protesters.

Divergent results like that show how the idea of search engines as neutral is a myth, says Rodrigo Ochigame, a PhD student in science, technology, and society at MIT and cocreator of Search Atlas. “Any attempt to quantify relevance necessarily encodes moral and political priorities,” Ochigame says.

Ochigame built Search Atlas with Katherine Ye, a computer science PhD student at Carnegie Mellon University and a research fellow at the nonprofit Center for Arts, Design, and Social Research.

Just like Google’s homepage, the main feature of Search Atlas is a blank box. But instead of returning a single column of results, the site displays three lists of links, from different geographic versions of Google Search selected from the more than 100 the company offers. Search Atlas automatically translates a query to the default languages of each localized edition using Google Translate.
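
The mechanics can be approximated in a few lines. Below is a minimal, illustrative sketch, not Search Atlas’s actual code, of fanning a single query out to several localized editions of Google Search. The `hl` and `gl` query parameters are real Google Search parameters; the edition list, the `<h3>`-based parsing, and the omission of the Google Translate step are simplifying assumptions.

```python
# Illustrative sketch only -- not the Search Atlas codebase. Assumes the
# `requests` and `beautifulsoup4` packages; Google may block or redirect
# automated clients, so treat this as a conceptual demo.
import requests
from bs4 import BeautifulSoup

# Hypothetical sample of localized editions (Search Atlas draws on 100+).
EDITIONS = {
    "UK":        {"domain": "www.google.co.uk",  "hl": "en",    "gl": "uk"},
    "Singapore": {"domain": "www.google.com.sg", "hl": "en",    "gl": "sg"},
    "China":     {"domain": "www.google.com.hk", "hl": "zh-CN", "gl": "cn"},
}

def search_edition(query: str, edition: dict, num: int = 5) -> list[str]:
    """Fetch one localized results page and extract result titles."""
    resp = requests.get(
        f"https://{edition['domain']}/search",
        params={"q": query, "hl": edition["hl"], "gl": edition["gl"]},
        headers={"User-Agent": "Mozilla/5.0"},  # bare clients are rejected
        timeout=10,
    )
    soup = BeautifulSoup(resp.text, "html.parser")
    # Result titles conventionally appear in <h3> tags; markup may change.
    return [h3.get_text() for h3 in soup.find_all("h3")][:num]

# Side-by-side columns, one per edition, for the same query.
for name, edition in EDITIONS.items():
    print(name, search_edition("Tiananmen Square", edition))
```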

Ochigame and Ye say the design reveals “information borders” created by the way Google’s search technology ranks web pages, presenting different slices of reality to people in different locations or using different languages.

When they used their tool to do an image search on “Tiananmen Square,” the UK and Singaporean versions of Google returned images of tanks and soldiers quashing the 1989 student protests. When the same query was sent to a version of Google tuned for searches from China, which can be accessed by circumventing the country’s Great Firewall, the results showed recent, sunny images of the square, smattered with tourists.

Google’s search engine has been blocked in China since 2010, when the company said it would stop censoring topics the government deemed sensitive, such as the Tiananmen massacre. Search Atlas suggests that the China edition of the company’s search engine can reflect the Chinese government’s preferences all the same. That pattern could result in part from how the corpus of web pages from any language or region would reflect cultural priorities and pressures….(More)”

[Image: An experimental interface for Google Search offered very different views of Beijing’s Tiananmen Square to searchers from the UK (left), Singapore (center), and China (right). Courtesy of Search Atlas.]

Media Is Us: Understanding Communication and Moving beyond Blame


Book by Elizaveta Friesem: “Media is usually seen as a feature of the modern world enabled by the latest technologies. Scholars, educators, parents, and politicians often talk about media as something people should be wary of due to its potential negative impact on their lives. But do we really understand what media is?

Elizaveta Friesem argues that instead of being worried about media or blaming it for what’s going wrong in society, we should become curious about the uniquely human ways we communicate with each other. Media Is Us proposes five key principles of communication that are relevant both for modern media and for people’s age-old ways of making sense of the world.

To understand the problems of contemporary society that the latest technologies have revealed and amplified, we will have to ask difficult questions about ourselves. Where do our truths and facts come from? How can we know who is to blame for the flaws of the social system? What can we change about our own everyday actions to make the world a better place? To answer these questions, we will need to rethink not only the term “media” but also the concept of power. The change of perspective the book proposes is intended to help readers become more self-aware and also more empathetic toward those who choose different truths.

Concluding with practical steps to build media literacy through the ACE model—from Awareness to Collaboration through Empathy—this timely book is essential for students and scholars, as well as anyone who would use the new understanding of media to decrease the current levels of cultural polarization….(More)”.

Do journalists “hide behind” sources when they use numbers in the news?


Article by Mark Coddington and Seth Lewis: “Numerical information is a central piece of journalism. Just look at how often stories rely on quantitative data — from Covid case numbers to public opinion polling to economic statistics — as their evidentiary backbone. The rise of data journalism, with its slick visualizations and interactives, has reinforced the role and influence of numbers in the news.

But, as B.T. Lawson reminds us in a new article in Journalism Practice, though we have plenty of research on this decade-long boom in data journalism, much of the research “overstates the significance of the data journalist within the news media. Yes, data journalists are now a mainstay of most news organizations, but they are not the only journalists using numbers. Far from it.”

Indeed, in contrast to the 1960s and 70s era of computer-assisted reporting, when a small minority of specialized reporters worked with data but most reporters did not, nowadays virtually all journalists are expected to engage with numbers as part of their work. Which brings up a potential problem: Some research suggests that journalists rarely challenge the numbers they receive, leading them to accept and reproduce the discourse around those numbers from their sources.

To get a clearer picture of how journalists draw on numbers and narratives about them, Lawson examined reporters’ use of numbers in their coverage of seven humanitarian crises in 2017. The author did this in two ways: first through a content analysis of 978 news articles from U.K. news media (to look for some direct or indirect form of challenging statistics, cross-verifying one claim relative to another, etc.), and then through interviews with 16 journalists involved in at least one of those stories, to gain additional insights into the process of receiving and reporting on numbers.

The title of the resulting article — “Hiding Behind Databases, Institutions and Actors: How Journalists Use Statistics in Reporting Humanitarian Crises” — indicates something about one of its findings: namely, that journalists covering humanitarian crises rely heavily on numbers, often provided by NGOs or the UN, but they seldom verify the numbers they use, mainly because they see it as outside their role to do such work and because they “hide behind” the perceived credibility of their sources….(More)”

Diverse Sources Database


About: “The Diverse Sources Database is NPR’s resource for journalists who believe in the value of diversity and share our goal to make public radio look and sound like America.

Originally called Source of the Week, the database launched in 2013 as a way to help journalists at NPR and member stations expand the racial/ethnic diversity of the experts they tap for stories…(More)”.

‘Belonging Is Stronger Than Facts’: The Age of Misinformation


Max Fisher at the New York Times: “There’s a decent chance you’ve had at least one of these rumors, all false, relayed to you as fact recently: that President Biden plans to force Americans to eat less meat; that Virginia is eliminating advanced math in schools to advance racial equality; and that border officials are mass-purchasing copies of Vice President Kamala Harris’s book to hand out to refugee children.

All were amplified by partisan actors. But you’re just as likely, if not more so, to have heard them relayed from someone you know. And you may have noticed that these cycles of falsehood-fueled outrage keep recurring.

We are in an era of endemic misinformation — and outright disinformation. Plenty of bad actors are helping the trend along. But the real drivers, some experts believe, are social and psychological forces that make people prone to sharing and believing misinformation in the first place. And those forces are on the rise.

“Why are misperceptions about contentious issues in politics and science seemingly so persistent and difficult to correct?” Brendan Nyhan, a Dartmouth College political scientist, posed in a new paper in Proceedings of the National Academy of Sciences.

It’s not for want of good information, which is ubiquitous. Exposure to good information does not reliably instill accurate beliefs anyway. Rather, Dr. Nyhan writes, a growing body of evidence suggests that the ultimate culprits are “cognitive and memory limitations, directional motivations to defend or support some group identity or existing belief, and messages from other people and political elites.”

Put more simply, people become more prone to misinformation when three things happen. First, and perhaps most important, is when conditions in society make people feel a greater need for what social scientists call ingrouping — a belief that their social identity is a source of strength and superiority, and that other groups can be blamed for their problems….(More)”.

Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis


Report by Dhanaraj Thakur and Emma Llansó: “The ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve content moderation while protecting free expression and privacy. In order to advance these debates, we need to understand the potential role of automated content analysis tools.

This paper explains the capabilities and limitations of tools for analyzing online multimedia content and highlights the potential risks of using these tools at scale without accounting for their limitations. It focuses on two main categories of tools: matching models and predictive models. Matching models include cryptographic and perceptual hashing, which compare user-generated content with existing and known content. Predictive models (including computer vision and computer audition) are machine learning techniques that aim to identify characteristics of new or previously unknown content….(More)”.
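
To make the report’s distinction concrete, here is a minimal sketch (an illustration, not code from the CDT report) contrasting the two matching approaches: a cryptographic hash matches only byte-identical files, while a simple perceptual “average hash” tolerates re-encoding and resizing, so near-duplicate images land only a few bits apart. The file names are hypothetical, and the Pillow imaging library is assumed.

```python
# Minimal sketch -- an illustration, not code from the CDT report.
# Assumes the Pillow imaging library (pip install Pillow); file names
# below are hypothetical placeholders.
import hashlib
from PIL import Image

def cryptographic_hash(path: str) -> str:
    """Exact-match fingerprint: flipping a single byte changes it completely."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def average_hash(path: str, size: int = 8) -> int:
    """Perceptual fingerprint: 64-bit 'average hash' of an 8x8 grayscale thumbnail."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:  # one bit per pixel: brighter than the mean or not
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Bits that differ between two perceptual hashes; small = likely near-duplicate."""
    return bin(a ^ b).count("1")

# A re-encoded or resized copy of an image gets an entirely different SHA-256,
# but its average hash usually stays within a few bits of the original:
# hamming_distance(average_hash("original.jpg"), average_hash("reupload.jpg"))
```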