Why People Are So Awful Online


Roxane Gay at the New York Times: “When I joined Twitter 14 years ago, I was living in Michigan’s Upper Peninsula, attending graduate school. I lived in a town of around 4,000 people, with few Black people or other people of color, not many queer people and not many writers. Online is where I found a community beyond my graduate school peers. I followed and met other emerging writers, many of whom remain my truest friends. I got to share opinions, join in on memes, celebrate people’s personal joys, process the news with others and partake in the collective effervescence of watching awards shows with thousands of strangers.

Something fundamental has changed since then. I don’t enjoy most social media anymore. I’ve felt this way for a while, but I’m loath to admit it.

Increasingly, I’ve felt that online engagement is fueled by the hopelessness many people feel when we consider the state of the world and the challenges we deal with in our day-to-day lives. Online spaces offer the hopeful fiction of a tangible cause and effect — an injustice answered by an immediate consequence. On Twitter, we can wield a small measure of power, avenge wrongs, punish villains, exalt the pure of heart….

Lately, I’ve been thinking that what drives so much of the anger and antagonism online is our helplessness offline. Online we want to be good, to do good, but despite these lofty moral aspirations, there is little generosity or patience, let alone human kindness. There is a desperate yearning for emotional safety. There is a desperate hope that if we all become perfect enough and demand the same perfection from others, there will be no more harm or suffering.

It is infuriating. It is also entirely understandable. Some days, as I am reading the news, I feel as if I am drowning. I think most of us do. At least online, we can use our voices and know they can be heard by someone.

It’s no wonder that we seek control and justice online. It’s no wonder that the tenor of online engagement has devolved so precipitously. It’s no wonder that some of us have grown weary of it….(More)”

A New Tool Shows How Google Results Vary Around the World


Article by Tom Simonite: “Google’s claim to “organize the world’s information and make it universally accessible and useful” has earned it an aura of objectivity. Its dominance in search, and the disappearance of most competitors, make its lists of links appear still more canonical. An experimental new interface for Google Search aims to remove that mantle of neutrality.

Search Atlas makes it easy to see how Google offers different responses to the same query on versions of its search engine offered in different parts of the world. The research project reveals how Google’s service can reflect or amplify cultural differences or government preferences—such as whether Beijing’s Tiananmen Square should be seen first as a sunny tourist attraction or the site of a lethal military crackdown on protesters.

Divergent results like that show how the idea of search engines as neutral is a myth, says Rodrigo Ochigame, a PhD student in science, technology, and society at MIT and cocreator of Search Atlas. “Any attempt to quantify relevance necessarily encodes moral and political priorities,” Ochigame says.

Ochigame built Search Atlas with Katherine Ye, a computer science PhD student at Carnegie Mellon University and a research fellow at the nonprofit Center for Arts, Design, and Social Research.

Just like Google’s homepage, the main feature of Search Atlas is a blank box. But instead of returning a single column of results, the site displays three lists of links, from different geographic versions of Google Search selected from the more than 100 the company offers. Search Atlas automatically translates a query to the default languages of each localized edition using Google Translate.

Ochigame and Ye say the design reveals “information borders” created by the way Google’s search technology ranks web pages, presenting different slices of reality to people in different locations or using different languages.

When they used their tool to do an image search on “Tiananmen Square,” the UK and Singaporean versions of Google returned images of tanks and soldiers quashing the 1989 student protests. When the same query was sent to a version of Google tuned for searches from China, which can be accessed by circumventing the country’s Great Firewall, the results showed recent, sunny images of the square, smattered with tourists.

Google’s search engine has been blocked in China since 2010, when the company said it would stop censoring topics the government deemed sensitive, such as the Tiananmen massacre. Search Atlas suggests that the China edition of the company’s search engine can reflect the Chinese government’s preferences all the same. That pattern could result in part from how the corpus of web pages from any language or region would reflect cultural priorities and pressures….(More)”

An experimental interface for Google Search found that it offered very different views of Beijing’s Tiananmen Square to searchers from the UK (left), Singapore (center), and China. COURTESY OF SEARCH ATLAS

Media Is Us: Understanding Communication and Moving beyond Blame


Book by Elizaveta Friesem: “Media is usually seen as a feature of the modern world enabled by the latest technologies. Scholars, educators, parents, and politicians often talk about media as something people should be wary of due to its potential negative impact on their lives. But do we really understand what media is?

Elizaveta Friesem argues that instead of being worried about media or blaming it for what’s going wrong in society, we should become curious about the uniquely human ways we communicate with each other. Media Is Us proposes five key principles of communication that are relevant both for the modern media and for people’s age-old ways of making sense of the world.

In order to understand problems of contemporary society revealed and amplified by the latest technologies, we will have to ask difficult questions about ourselves. Where do our truths and facts come from? How can we know who is to blame for flaws of the social system? What can we change about our own everyday actions to make the world a better place? To answer these questions we will need to rethink not only the term “media” but also the concept of power. The change of perspective proposed by the book is intended to help the reader become more self-aware and also empathic towards those who choose different truths.

Concluding with practical steps to build media literacy through the ACE model—from Awareness to Collaboration through Empathy—this timely book is essential for students and scholars, as well as anyone who would use the new understanding of media to decrease the current levels of cultural polarization….(More)”.

Do journalists “hide behind” sources when they use numbers in the news?


Article by Mark Coddington and Seth Lewis: “Numerical information is a central piece of journalism. Just look at how often stories rely on quantitative data — from Covid case numbers to public opinion polling to economics statistics — as their evidentiary backbone. The rise of data journalism, with its slick visualizations and interactives, has reinforced the role and influence of numbers in the news.

But, as B.T. Lawson reminds us in a new article in Journalism Practice, though we have plenty of research on this decade-long boom in data journalism, much of the research “overstates the significance of the data journalist within the news media. Yes, data journalists are now a mainstay of most news organizations, but they are not the only journalists using numbers. Far from it.”

Indeed, in contrast to the 1960s and 70s era of computer-assisted reporting, when a small minority of specialized reporters worked with data but most reporters did not, nowadays virtually all journalists are expected to engage with numbers as part of their work. Which brings up a potential problem: Some research suggests that journalists rarely challenge the numbers they receive, leading them to accept and reproduce the discourse around those numbers from their sources.

To get a clearer picture of how journalists draw on numbers and narratives about them, Lawson examined reporters’ use of numbers in their coverage of seven humanitarian crises in 2017. The author did this in two ways: first through a content analysis of 978 news articles from U.K. news media (to look for some direct or indirect form of challenging statistics, cross-verifying one claim relative to another, etc.), and then through interviews with 16 journalists involved in at least one of those stories, to gain additional insights into the process of receiving and reporting on numbers.

The title of the resulting article — “Hiding Behind Databases, Institutions and Actors: How Journalists Use Statistics in Reporting Humanitarian Crises” — indicates something about one of its findings: namely, that journalists covering humanitarian crises rely heavily on numbers, often provided by NGOs or the UN, but they seldom verify the numbers they use, mainly because they see it as outside their role to do such work and because they “hide behind” the perceived credibility of their sources….(More)”

Diverse Sources Database


About: “The Diverse Sources Database is NPR’s resource for journalists who believe in the value of diversity and share our goal to make public radio look and sound like America.

Originally called Source of the Week, the database launched in 2013 as a way to help journalists at NPR and member stations expand the racial/ethnic diversity of the experts they tap for stories…(More)”.


Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis


A CDT Research report, entitled “Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis”.

Report by Dhanaraj Thakur and  Emma Llansó: “The ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve content moderation while protecting free expression and privacy. In order to advance these debates, we need to understand the potential role of automated content analysis tools.

This paper explains the capabilities and limitations of tools for analyzing online multimedia content and highlights the potential risks of using these tools at scale without accounting for their limitations. It focuses on two main categories of tools: matching models and computer prediction models. Matching models include cryptographic and perceptual hashing, which compare user-generated content with existing and known content. Predictive models (including computer vision and computer audition) are machine learning techniques that aim to identify characteristics of new or previously unknown content….(More)”.
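The difference between the two categories of matching approaches can be sketched in a few lines. The contrast below is illustrative only, not code from the CDT report: a cryptographic hash (here SHA-256) matches only byte-for-byte identical content, while a toy perceptual-style “average hash” (a simplified stand-in for real perceptual hashing algorithms) produces nearby fingerprints for similar inputs.

```python
import hashlib

def crypto_match(content_a: bytes, content_b: bytes) -> bool:
    # Cryptographic hashing supports exact matching only: any
    # single-byte difference yields a completely different digest.
    return hashlib.sha256(content_a).digest() == hashlib.sha256(content_b).digest()

def average_hash(pixels):
    # Toy perceptual-style hash over grayscale pixel values: each bit
    # records whether a pixel is above the image's mean brightness,
    # so visually similar images differ in only a few bits.
    mean = sum(pixels) / len(pixels)
    return [1 if p >= mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    # Number of differing bits: small distance means "similar content".
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

original = [10, 200, 30, 220, 15, 240, 25, 210]
slightly_edited = [12, 198, 30, 220, 15, 240, 25, 210]  # tiny brightness tweak

# Cryptographic matching fails on the near-duplicate...
print(crypto_match(bytes(original), bytes(slightly_edited)))  # False
# ...but the perceptual-style hashes are nearly identical.
print(hamming_distance(average_hash(original), average_hash(slightly_edited)))
```

This is why, as the report discusses, perceptual hashing is used to catch re-uploads of known content that has been slightly cropped or recompressed, where cryptographic matching would miss it.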

Side-Stepping Safeguards, Data Journalists Are Doing Science Now


Article by Irineo Cabreros: “News stories are increasingly told through data. Witness the Covid-19 time series that decorate the homepages of every major news outlet; the red and blue heat maps of polling predictions that dominate the runup to elections; the splashy, interactive plots that dance across the screen.

As a statistician who handles data for a living, I welcome this change. News now speaks my favorite language, and the general public is developing a healthy appetite for data, too.

But many major news outlets are no longer just visualizing data, they are analyzing it in ever more sophisticated ways. For example, at the height of the second wave of Covid-19 cases in the United States, The New York Times ran a piece declaring that surging case numbers were not attributable to increased testing rates, despite President Trump’s claims to the contrary. The thrust of The Times’ argument was summarized by a series of plots that showed the actual rise in Covid-19 cases far outpacing what would be expected from increased testing alone. These weren’t simple visualizations; they involved assumptions and mathematical computations, and they provided the cornerstone for the article’s conclusion. The plots themselves weren’t sourced from an academic study (although the author on the byline of the piece is a computer science Ph.D. student); they were produced through “an analysis by The New York Times.”

The Times article was by no means an anomaly. News outlets have asserted, on the basis of in-house data analyses, that Covid-19 has killed nearly half a million more people than official records report; that Black and minority populations are overrepresented in the Covid-19 death toll; and that social distancing will usually outperform attempted quarantine. That last item, produced by The Washington Post and buoyed by in-house computer simulations, was the most read article in the history of the publication’s website, according to Washington Post media reporter Paul Farhi.

In my mind, a fine line has been crossed. Gone are the days when science journalism was like sports journalism, where the action was watched from the press box and simply conveyed. News outlets have stepped onto the field. They are doing the science themselves….(More)”.

‘Belonging Is Stronger Than Facts’: The Age of Misinformation


Max Fisher at the New York Times: “There’s a decent chance you’ve had at least one of these rumors, all false, relayed to you as fact recently: that President Biden plans to force Americans to eat less meat; that Virginia is eliminating advanced math in schools to advance racial equality; and that border officials are mass-purchasing copies of Vice President Kamala Harris’s book to hand out to refugee children.

All were amplified by partisan actors. But you’re just as likely, if not more so, to have heard it relayed from someone you know. And you may have noticed that these cycles of falsehood-fueled outrage keep recurring.

We are in an era of endemic misinformation — and outright disinformation. Plenty of bad actors are helping the trend along. But the real drivers, some experts believe, are social and psychological forces that make people prone to sharing and believing misinformation in the first place. And those forces are on the rise.

“Why are misperceptions about contentious issues in politics and science seemingly so persistent and difficult to correct?” Brendan Nyhan, a Dartmouth College political scientist, posed in a new paper in Proceedings of the National Academy of Sciences.

It’s not for want of good information, which is ubiquitous. Exposure to good information does not reliably instill accurate beliefs anyway. Rather, Dr. Nyhan writes, a growing body of evidence suggests that the ultimate culprits are “cognitive and memory limitations, directional motivations to defend or support some group identity or existing belief, and messages from other people and political elites.”

Put more simply, people become more prone to misinformation when three things happen. First, and perhaps most important, is when conditions in society make people feel a greater need for what social scientists call ingrouping — a belief that their social identity is a source of strength and superiority, and that other groups can be blamed for their problems.

As much as we like to think of ourselves as rational beings who put truth-seeking above all else, we are social animals wired for survival. In times of perceived conflict or social change, we seek security in groups. And that makes us eager to consume information, true or not, that lets us see the world as a conflict putting our righteous ingroup against a nefarious outgroup….(More)”.

Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation


Paper by Zach Bastick: “A growing literature is emerging on the believability and spread of disinformation, such as fake news, over social networks. However, little is known about the degree to which malicious actors can use social media to covertly affect behavior with disinformation. A lab-based randomized controlled experiment was conducted with 233 undergraduate students to investigate the behavioral effects of fake news. It was found that even short (under 5-min) exposure to fake news was able to significantly modify the unconscious behavior of individuals. This paper provides initial evidence that fake news can be used to covertly modify behavior, it argues that current approaches to mitigating fake news, and disinformation in general, are insufficient to protect social media users from this threat, and it highlights the implications of this for democracy. It raises the need for an urgent cross-sectoral effort to investigate, protect against, and mitigate the risks of covert, widespread and decentralized behavior modification over online social networks….(More)”