Merging the ‘Social’ and the ‘Public’: How Social Media Platforms Could Be a New Public Forum


Paper by Amélie Pia Heldt: “When Facebook and other social media sites announced in August 2018 they would ban extremist speakers such as conspiracy theorist Alex Jones for violating their rules against hate speech, reactions were strong. Critics either dismissed such measures as a drop in the bucket given the volume of toxic and harmful speech online, or accused Facebook & Co. of penalizing only right-wing speakers, hence censoring political opinions and joining some type of anti-conservative media conglomerate. This anecdote above all raised the question: Should someone like Alex Jones be excluded from Facebook? And the question of “should” includes that of “may Facebook exclude users for publishing political opinions?”.

As social media platforms take up more and more space in our daily lives, enabling not only individual and mass communication, but also offering payment and other services, there is still a need for a common understanding with regard to the social and communicative space they create in cyberspace. By common I mean on a global scale, since this is the way most social media platforms operate or aim to (see Facebook’s mission statement: “bring the world closer together”). While in social science a new digital sphere was proclaimed and social media platforms can be categorized as “personal publics”, there is no such denomination in legal scholarship that is globally agreed upon. Public space can be defined as a free space between the state and society, as a space for freedom. Generally, it is where individuals are protected by their fundamental rights while operating in the public sphere. However, terms like forum, space, and sphere may not be used as synonyms in this discussion. Under the First Amendment, the public forum doctrine mainly serves the purposes of democracy and truth and could be perpetuated in communication services that promote direct dialogue between the state and citizens. But where and by whom is the public forum guaranteed in cyberspace? The notion of the public space in cyberspace is central, and it constantly evolves as platforms become broader in their services, hence it needs to be examined more closely. When looking at social media platforms we need to take into account how they moderate speech and, subsequently, how they influence social processes. If representative democracies are built on the grounds of deliberation, it is essential to safeguard the room for public discourse to actually happen. Are constitutional concepts for the analog space transferable into the digital? Should private actors such as social media platforms be bound by freedom of speech without being considered state actors? And, accordingly, create a new type of public forum?

The goal of this article is to provide answers to the questions mentioned….(More)”.

Information Wars: How We Lost the Global Battle Against Disinformation and What We Can Do About It


Book by Richard Stengel: “Disinformation is as old as humanity. When Satan told Eve nothing would happen if she bit the apple, that was disinformation. But the rise of social media has made disinformation even more pervasive and pernicious in our current era. In a disturbing turn of events, governments are increasingly using disinformation to create their own false narratives, and democracies are proving not to be very good at fighting it.

During the final three years of the Obama administration, Richard Stengel, the former editor of Time and an Under Secretary of State, was on the front lines of this new global information war. At the time, he was the single person in government tasked with unpacking, disproving, and combating both ISIS’s messaging and Russian disinformation. Then, in 2016, as the presidential election unfolded, Stengel watched as Donald Trump used disinformation himself, weaponizing the grievances of Americans who felt left out by modernity. In fact, Stengel quickly came to see how all three players had used the same playbook: ISIS sought to make Islam great again; Putin tried to make Russia great again; and we all know about Trump.

In a narrative that is by turns dramatic and eye-opening, Information Wars walks readers through this often frustrating battle. Stengel moves through Russia and Ukraine, Saudi Arabia and Iraq, and introduces characters from Putin to Hillary Clinton, John Kerry and Mohamed bin Salman to show how disinformation is impacting our global society. He illustrates how ISIS terrorized the world using social media, and how the Russians launched a tsunami of disinformation around the annexation of Crimea – a scheme that became the model for their interference with the 2016 presidential election. An urgent book for our times, Information Wars stresses that we must find a way to combat this ever growing threat to democracy….(More)”.

Democratic Transparency in the Platform Society


Chapter by Robert Gorwa and Timothy Garton Ash: “Following a host of major scandals, transparency has emerged in recent years as one of the leading accountability mechanisms through which the companies operating global platforms for user-generated content have attempted to regain the trust of the public, politicians, and regulatory authorities. Ranging from Facebook’s efforts to partner with academics and create a reputable mechanism for third-party data access and independent research to the expanded advertising disclosure tools being built for elections around the world, transparency is playing a major role in current governance debates around free expression, social media, and democracy.

This article thus seeks to (a) contextualize the recent implementation of transparency as enacted by platform companies with an overview of the ample relevant literature on digital transparency in both theory and practice; (b) consider the potential positive governance impacts of transparency as a form of accountability in the current political moment; and (c) reflect upon the potential shortfalls of transparency that should be considered by legislators, academics, and funding bodies weighing the relative benefits of policy or research dealing with transparency in this area…(More)”.

We Need a PBS for Social Media


Mark Coatney at the New York Times: “Social media is an opportunity wrapped in a problem. YouTube spreads propaganda and is toxic to children. Twitter spreads propaganda and is toxic to racial relations. Facebook spreads propaganda and is toxic to democracy itself.

Such problems aren’t surprising when you consider that all these companies operate on the same basic model: Create a product that maximizes the attention you can command from a person, collect as much data as you can about that person, and sell it.

Proposed solutions like breaking up companies and imposing regulation have been met with resistance: The platforms, understandably, worry that their profits might be reduced from staggering to merely amazing. And this may not be the best course of action anyway.

What if the problem is something that can’t be solved by existing for-profit media platforms? Maybe the answer to fixing social media isn’t trying to change companies with business models built around products that hijack our attention, but instead to work to create a less toxic alternative.

Nonprofit public media is part of the answer. More than 50 years ago, President Lyndon Johnson signed the Public Broadcasting Act, committing federal funds to create public television and radio that would “be responsive to the interests of people.”

It isn’t a big leap to expand “public media” to include not just television and radio but also social media. In 2019, the definition of “media” is considerably larger than it was in 1967. Commentary on Twitter, memes on Instagram and performances on TikTok are all as much a part of the media landscape today as newspapers and television news.

Public media came out of a recognition that the broadcasting spectrum is a finite resource. TV broadcasters given licenses to use the spectrum were expected to provide programming like news and educational shows in return. But that was not enough. To make sure that some of that finite resource would always be used in the public interest, Congress established public media.

Today, the limited resource isn’t the spectrum — it’s our attention….(More)”.

Big Data, Political Campaigning and the Law


Book edited by Normann Witzleb, Moira Paterson, and Janice Richardson on “Democracy and Privacy in the Age of Micro-Targeting”…: “In this multidisciplinary book, experts from around the globe examine how data-driven political campaigning works, what challenges it poses for personal privacy and democracy, and how emerging practices should be regulated.

The rise of big data analytics in the political process has triggered official investigations in many countries around the world, and become the subject of broad and intense debate. Political parties increasingly rely on data analytics to profile the electorate and to target specific voter groups with individualised messages based on their demographic attributes. Political micro-targeting has become a major factor in modern campaigning, because of its potential to influence opinions, to mobilise supporters and to get out votes. The book explores the legal, philosophical and political dimensions of big data analytics in the electoral process. It demonstrates that the unregulated use of big personal data for political purposes not only infringes voters’ privacy rights, but also has the potential to jeopardise the future of the democratic process, and proposes reforms to address the key regulatory and ethical questions arising from the mining, use and storage of massive amounts of voter data.

Providing an interdisciplinary assessment of the use and regulation of big data in the political process, this book will appeal to scholars from law, political science, political philosophy, and media studies, policy makers and anyone who cares about democracy in the age of data-driven political campaigning….(More)”.

The Upside of Deep Fakes


Paper by Jessica M. Silbey and Woodrow Hartzog: “It’s bad. We know. The dawn of “deep fakes” — convincing videos and images of people doing things they never did or said — puts us all in jeopardy in several different ways. Professors Bobby Chesney and Danielle Citron have noted that now “false claims — even preposterous ones — can be peddled with unprecedented success today thanks to a combination of social media ubiquity and virality, cognitive biases, filter bubbles, and group polarization.” The scholars identify a host of harms from deep fakes, ranging from people being exploited, extorted, and sabotaged, to societal harms like the erosion of democratic discourse and trust in social institutions, undermining public safety, national security, journalism, and diplomacy, deepening social divisions, and manipulation of elections. But it might not be all bad. Even beyond purported beneficial uses of deep-fake technology for education, art, and science, the looming deep-fake disaster might have a silver lining. Hear us out. We think deep fakes have an upside.

Crucial to our argument is the idea that deep fakes don’t create new problems so much as make existing problems worse. Cracks in systems, frameworks, strategies, and institutions that have been leaking for years now threaten to spring open. Journalism, education, individual rights, democratic systems, and voting protocols have long been vulnerable. Deep fakes might just be the straw that breaks them. And therein lies opportunity for repair. Below we briefly address some deep problems and how finally addressing them may also neutralize the destructive force of deep fakes. We only describe three cultural institutions – education, journalism, and representative democracy — with deep problems that could be strengthened as a response to deep fakes for greater societal gains. But we encourage readers to think up more. We have a hunch that once we harness the upside of deep fakes, we may unlock creative solutions to other sticky social and political problems…(More)”.

Computational Communication Science


Introduction to Special Issue of the International Journal of Communication: “Over the past two decades, processes of digitalization and mediatization have shaped the communication landscape and have had a strong impact on various facets of communication. The digitalization of communication results in completely new forms of digital traces that make communication processes observable in new and unprecedented ways. Although many scholars in the social sciences acknowledge the chances and requirements of the digital revolution in communication, they are also facing fundamental challenges in implementing successful research programs, strategies, and designs that are based on computational methods and “big data.” This Special Section aims at bringing together seminal perspectives on challenges and chances of computational communication science (CCS). In this introduction, we highlight the impulses provided by the research presented in the Special Section, discuss the most pressing challenges in the context of CCS, and sketch a potential roadmap for future research in this field….(More)”.

Could footnotes be the key to winning the disinformation wars?


Karin Wulf at the Washington Post: “We are at a distinctive point in the relationship between information and democracy: As the volume of information dissemination has grown, so too have attempts by individuals and groups to weaponize disinformation for commercial and political purposes. This has contributed to fragmentation, political polarization, cynicism, and distrust in institutions and expertise, as a recent Pew Research Center report found. So what is the solution?

Footnotes.

Outside of academics and lawyers, few people may think about footnotes once they leave school. Indeed, there is a hackneyed caricature about footnotes as pedantry, the purview of tweedy scholars blinking as we emerge from fluorescent-lit libraries into the sun — not the concern of regular folks. A recent essay in the Economist even laid some of Britain’s recent woes at the feet of historians who spend too much time “fiddling with footnotes.”

But nothing could be further from the truth. More than ever, we need what this tool provides: accountability and transparency. “Fiddling with footnotes” is the kind of hygienic practice that our era of information pollution needs — and needs to be shared as widely as possible. Footnotes are for everyone.

Though they began as an elite practice, footnotes became aligned historically with modern democracy itself. Citation is rooted in the 17th-century emergence of enlightenment science, which asked for evidence rather than faith as key to supporting a conclusion. It was an era when scientific empiricism threatened the authority of government and religious institutions, and newly developing institutional science publications — the Philosophical Transactions of the Royal Society, for example — began to use citations for evidence and reference. In one of Isaac Newton’s contributions to the journal in 1673, a reply to queries about his work on light and the color spectrum, he used citations to his initial publication on the subject (“see no. 80. Page 3075”).

By the 18th century, and with more agile printing, the majority of scientific publications included citations, and the bottom of the page was emerging as the preferred placement. Where scientific scholarship traveled, humanists were not far behind. The disdain of French philosopher and mathematician René Descartes for any discipline without rigorous methods was part of the prompt for historians to embrace citations….(More)”.

Misinformation Has Created a New World Disorder


Claire Wardle at Scientific American: “…Online misinformation has been around since the mid-1990s. But in 2016 several events made it broadly clear that darker forces had emerged: automation, microtargeting and coordination were fueling information campaigns designed to manipulate public opinion at scale. Journalists in the Philippines started raising flags as Rodrigo Duterte rose to power, buoyed by intensive Facebook activity. This was followed by unexpected results in the Brexit referendum in June and then the U.S. presidential election in November—all of which spurred researchers to systematically investigate the ways in which information was being used as a weapon.

During the past three years the discussion around the causes of our polluted information ecosystem has focused almost entirely on actions taken (or not taken) by the technology companies. But this fixation is too simplistic. A complex web of societal shifts is making people more susceptible to misinformation and conspiracy. Trust in institutions is falling because of political and economic upheaval, most notably through ever widening income inequality. The effects of climate change are becoming more pronounced. Global migration trends spark concern that communities will change irrevocably. The rise of automation makes people fear for their jobs and their privacy.

Bad actors who want to deepen existing tensions understand these societal trends, designing content that they hope will so anger or excite targeted users that the audience will become the messenger. The goal is that users will use their own social capital to reinforce and give credibility to that original message.

Most of this content is designed not to persuade people in any particular direction but to cause confusion, to overwhelm and to undermine trust in democratic institutions from the electoral system to journalism. And although much is being made about preparing the U.S. electorate for the 2020 election, misleading and conspiratorial content did not begin with the 2016 presidential race, and it will not end after this one. As tools designed to manipulate and amplify content become cheaper and more accessible, it will be even easier to weaponize users as unwitting agents of disinformation….(More)”.

Credit: Jen Christiansen; Source: Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking, by Claire Wardle and Hossein Derakhshan. Council of Europe, October 2017

After Technopoly


Alan Jacobs at the New Atlantis: “Technocratic solutionism is dying. To replace it, we must learn again the creation and reception of myth….
What Neil Postman called “technopoly” may be described as the universal and virtually inescapable rule of our everyday lives by those who make and deploy technology, especially, in this moment, the instruments of digital communication. It is difficult for us to grasp what it’s like to live under technopoly, or how to endure or escape or resist the regime. These questions may best be approached by drawing on a handful of concepts meant to describe a slightly earlier stage of our common culture.

First, following on my earlier essay in these pages, “Wokeness and Myth on Campus” (Summer/Fall 2017), I want to turn again to a distinction by the Polish philosopher Leszek Kołakowski between the “technological core” of culture and the “mythical core” — a distinction he believed is essential to understanding many cultural developments.

“Technology” for Kołakowski is something broader than we usually mean by it. It describes a stance toward the world in which we view things around us as objects to be manipulated, or as instruments for manipulating our environment and ourselves. This is not necessarily meant in a negative sense; some things ought to be instruments — the spoon I use to stir my soup — and some things need to be manipulated — the soup in need of stirring. Besides tools, the technological core of culture includes also the sciences and most philosophy, as those too are governed by instrumental, analytical forms of reasoning by which we seek some measure of control.

By contrast, the mythical core of culture is that aspect of experience that is not subject to manipulation, because it is prior to our instrumental reasoning about our environment. Throughout human civilization, says Kołakowski, people have participated in myth — they may call it “illumination” or “awakening” or something else — as a way of connecting with “nonempirical unconditioned reality.” It is something we enter into with our full being, and all attempts to describe the experience in terms of desire, will, understanding, or literal meaning are ways of trying to force the mythological core into the technological core by analyzing and rationalizing myth and pressing it into a logical order. This is why the two cores are always in conflict, and it helps to explain why rational argument is often a fruitless response to people acting from the mythical core….(More)”.