News as Surveillance


Paper by Erin Carroll: “As inhabitants of the Information Age, we are increasingly aware of the amount and kind of data that technology platforms collect on us. Far less publicized, however, is how much data news organizations collect on us as we read the news online and how they allow third parties to collect that personal data as well. A handful of studies by computer scientists reveal that, as a group, news websites are among the Internet’s worst offenders when it comes to tracking their visitors.

On the one hand, this surveillance is unsurprising. It is capitalism at work. The press’s business model has long been advertising-based. Yet, today this business model raises particular First Amendment concerns. The press, a named beneficiary of the First Amendment and a First Amendment institution, is gathering user reading history. This is a violation of what legal scholars call “intellectual privacy”—a right foundational to our First Amendment free speech rights.

And because of the perpetrator, this surveillance has the potential to cause far-reaching harms. Not only does it injure the individual reader or citizen, it injures society. News consumption helps each of us engage in the democratic process. It is, in fact, practically a prerequisite to our participation. Moreover, for an institution whose success is dependent on its readers’ trust, one that checks abuses of power, this surveillance seems like a special brand of betrayal.

Rather than an attack on journalists or journalism, this Essay is an attack on a particular press business model. It is also a call to grapple with it before the press faces greater public backlash. Originally given as the keynote for the Washburn Law Journal’s symposium, The Future of Cyber Speech, Media, and Privacy, this Essay argues for transforming and diversifying press business models and offers up other suggestions for minimizing the use of news as surveillance…(More)”.

Shining light into the dark spaces of chat apps


Sharon Moshavi at Columbia Journalism Review: “News has migrated from print to the web to social platforms to mobile. Now, at the dawn of a new decade, it is heading to a place that presents a whole new set of challenges: the private, hidden spaces of instant messaging apps.  

WhatsApp, Facebook Messenger, Telegram, and their ilk are platforms that journalists cannot ignore — even in the US, where chat-app usage is low. “I believe a privacy-focused communications platform will become even more important than today’s open platforms,” Mark Zuckerberg, Facebook’s CEO, wrote in March 2019. By 2022, three billion people will be using them on a regular basis, according to Statista.

But fewer journalists worldwide are using these platforms to disseminate news than they were two years ago, as ICFJ discovered in its 2019 “State of Technology in Global Newsrooms” survey. That’s a particularly dangerous trend during an election year, because messaging apps are potential minefields of misinformation. 

American journalists should take stock of recent elections in India and Brazil, ahead of which misinformation flooded WhatsApp. ICFJ’s “TruthBuzz” projects found coordinated and widespread disinformation efforts using text, videos, and photos on that platform.  

This is particularly troubling given that more people now use WhatsApp as a primary source of information. In Brazil, one in four internet users consult WhatsApp weekly as a news source. A recent report from New York University’s Center for Business and Human Rights warned that WhatsApp “could become a troubling source of false content in the US, as it has been during elections in Brazil and India.” It’s imperative that news media figure out how to map the contours of these opaque, unruly spaces, and deliver fact-based news to those who congregate there….(More)”.

Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems


Paper by Tim Wu: “Software has partially or fully displaced many former human activities, such as catching speeders or flying airplanes, and proven itself able to surpass humans in certain contests, like Chess and Jeopardy. What are the prospects for the displacement of human courts as the centerpiece of legal decision-making?

Based on the case study of hate speech control on major tech platforms, particularly on Twitter and Facebook, this Essay suggests that the displacement of human courts remains a distant prospect, but that hybrid machine–human systems are the predictable future of legal adjudication, and that there lies some hope in that combination, if done well….(More)”.

The Downside of Tech Hype


Jeffrey Funk at Scientific American: “Science and technology have been the largest drivers of economic growth for more than 100 years. But this contribution seems to be declining. Growth in labor productivity has slowed, corporate revenue growth per research dollar has fallen, the value of Nobel Prize–winning research has declined, and the number of researchers needed to develop new molecular entities (e.g., drugs) and to achieve the same percentage improvements in crop yields and in the number of transistors on a microprocessor chip (commonly known as Moore’s Law) has risen. More recently, the percentage of profitable start-ups at the time of their initial public stock offering has dropped to record lows not seen since the dot-com bubble, and start-ups such as Uber, Lyft and WeWork have accumulated losses much larger than ever seen by start-ups, including Amazon.

Although the reasons for these changes are complex and unclear, one thing is certain: excessive hype about new technologies makes it harder for scientists, engineers and policy makers to objectively analyze and understand these changes, or to make good decisions about new technologies.

One driver of hype is the professional incentives of venture capitalists, entrepreneurs, consultants and universities. Venture capitalists have convinced decision makers that venture capitalist funding and start-ups are the new measures of their success. Professional and business service consultants hype technology for both incumbents and start-ups to make potential clients believe that new technologies make existing strategies, business models and worker skills obsolete every few years.

Universities are themselves a major source of hype. Their public relations offices often exaggerate the results of research papers, commonly implying that commercialization is close at hand, even though the researchers know it will take many years if not decades. Science and engineering courses often imply an easy path to commercialization, while misleading and inaccurate forecasts from Technology Review and Scientific American make it easier for business schools and entrepreneurship programs to claim that opportunities are everywhere and that incumbent firms are regularly being disrupted. With a growth in entrepreneurship programs from about 16 in 1970 to more than 2,000 in 2014, many young people now believe that being an entrepreneur is the cool thing to be, regardless of whether they have a good idea.

Hype from these types of experts is exacerbated by the growth of social media, the falling cost of website creation, blogging, posting of slides and videos and the growing number of technology news, investor and consulting websites….(More)”.

Defining concepts of the digital society


A special section of Internet Policy Review edited by Christian Katzenbach and Thomas Christian Bächle: “With this new special section Defining concepts of the digital society in Internet Policy Review, we seek to foster a platform that provides and validates exactly these overarching frameworks and theories. Based on the latest research, yet broad in scope, the contributions offer effective tools to analyse the digital society. Their authors offer concise articles that portray and critically discuss individual concepts with an interdisciplinary mindset. Each article contextualises the concept’s origin and academic traditions, analyses its contemporary usage in different research approaches and discusses its social, political, cultural, ethical or economic relevance and impact as well as its analytical value. With this, the authors are building bridges between the disciplines, between research and practice as well as between innovative explanations and their conceptual heritage….(More)”

Algorithmic governance
Christian Katzenbach, Alexander von Humboldt Institute for Internet and Society
Lena Ulbricht, Berlin Social Science Center

Datafication
Ulises A. Mejias, State University of New York at Oswego
Nick Couldry, London School of Economics & Political Science

Filter bubble
Axel Bruns, Queensland University of Technology

Platformisation
Thomas Poell, University of Amsterdam
David Nieborg, University of Toronto
José van Dijck, Utrecht University

Privacy
Tobias Matzner, University of Paderborn
Carsten Ochs, University of Kassel

The Rising Threat of Digital Nationalism


Essay by Akash Kapur in the Wall Street Journal: “Fifty years ago this week, at 10:30 on a warm night at the University of California, Los Angeles, the first message over the ARPANET was sent. It was a decidedly local affair. A man sat in front of a teleprinter connected to that early precursor of the internet and transmitted the message “login” to a colleague in Palo Alto. The system crashed; all that arrived at the Stanford Research Institute, some 350 miles away, was a truncated “lo.”

The network has moved on dramatically from those parochial—and stuttering—origins. Now more than 200 billion emails flow around the world every day. The internet has come to represent the very embodiment of globalization—a postnational public sphere, a virtual world impervious and even hostile to the control of sovereign governments (those “weary giants of flesh and steel,” as the cyberlibertarian activist John Perry Barlow famously put it in his Declaration of the Independence of Cyberspace in 1996).

But things have been changing recently. Nicholas Negroponte, a co-founder of the MIT Media Lab, once said that national law had no place in cyberlaw. That view seems increasingly anachronistic. Across the world, nation-states have been responding to a series of crises on the internet (some real, some overstated) by asserting their authority and claiming various forms of digital sovereignty. A network that once seemed to effortlessly defy regulation is being relentlessly, and often ruthlessly, domesticated.

From firewalls to shutdowns to new data-localization laws, a specter of digital nationalism now hangs over the network. This “territorialization of the internet,” as Scott Malcomson, a technology consultant and author, calls it, is fundamentally changing its character—and perhaps even threatening its continued existence as a unified global infrastructure.

The phenomenon of digital nationalism isn’t entirely new, of course. Authoritarian governments have long sought to rein in the internet. China has been the pioneer. Its Great Firewall, which restricts what people can read and do online, has served as a model for promoting what the country calls “digital sovereignty.” China’s efforts have had a powerful demonstration effect, showing other autocrats that the internet can be effectively controlled. China has also proved that powerful tech multinationals will exchange their stated principles for market access and that limiting online globalization can spur the growth of a vibrant domestic tech industry.

Several countries have built—or are contemplating—domestic networks modeled on the Chinese example. To control contact with the outside world and suppress dissident content, Iran has set up a so-called “halal net,” North Korea has its Kwangmyong network, and earlier this year, Vladimir Putin signed a “sovereign internet bill” that would likewise set up a self-sufficient Runet. The bill also includes a “kill switch” to shut off the global network to Russian users. This is an increasingly common practice. According to the New York Times, at least a quarter of the world’s countries have temporarily shut down the internet over the past four years….(More)”

AI script finds bias in movies before production starts


Springwise: “The GD-IQ (Geena Davis Inclusion Quotient) Spellcheck for Bias analysis tool reviews film and television scripts for equality and diversity. Geena Davis, the founder of the Geena Davis Institute on Gender in Media, recently announced a yearlong pilot programme with Walt Disney Studios. The Spellcheck for Bias tool will be used throughout the studio’s development process.

Funded by Google, the GD-IQ uses audio-visual processing technologies from the University of Southern California Viterbi School of Engineering together with Google’s machine learning capabilities. 

The tool’s analysis reveals the percentages of representation and dialogue broken down into categories of gender, race, LGBTQIA and disability representation. The analysis also highlights non-gender-identified speaking characters that could help improve equality and diversity. 

Designed to help identify unconscious bias before it becomes a publicly consumed piece of media, the tool also ranks the sophistication of the characters’ vocabulary and their relative level of power within the story.
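The GD-IQ itself relies on audio-visual processing and machine learning, but the simplest layer of the analysis it reports, dialogue share broken down by category, can be illustrated with a small sketch. The function and the toy script below are invented for illustration and are not part of the actual tool:

```python
# Hypothetical sketch of a dialogue-share analysis, the kind of
# representation breakdown the GD-IQ reports. The real tool works on
# full scripts with machine learning; this toy version counts words
# spoken per category in a hand-labeled list of lines.
from collections import Counter

def dialogue_share(lines):
    """lines: list of (character, category, dialogue) tuples.
    Returns the percentage of words spoken per category."""
    words = Counter()
    for _character, category, dialogue in lines:
        words[category] += len(dialogue.split())
    total = sum(words.values()) or 1  # avoid division by zero
    return {c: round(100 * n / total, 1) for c, n in words.items()}

# Invented three-line script for demonstration.
script = [
    ("ALEX", "male", "We need to leave before sunrise."),
    ("DANA", "female", "Agreed."),
    ("ALEX", "male", "Pack only what you can carry."),
]
print(dialogue_share(script))  # e.g. {'male': 92.3, 'female': 7.7}
```

Even this crude word count makes the imbalance in the toy script visible at a glance, which is the point of surfacing such numbers before production starts.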

The first study of film and television representation using the GD-IQ examined the top 200 grossing, non-animated films of 2014 and 2015. Unsurprisingly, the more diverse and equal a film’s characters were, the more money the film earned. …(More)”.

Merging the ‘Social’ and the ‘Public’: How Social Media Platforms Could Be a New Public Forum


Paper by Amélie Pia Heldt: “When Facebook and other social media sites announced in August 2018 that they would ban extremist speakers such as conspiracy theorist Alex Jones for violating their rules against hate speech, reactions were strong. Critics either complained that such measures were only a drop in the bucket with regard to toxic and harmful speech online, or accused Facebook & Co. of penalizing only right-wing speakers, hence censoring political opinions and joining some type of anti-conservative media conglomerate. This anecdote raised, above all, the question: Should someone like Alex Jones be excluded from Facebook? And the question of “should” includes that of “may Facebook exclude users for publishing political opinions?”.

As social media platforms take up more and more space in our daily lives, enabling not only individual and mass communication, but also offering payment and other services, there is still a need for a common understanding with regard to the social and communicative space they create in cyberspace. By common I mean on a global scale since this is the way most social media platforms operate or aim for (see Facebook’s mission statement: “bring the world closer together”). While in social science a new digital sphere was proclaimed and social media platforms can be categorized as “personal publics”, there is no such denomination in legal scholarship that is globally agreed upon. Public space can be defined as a free room between the state and society, as a space for freedom. Generally, it is where individuals are protected by their fundamental rights while operating in the public sphere. However, terms like forum, space, and sphere may not be used as synonyms in this discussion. Under the First Amendment, the public forum doctrine mainly serves the purposes of democracy and truth and could be perpetuated in communication services that promote direct dialogue between the state and citizens. But where and by whom is the public forum guaranteed in cyberspace? The notion of the public space in cyberspace is central and it constantly evolves as platforms become broader in their services, hence it needs to be examined more closely. When looking at social media platforms we need to take into account how they moderate speech and subsequently how they influence social processes. If representative democracies are built on the grounds of deliberation, it is essential to safeguard the room for public discourse to actually happen. Are constitutional concepts for the analog space transferable into the digital? Should private actors such as social media platforms be bound by freedom of speech without being considered state actors? And, accordingly, create a new type of public forum?

The goal of this article is to provide answers to the questions mentioned….(More)”.

Information Wars: How We Lost the Global Battle Against Disinformation and What We Can Do About It


Book by Richard Stengel: “Disinformation is as old as humanity. When Satan told Eve nothing would happen if she bit the apple, that was disinformation. But the rise of social media has made disinformation even more pervasive and pernicious in our current era. In a disturbing turn of events, governments are increasingly using disinformation to create their own false narratives, and democracies are proving not to be very good at fighting it.

During the final three years of the Obama administration, Richard Stengel, the former editor of Time and an Under Secretary of State, was on the front lines of this new global information war. At the time, he was the single person in government tasked with unpacking, disproving, and combating both ISIS’s messaging and Russian disinformation. Then, in 2016, as the presidential election unfolded, Stengel watched as Donald Trump used disinformation himself, weaponizing the grievances of Americans who felt left out by modernism. In fact, Stengel quickly came to see how all three players had used the same playbook: ISIS sought to make Islam great again; Putin tried to make Russia great again; and we all know about Trump.

In a narrative that is by turns dramatic and eye-opening, Information Wars walks readers through this often frustrating battle. Stengel moves through Russia and Ukraine, Saudi Arabia and Iraq, and introduces characters from Putin to Hillary Clinton, John Kerry and Mohamed bin Salman to show how disinformation is impacting our global society. He illustrates how ISIS terrorized the world using social media, and how the Russians launched a tsunami of disinformation around the annexation of Crimea – a scheme that became the model for their interference with the 2016 presidential election. An urgent book for our times, Information Wars stresses that we must find a way to combat this ever-growing threat to democracy….(More)”.

Democratic Transparency in the Platform Society


Chapter by Robert Gorwa and Timothy Garton Ash: “Following a host of major scandals, transparency has emerged in recent years as one of the leading accountability mechanisms through which the companies operating global platforms for user-generated content have attempted to regain the trust of the public, politicians, and regulatory authorities. Ranging from Facebook’s efforts to partner with academics and create a reputable mechanism for third-party data access and independent research to the expanded advertising disclosure tools being built for elections around the world, transparency is playing a major role in current governance debates around free expression, social media, and democracy.

This article thus seeks to (a) contextualize the recent implementation of transparency as enacted by platform companies with an overview of the ample relevant literature on digital transparency in both theory and practice; (b) consider the potential positive governance impacts of transparency as a form of accountability in the current political moment; and (c) reflect upon the potential shortfalls of transparency that should be considered by legislators, academics, and funding bodies weighing the relative benefits of policy or research dealing with transparency in this area…(More)”.