Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems


Paper by Tim Wu: “Software has partially or fully displaced many former human activities, such as catching speeders or flying airplanes, and proven itself able to surpass humans in certain contests, like Chess and Jeopardy. What are the prospects for the displacement of human courts as the centerpiece of legal decision-making?

Based on the case study of hate speech control on major tech platforms, particularly on Twitter and Facebook, this Essay suggests that the displacement of human courts remains a distant prospect, but that hybrid machine–human systems are the predictable future of legal adjudication, and that there lies some hope in that combination, if done well….(More)”.

The Downside of Tech Hype


Jeffrey Funk at Scientific American: “Science and technology have been the largest drivers of economic growth for more than 100 years. But this contribution seems to be declining. Growth in labor productivity has slowed, corporate revenue growth per research dollar has fallen, the value of Nobel Prize–winning research has declined, and the number of researchers needed to develop new molecular entities (e.g., drugs), or to achieve the same percentage improvements in crop yields and in the number of transistors on a microprocessor chip (commonly known as Moore’s Law), has risen. More recently, the percentage of start-ups that are profitable at the time of their initial public stock offering has dropped to record lows not seen since the dot-com bubble, and start-ups such as Uber, Lyft and WeWork have accumulated losses much larger than any ever seen by start-ups, including Amazon.

Although the reasons for these changes are complex and unclear, one thing is certain: excessive hype about new technologies makes it harder for scientists, engineers and policy makers to objectively analyze and understand these changes, or to make good decisions about new technologies.

One driver of hype is the professional incentives of venture capitalists, entrepreneurs, consultants and universities. Venture capitalists have convinced decision makers that venture capital funding and start-ups are the new measures of their success. Professional and business service consultants hype technology for both incumbents and start-ups to make potential clients believe that new technologies make existing strategies, business models and worker skills obsolete every few years.

Universities are themselves a major source of hype. Their public relations offices often exaggerate the results of research papers, commonly implying that commercialization is close at hand, even though the researchers know it will take many years if not decades. Science and engineering courses often imply an easy path to commercialization, while misleading and inaccurate forecasts from Technology Review and Scientific American make it easier for business schools and entrepreneurship programs to claim that opportunities are everywhere and that incumbent firms are regularly being disrupted. With a growth in entrepreneurship programs from about 16 in 1970 to more than 2,000 in 2014, many young people now believe that being an entrepreneur is the cool thing to be, regardless of whether they have a good idea.

Hype from these types of experts is exacerbated by the growth of social media, the falling cost of website creation, blogging, posting of slides and videos and the growing number of technology news, investor and consulting websites….(More)”.

Defining concepts of the digital society


A special section of Internet Policy Review edited by Christian Katzenbach and Thomas Christian Bächle: “With this new special section Defining concepts of the digital society in Internet Policy Review, we seek to foster a platform that provides and validates exactly these overarching frameworks and theories. Based on the latest research, yet broad in scope, the contributions offer effective tools to analyse the digital society. Their authors offer concise articles that portray and critically discuss individual concepts with an interdisciplinary mindset. Each article contextualises the concept’s origin and academic traditions, analyses its contemporary usage in different research approaches, and discusses its social, political, cultural, ethical or economic relevance and impact as well as its analytical value. With this, the authors are building bridges between the disciplines, between research and practice as well as between innovative explanations and their conceptual heritage….(More)”

Algorithmic governance
Christian Katzenbach, Alexander von Humboldt Institute for Internet and Society
Lena Ulbricht, Berlin Social Science Center

Datafication
Ulises A. Mejias, State University of New York at Oswego
Nick Couldry, London School of Economics & Political Science

Filter bubble
Axel Bruns, Queensland University of Technology

Platformisation
Thomas Poell, University of Amsterdam
David Nieborg, University of Toronto
José van Dijck, Utrecht University

Privacy
Tobias Matzner, University of Paderborn
Carsten Ochs, University of Kassel

The Rising Threat of Digital Nationalism


Essay by Akash Kapur in the Wall Street Journal: “Fifty years ago this week, at 10:30 on a warm night at the University of California, Los Angeles, the first message was sent over the network that would become the internet. It was a decidedly local affair. A man sat in front of a teleprinter connected to an early precursor of the internet known as Arpanet and transmitted the message “login” to a colleague in Palo Alto. The system crashed; all that arrived at the Stanford Research Institute, some 350 miles away, was a truncated “lo.”

The network has moved on dramatically from those parochial—and stuttering—origins. Now more than 200 billion emails flow around the world every day. The internet has come to represent the very embodiment of globalization—a postnational public sphere, a virtual world impervious and even hostile to the control of sovereign governments (those “weary giants of flesh and steel,” as the cyberlibertarian activist John Perry Barlow famously put it in his Declaration of the Independence of Cyberspace in 1996).

But things have been changing recently. Nicholas Negroponte, a co-founder of the MIT Media Lab, once said that national law had no place in cyberlaw. That view seems increasingly anachronistic. Across the world, nation-states have been responding to a series of crises on the internet (some real, some overstated) by asserting their authority and claiming various forms of digital sovereignty. A network that once seemed to effortlessly defy regulation is being relentlessly, and often ruthlessly, domesticated.

From firewalls to shutdowns to new data-localization laws, a specter of digital nationalism now hangs over the network. This “territorialization of the internet,” as Scott Malcomson, a technology consultant and author, calls it, is fundamentally changing its character—and perhaps even threatening its continued existence as a unified global infrastructure.

The phenomenon of digital nationalism isn’t entirely new, of course. Authoritarian governments have long sought to rein in the internet. China has been the pioneer. Its Great Firewall, which restricts what people can read and do online, has served as a model for promoting what the country calls “digital sovereignty.” China’s efforts have had a powerful demonstration effect, showing other autocrats that the internet can be effectively controlled. China has also proved that powerful tech multinationals will exchange their stated principles for market access and that limiting online globalization can spur the growth of a vibrant domestic tech industry.

Several countries have built—or are contemplating—domestic networks modeled on the Chinese example. To control contact with the outside world and suppress dissident content, Iran has set up a so-called “halal net,” North Korea has its Kwangmyong network, and earlier this year, Vladimir Putin signed a “sovereign internet bill” that would likewise set up a self-sufficient Runet. The bill also includes a “kill switch” to shut off the global network to Russian users. This is an increasingly common practice. According to the New York Times, at least a quarter of the world’s countries have temporarily shut down the internet over the past four years….(More)”

AI script finds bias in movies before production starts


Springwise: “The GD-IQ (Geena Davis Inclusion Quotient) Spellcheck for Bias analysis tool reviews film and television scripts for equality and diversity. Geena Davis, the founder of the Geena Davis Institute on Gender in Media, recently announced a yearlong pilot programme with Walt Disney Studios. The Spellcheck for Bias tool will be used throughout the studio’s development process.

Funded by Google, the GD-IQ uses audio-visual processing technologies from the University of Southern California Viterbi School of Engineering together with Google’s machine learning capabilities. 

The tool’s analysis reveals the percentages of representation and dialogue broken down into categories of gender, race, LGBTQIA and disability representation. The analysis also highlights non-gender identified speaking characters that could help improve equality and diversity. 

Designed to help identify unconscious bias before it becomes a publicly consumed piece of media, the tool also ranks the sophistication of the characters’ vocabulary and their relative level of power within the story.

The first study of film and television representation using the GD-IQ examined the top 200 grossing, non-animated films of 2014 and 2015. Unsurprisingly, the more diverse and equal a film’s characters were, the more money the film earned. …(More)”.
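
The GD-IQ itself is proprietary and its internals are not public, but the kind of dialogue breakdown described above can be illustrated with a minimal Python sketch. Everything in it is a hypothetical stand-in: the DialogueLine structure, its fields and the sample data merely assume a script that has already been parsed and demographically tagged.

```python
# Illustrative sketch only -- not the GD-IQ, whose implementation is not public.
# Assumes a pre-parsed script in which each line of dialogue is already tagged
# with a character name and a (hypothetical) demographic attribute.
from collections import Counter
from dataclasses import dataclass

@dataclass
class DialogueLine:
    character: str
    gender: str   # e.g. "female", "male", "non-binary", "unspecified"
    words: int    # number of words spoken in this line

def dialogue_share(lines: list[DialogueLine]) -> dict[str, float]:
    """Return each gender category's share of total spoken words, as a percentage."""
    totals = Counter()
    for line in lines:
        totals[line.gender] += line.words
    all_words = sum(totals.values()) or 1  # avoid division by zero on an empty script
    return {gender: 100.0 * count / all_words for gender, count in totals.items()}

# Tiny usage example with invented data:
sample = [
    DialogueLine("ALEX", "female", 120),
    DialogueLine("SAM", "male", 340),
    DialogueLine("RIVER", "unspecified", 40),
]
print(dialogue_share(sample))  # {'female': 24.0, 'male': 68.0, 'unspecified': 8.0}
```

The same tally could be repeated over race, LGBTQIA or disability tags to produce the category-by-category percentages the article describes.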

Merging the ‘Social’ and the ‘Public’: How Social Media Platforms Could Be a New Public Forum


Paper by Amélie Pia Heldt: “When Facebook and other social media sites announced in August 2018 that they would ban extremist speakers such as conspiracy theorist Alex Jones for violating their rules against hate speech, reactions were strong. Critics either dismissed such measures as a drop in the bucket with regard to toxic and harmful speech online, or denounced Facebook & Co. for penalizing only right-wing speakers, hence censoring political opinions and joining some type of anti-conservative media conglomerate. This episode raised, first and foremost, the question: Should someone like Alex Jones be excluded from Facebook? And the question of “should” includes the question of “may Facebook exclude users for publishing political opinions?”.

As social media platforms take up more and more space in our daily lives, not only enabling individual and mass communication but also offering payment and other services, there is still a need for a common understanding with regard to the social and communicative space they create in cyberspace. By common I mean on a global scale, since that is how most social media platforms operate or aspire to operate (see Facebook’s mission statement: “bring the world closer together”). While in social science a new digital sphere has been proclaimed and social media platforms can be categorized as “personal publics”, there is no such denomination in legal scholarship that is globally agreed upon. Public space can be defined as a free space between the state and society, a space for freedom. Generally, it is where individuals are protected by their fundamental rights while operating in the public sphere. However, terms like forum, space, and sphere may not be used as synonyms in this discussion. Under the First Amendment, the public forum doctrine mainly serves the purposes of democracy and truth and could be perpetuated in communication services that promote direct dialogue between the state and citizens. But where and by whom is the public forum guaranteed in cyberspace? The notion of public space in cyberspace is central, and it constantly evolves as platforms broaden their services, hence it needs to be examined more closely. When looking at social media platforms, we need to take into account how they moderate speech and, subsequently, how they influence social processes. If representative democracies are built on the grounds of deliberation, it is essential to safeguard the space for public discourse to actually happen. Are constitutional concepts for the analog space transferable into the digital? Should private actors such as social media platforms be bound by freedom of speech without being considered state actors? And would they, accordingly, create a new type of public forum?

The goal of this article is to provide answers to the questions mentioned….(More)”.

Information Wars: How We Lost the Global Battle Against Disinformation and What We Can Do About It


Book by Richard Stengel: “Disinformation is as old as humanity. When Satan told Eve nothing would happen if she bit the apple, that was disinformation. But the rise of social media has made disinformation even more pervasive and pernicious in our current era. In a disturbing turn of events, governments are increasingly using disinformation to create their own false narratives, and democracies are proving not to be very good at fighting it.

During the final three years of the Obama administration, Richard Stengel, the former editor of Time and an Under Secretary of State, was on the front lines of this new global information war. At the time, he was the single person in government tasked with unpacking, disproving, and combating both ISIS’s messaging and Russian disinformation. Then, in 2016, as the presidential election unfolded, Stengel watched as Donald Trump used disinformation himself, weaponizing the grievances of Americans who felt left out by modernity. In fact, Stengel quickly came to see how all three players had used the same playbook: ISIS sought to make Islam great again; Putin tried to make Russia great again; and we all know about Trump.

In a narrative that is by turns dramatic and eye-opening, Information Wars walks readers through this often frustrating battle. Stengel moves through Russia and Ukraine, Saudi Arabia and Iraq, and introduces characters from Putin to Hillary Clinton, John Kerry and Mohamed bin Salman to show how disinformation is impacting our global society. He illustrates how ISIS terrorized the world using social media, and how the Russians launched a tsunami of disinformation around the annexation of Crimea – a scheme that became the model for their interference with the 2016 presidential election. An urgent book for our times, Information Wars stresses that we must find a way to combat this ever-growing threat to democracy….(More)”.

Democratic Transparency in the Platform Society


Chapter by Robert Gorwa and Timothy Garton Ash: “Following a host of major scandals, transparency has emerged in recent years as one of the leading accountability mechanisms through which the companies operating global platforms for user-generated content have attempted to regain the trust of the public, politicians, and regulatory authorities. From Facebook’s efforts to partner with academics and create a reputable mechanism for third-party data access and independent research, to the expanded advertising disclosure tools being built for elections around the world, transparency is playing a major role in current governance debates around free expression, social media, and democracy.

This article thus seeks to (a) contextualize the recent implementation of transparency as enacted by platform companies with an overview of the ample relevant literature on digital transparency in both theory and practice; (b) consider the potential positive governance impacts of transparency as a form of accountability in the current political moment; and (c) reflect upon the potential shortfalls of transparency that should be considered by legislators, academics, and funding bodies weighing the relative benefits of policy or research dealing with transparency in this area…(More)”.

We Need a PBS for Social Media


Mark Coatney at the New York Times: “Social media is an opportunity wrapped in a problem. YouTube spreads propaganda and is toxic to children. Twitter spreads propaganda and is toxic to racial relations. Facebook spreads propaganda and is toxic to democracy itself.

Such problems aren’t surprising when you consider that all these companies operate on the same basic model: Create a product that maximizes the attention you can command from a person, collect as much data as you can about that person, and sell it.

Proposed solutions like breaking up companies and imposing regulation have been met with resistance: The platforms, understandably, worry that their profits might be reduced from staggering to merely amazing. And this may not be the best course of action anyway.

What if the problem is something that can’t be solved by existing for-profit media platforms? Maybe the answer to fixing social media isn’t to try to change companies whose business models are built around products that hijack our attention, but instead to create a less toxic alternative.

Nonprofit public media is part of the answer. More than 50 years ago, President Lyndon Johnson signed the Public Broadcasting Act, committing federal funds to create public television and radio that would “be responsive to the interests of people.”

It isn’t a big leap to expand “public media” to include not just television and radio but also social media. In 2019, the definition of “media” is considerably larger than it was in 1967. Commentary on Twitter, memes on Instagram and performances on TikTok are all as much a part of the media landscape today as newspapers and television news.

Public media came out of a recognition that the broadcasting spectrum is a finite resource. TV broadcasters given licenses to use the spectrum were expected to provide programming like news and educational shows in return. But that was not enough. To make sure that some of that finite resource would always be used in the public interest, Congress established public media.

Today, the limited resource isn’t the spectrum — it’s our attention….(More)”.

Big Data, Political Campaigning and the Law


Book edited by Normann Witzleb, Moira Paterson, and Janice Richardson on “Democracy and Privacy in the Age of Micro-Targeting”…: “In this multidisciplinary book, experts from around the globe examine how data-driven political campaigning works, what challenges it poses for personal privacy and democracy, and how emerging practices should be regulated.

The rise of big data analytics in the political process has triggered official investigations in many countries around the world, and become the subject of broad and intense debate. Political parties increasingly rely on data analytics to profile the electorate and to target specific voter groups with individualised messages based on their demographic attributes. Political micro-targeting has become a major factor in modern campaigning, because of its potential to influence opinions, to mobilise supporters and to get out votes. The book explores the legal, philosophical and political dimensions of big data analytics in the electoral process. It demonstrates that the unregulated use of big personal data for political purposes not only infringes voters’ privacy rights, but also has the potential to jeopardise the future of the democratic process, and proposes reforms to address the key regulatory and ethical questions arising from the mining, use and storage of massive amounts of voter data.

Providing an interdisciplinary assessment of the use and regulation of big data in the political process, this book will appeal to scholars from law, political science, political philosophy, and media studies, policy makers and anyone who cares about democracy in the age of data-driven political campaigning….(More)”.
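
As a purely illustrative aside, the demographic micro-targeting the book examines can be sketched in a few lines of Python. The voter records, segments and message templates below are invented for the example; they stand in for the far larger voter files and models that real campaigns assemble.

```python
# Toy illustration of demographic micro-targeting -- not any party's or vendor's
# actual system. All field names, segments and messages are hypothetical.
from collections import defaultdict

voters = [
    {"id": 1, "age_band": "18-29", "region": "urban", "issue": "housing"},
    {"id": 2, "age_band": "65+",   "region": "rural", "issue": "healthcare"},
    {"id": 3, "age_band": "18-29", "region": "urban", "issue": "climate"},
]

# Hypothetical message templates keyed by an (age_band, region) segment.
messages = {
    ("18-29", "urban"): "Candidate X will build affordable housing in your city.",
    ("65+", "rural"): "Candidate X will protect rural healthcare services.",
}

def segment_voters(voter_file):
    """Group voter records into demographic segments for targeted messaging."""
    segments = defaultdict(list)
    for voter in voter_file:
        segments[(voter["age_band"], voter["region"])].append(voter)
    return segments

for segment, members in segment_voters(voters).items():
    message = messages.get(segment, "Candidate X: a fresh start for everyone.")
    print(f"{len(members)} voter(s) in segment {segment}: {message}")
```

Even this toy version makes the book's regulatory concern concrete: the individualised message each voter sees is determined entirely by the personal data held about them.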