Researchers Develop Faster Way to Replace Bad Data With Accurate Information


NCSU Press Release: “Researchers from North Carolina State University and the Army Research Office have demonstrated a new model of how competing pieces of information spread in online social networks and the Internet of Things (IoT). The findings could be used to disseminate accurate information more quickly, displacing false information about anything from computer security to public health….

In their paper, the researchers show that a network’s size plays a significant role in how quickly “good” information can displace “bad” information. However, a large network is not necessarily better or worse than a small one. Instead, the speed at which good data travels is primarily affected by the network’s structure.

A highly interconnected network can disseminate new data very quickly. And the larger the network, the faster the new data will travel.

However, in networks that are connected primarily by a limited number of key nodes, those nodes serve as bottlenecks. As a result, the larger this type of network is, the slower the new data will travel.

The researchers also identified an algorithm that can be used to assess which point in a network would allow you to spread new data throughout the network most quickly.
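The release does not describe the algorithm itself, but the underlying idea — ranking nodes by how close they sit to the rest of the network, so the best "injection point" for new data can be found — can be illustrated with a minimal sketch. This is a stdlib-only toy using breadth-first search and closeness as the ranking heuristic; the graph and the heuristic are illustrative assumptions, not the authors' actual method:

```python
from collections import deque

def closeness(graph, source):
    """Average shortest-path distance from `source` to all reachable nodes
    (lower = better placed to spread information quickly)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    others = [d for n, d in dist.items() if n != source]
    return sum(others) / len(others) if others else float("inf")

def best_injection_point(graph):
    """Node with the smallest average distance to the rest of the network."""
    return min(graph, key=lambda n: closeness(graph, n))

# A hub-and-spoke network: node "A" bridges two otherwise separate clusters,
# so it is both the fastest injection point and the potential bottleneck.
graph = {
    "A": ["B", "C", "D", "E"],
    "B": ["A", "C"], "C": ["A", "B"],
    "D": ["A", "E"], "E": ["A", "D"],
}
print(best_injection_point(graph))  # "A" — the hub reaches every node in one hop
```

The same toy graph also illustrates the bottleneck effect described above: every path between the two clusters runs through "A", so as the clusters grow, propagation through that single key node slows.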

“Practically speaking, this could be used to ensure that an IoT network purges old data as quickly as possible and is operating with new, accurate data,” Wenye Wang says.

“But these findings are also applicable to online social networks, and could be used to facilitate the spread of accurate information regarding subjects that affect the public,” says Jie Wang. “For example, we think it could be used to combat misinformation online.”…(More)”

Full paper: “Modeling and Analysis of Conflicting Information Propagation in a Finite Time Horizon”

The Law and Economics of Online Republication


Paper by Ronen Perry: “Jerry publishes unlawful content about Newman on Facebook, Elaine shares Jerry’s post, the share automatically turns into a tweet because her Facebook and Twitter accounts are linked, and George immediately retweets it. Should Elaine and George be liable for these republications? The question is neither theoretical nor idiosyncratic. On occasion, it reaches the headlines, as when Jennifer Lawrence’s representatives announced she would sue every person involved in the dissemination, through various online platforms, of her illegally obtained nude pictures. Yet this is only the tip of the iceberg. Numerous potentially offensive items are reposted daily, their exposure expands in widening circles, and they sometimes “go viral.”

This Article is the first to provide a law and economics analysis of the question of liability for online republication. Its main thesis is that liability for republication generates a specter of multiple defendants that might dilute the originator’s liability and undermine its deterrent effect. The Article concludes that, subject to several exceptions and methodological caveats, only the originator should be liable. This seems to be the American rule, as enunciated in Batzel v. Smith and Barrett v. Rosenthal. It stands in stark contrast to the prevalent rules in other Western jurisdictions and has been challenged by scholars on various grounds since its very inception.

The Article unfolds in three Parts. Part I presents the legal framework. It first discusses the rules applicable to republication of self-created content, focusing on the emergence of the single publication rule and its natural extension to online republication. It then turns to republication of third-party content. American law makes a clear-cut distinction between offline republication which gives rise to a new cause of action against the republisher (subject to a few limited exceptions), and online republication which enjoys an almost absolute immunity under § 230 of the Communications Decency Act. Other Western jurisdictions employ more generous republisher liability regimes, which usually require endorsement, a knowing expansion of exposure or repetition.

Part II offers an economic justification for the American model. Law and economics literature has shown that attributing liability for constant indivisible harm to multiple injurers, where each could have single-handedly prevented that harm (“alternative care” settings), leads to dilution of liability. Online republication scenarios often involve multiple tortfeasors. However, they differ from previously analyzed phenomena because they are not alternative care situations, and because the harm—increased by the conduct of each tortfeasor—is not constant and indivisible. Part II argues that neither feature precludes the dilution argument. It explains that the impact of the multiplicity of injurers in the online republication context on liability and deterrence provides a general justification for the American rule. This rule’s relatively low administrative costs afford additional support.

Part III considers the possible limits of the theoretical argument. It maintains that exceptions to the exclusive originator liability rule should be recognized when the originator is unidentifiable or judgment-proof, and when either the republisher’s identity or the republication’s audience was unforeseeable. It also explains that the rule does not preclude liability for positive endorsement with a substantial addition, which constitutes a new original publication, or for the dissemination of illegally obtained content, which is an independent wrong. Lastly, Part III addresses possible challenges to the main argument’s underlying assumptions, namely that liability dilution is a real risk and that it is undesirable….(More)”.

Automation in Moderation


Article by Hannah Bloch-Wehba: “This Article assesses recent efforts to compel or encourage online platforms to use automated means to prevent the dissemination of unlawful online content before it is ever seen or distributed. As lawmakers in Europe and around the world closely scrutinize platforms’ “content moderation” practices, automation and artificial intelligence appear increasingly attractive options for ridding the Internet of many kinds of harmful online content, including defamation, copyright infringement, and terrorist speech. Proponents of these initiatives suggest that requiring platforms to screen user content using automation will promote healthier online discourse and will aid efforts to limit Big Tech’s power.

In fact, however, the regulations that incentivize platforms to use automation in content moderation come with unappreciated costs for civil liberties and unexpected benefits for platforms. The new automation techniques exacerbate existing risks to free speech and user privacy and create ripe new sources of information for surveillance, aggravating threats to free expression, associational rights, religious freedoms, and equality. Automation also worsens transparency and accountability deficits. Far from curtailing private power, the new regulations endorse and expand platform authority to police online speech, with little in the way of oversight and few countervailing checks. New regulations of online intermediaries should therefore incorporate checks on the use of automation to avoid exacerbating these dynamics. Carefully drawn transparency obligations, algorithmic accountability mechanisms, and procedural safeguards can help to ameliorate the effects of these regulations on users and competition…(More)”.

Beyond Takedown: Expanding the Toolkit for Responding to Online Hate


Paper by Molly K. Land and Rebecca J. Hamilton: “The current preoccupation with ‘fake news’ has spurred a renewed emphasis in popular discourse on the potential harms of speech. In the world of international law, however, ‘fake news’ is far from new. Propaganda of various sorts is a well-worn tactic of governments, and in its most insidious form, it has played an instrumental role in inciting and enabling some of the worst atrocities of our time. Yet as familiar as propaganda might be in theory, it is raising new issues as it has migrated to the digital realm. Technological developments have largely outpaced existing legal and political tools for responding to the use of mass communications devices to instigate or perpetrate human rights violations.

This chapter evaluates the current practices of social media companies for responding to online hate, arguing that they are inevitably both overbroad and under-inclusive. Using the example of the role played by Facebook in the recent genocide against the minority Muslim Rohingya population in Myanmar, the chapter illustrates the failure of platform hate speech policies to address pervasive and coordinated online speech, often state-sponsored or state-aligned, denigrating a particular group that is used to justify or foster impunity for violence against that group. Addressing this “conditioning speech” requires a more tailored response that includes remedies other than content removal and account suspensions. The chapter concludes by surveying a range of innovative responses to harmful online content that would give social media platforms the flexibility to intervene earlier, but with a much lighter touch….(More)”.

An Internet for the People: The Politics and Promise of craigslist


Book by Jessa Lingel: “Begun by Craig Newmark as an e-mail to some friends about cool events happening around San Francisco, craigslist is now the leading classifieds service on the planet. It is also a throwback to the early internet. The website has barely seen an upgrade since it launched in 1996. There are no banner ads. The company doesn’t profit off your data. An Internet for the People explores how people use craigslist to buy and sell, find work, and find love—and reveals why craigslist is becoming a lonely outpost in an increasingly corporatized web.

Drawing on interviews with craigslist insiders and ordinary users, Jessa Lingel looks at the site’s history and values, showing how it has mostly stayed the same while the web around it has become more commercial and far less open. She examines craigslist’s legal history, describing the company’s courtroom battles over issues of freedom of expression and data privacy, and explains the importance of locality in the social relationships fostered by the site. More than an online garage sale, job board, or dating site, craigslist holds vital lessons for the rest of the web. It is a website that values user privacy over profits, ease of use over slick design, and an ethos of the early web that might just hold the key to a more open, transparent, and democratic internet….(More)”.

News as Surveillance


Paper by Erin Carroll: “As inhabitants of the Information Age, we are increasingly aware of the amount and kind of data that technology platforms collect on us. Far less publicized, however, is how much data news organizations collect on us as we read the news online and how they allow third parties to collect that personal data as well. A handful of studies by computer scientists reveal that, as a group, news websites are among the Internet’s worst offenders when it comes to tracking their visitors.

On the one hand, this surveillance is unsurprising. It is capitalism at work. The press’s business model has long been advertising-based. Yet, today this business model raises particular First Amendment concerns. The press, a named beneficiary of the First Amendment and a First Amendment institution, is gathering user reading history. This is a violation of what legal scholars call “intellectual privacy”—a right foundational to our First Amendment free speech rights.

And because of the perpetrator, this surveillance has the potential to cause far-reaching harms. Not only does it injure the individual reader or citizen, it injures society. News consumption helps each of us engage in the democratic process. It is, in fact, practically a prerequisite to our participation. Moreover, for an institution whose success is dependent on its readers’ trust, one that checks abuses of power, this surveillance seems like a special brand of betrayal.

Rather than an attack on journalists or journalism, this Essay is an attack on a particular press business model. It is also a call to grapple with it before the press faces greater public backlash. Originally given as the keynote for the Washburn Law Journal’s symposium, The Future of Cyber Speech, Media, and Privacy, this Essay argues for transforming and diversifying press business models and offers up other suggestions for minimizing the use of news as surveillance…(More)”.

Shining light into the dark spaces of chat apps


Sharon Moshavi at Columbia Journalism Review: “News has migrated from print to the web to social platforms to mobile. Now, at the dawn of a new decade, it is heading to a place that presents a whole new set of challenges: the private, hidden spaces of instant messaging apps.  

WhatsApp, Facebook Messenger, Telegram, and their ilk are platforms that journalists cannot ignore — even in the US, where chat-app usage is low. “I believe a privacy-focused communications platform will become even more important than today’s open platforms,” Mark Zuckerberg, Facebook’s CEO, wrote in March 2019. By 2022, three billion people will be using them on a regular basis, according to Statista.

But fewer journalists worldwide are using these platforms to disseminate news than they were two years ago, as ICFJ discovered in its 2019 “State of Technology in Global Newsrooms” survey. That’s a particularly dangerous trend during an election year, because messaging apps are potential minefields of misinformation. 

American journalists should take stock of recent elections in India and Brazil, ahead of which misinformation flooded WhatsApp. ICFJ’s “TruthBuzz” projects found coordinated and widespread disinformation efforts using text, videos, and photos on that platform.  

This trend is particularly troubling given that more people now use these apps as a primary source of news. In Brazil, one in four internet users consult WhatsApp weekly as a news source. A recent report from New York University’s Center for Business and Human Rights warned that WhatsApp “could become a troubling source of false content in the US, as it has been during elections in Brazil and India.” It’s imperative that news media figure out how to map the contours of these opaque, unruly spaces, and deliver fact-based news to those who congregate there….(More)”.

Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems


Paper by Tim Wu: “Software has partially or fully displaced many former human activities, such as catching speeders or flying airplanes, and proven itself able to surpass humans in certain contests, like chess and Jeopardy!. What are the prospects for the displacement of human courts as the centerpiece of legal decision-making?

Based on a case study of hate speech control on major tech platforms, particularly Twitter and Facebook, this Essay suggests that the displacement of human courts remains a distant prospect, but that hybrid machine–human systems are the predictable future of legal adjudication, and that there lies some hope in that combination, if done well….(More)”.

The Downside of Tech Hype


Jeffrey Funk at Scientific American: “Science and technology have been the largest drivers of economic growth for more than 100 years. But this contribution seems to be declining. Growth in labor productivity has slowed, corporate revenue growth per research dollar has fallen, the value of Nobel Prize–winning research has declined, and the number of researchers needed to develop new molecular entities (e.g., drugs) or to deliver the same percentage improvements in crop yields and in the number of transistors on a microprocessor chip (commonly known as Moore’s Law) has risen. More recently, the percentage of profitable start-ups at the time of their initial public stock offering has dropped to record lows not seen since the dot-com bubble, and start-ups such as Uber, Lyft and WeWork have accumulated losses far larger than those of any earlier start-up, including Amazon.

Although the reasons for these changes are complex and unclear, one thing is certain: excessive hype about new technologies makes it harder for scientists, engineers and policy makers to objectively analyze and understand these changes, or to make good decisions about new technologies.

One driver of hype is the professional incentives of venture capitalists, entrepreneurs, consultants and universities. Venture capitalists have convinced decision makers that venture capitalist funding and start-ups are the new measures of their success. Professional and business service consultants hype technology for both incumbents and start-ups to make potential clients believe that new technologies make existing strategies, business models and worker skills obsolete every few years.

Universities are themselves a major source of hype. Their public relations offices often exaggerate the results of research papers, commonly implying that commercialization is close at hand, even though the researchers know it will take many years if not decades. Science and engineering courses often imply an easy path to commercialization, while misleading and inaccurate forecasts from Technology Review and Scientific American make it easier for business schools and entrepreneurship programs to claim that opportunities are everywhere and that incumbent firms are regularly being disrupted. With a growth in entrepreneurship programs from about 16 in 1970 to more than 2,000 in 2014, many young people now believe that being an entrepreneur is the cool thing to be, regardless of whether they have a good idea.

Hype from these types of experts is exacerbated by the growth of social media; the falling cost of website creation, blogging, and posting slides and videos; and the growing number of technology news, investor, and consulting websites….(More)”.

Defining concepts of the digital society


A special section of Internet Policy Review edited by Christian Katzenbach and Thomas Christian Bächle: “With this new special section Defining concepts of the digital society in Internet Policy Review, we seek to foster a platform that provides and validates exactly these overarching frameworks and theories. Based on the latest research, yet broad in scope, the contributions offer effective tools to analyse the digital society. Their authors offer concise articles that portray and critically discuss individual concepts with an interdisciplinary mindset. Each article contextualises their origin and academic traditions, analyses their contemporary usage in different research approaches and discusses their social, political, cultural, ethical or economic relevance and impact as well as their analytical value. With this, the authors are building bridges between the disciplines, between research and practice as well as between innovative explanations and their conceptual heritage….(More)”

Algorithmic governance
Christian Katzenbach, Alexander von Humboldt Institute for Internet and Society
Lena Ulbricht, Berlin Social Science Center

Datafication
Ulises A. Mejias, State University of New York at Oswego
Nick Couldry, London School of Economics & Political Science

Filter bubble
Axel Bruns, Queensland University of Technology

Platformisation
Thomas Poell, University of Amsterdam
David Nieborg, University of Toronto
José van Dijck, Utrecht University

Privacy
Tobias Matzner, University of Paderborn
Carsten Ochs, University of Kassel