Regulating Social Media: The Fight Over Section 230 and Beyond


Report by Paul M. Barrett: “Recently, Section 230 of the Communications Decency Act of 1996 has come under sharp attack from members of both political parties, including presidential candidates Donald Trump and Joe Biden. The foundational law of the commercial internet, Section 230 does two things: It protects platforms and websites from most lawsuits related to content posted by third parties. And it guarantees this shield from liability even if the platforms and sites actively police the content they host. This protection has encouraged internet companies to innovate and grow, even as it has raised serious questions about whether social media platforms adequately self-regulate harmful content. In addition to the assaults by Trump and Biden, members of Congress have introduced a number of bills designed to limit the reach of Section 230. Some critics have asserted unrealistically that repealing or curbing Section 230 would solve a wide range of problems relating to internet governance. These critics also have played down the potentially dire consequences that repeal would have for smaller internet companies. Academics, think tank researchers, and others outside of government have made a variety of more nuanced proposals for revising the law. We assess these ideas with an eye toward recommending and integrating the most promising ones. Our conclusion is that Section 230 ought to be preserved—but that it can be improved…(More)”

How Competition Impacts Data Privacy


Paper by Aline Blankertz: “A small number of large digital platforms increasingly shape the space for most online interactions around the globe and they often act with hardly any constraint from competing services. The lack of competition puts those platforms in a powerful position that may allow them to exploit consumers and offer them limited choice. Privacy is increasingly considered one area in which the lack of competition may create harm. Because of these concerns, governments and other institutions are developing proposals to expand the scope for competition authorities to intervene to limit the power of the large platforms and to revive competition.  


The first case that has explicitly addressed anticompetitive harm to privacy is the German Bundeskartellamt’s case against Facebook in which the authority argues that imposing bad privacy terms can amount to an abuse of dominance. Since that case started in 2016, more cases have dealt with the link between competition and privacy. For example, the proposed Google/Fitbit merger has raised concerns about sensitive health data being merged with existing Google profiles, and Apple is under scrutiny for not sharing certain personal data while using it for its own services.

However, addressing bad privacy outcomes through competition policy is effective only if those outcomes are caused, at least partly, by a lack of competition. Six distinct mechanisms can be distinguished through which competition may affect privacy, as summarized in Table 1. These mechanisms constitute different hypotheses through which less competition may influence privacy outcomes and lead either to worse privacy in different ways (mechanisms 1-5) or even better privacy (mechanism 6). The table also summarizes the available evidence on whether and to what extent the hypothesized effects are present in actual markets….(More)”.

Governance responses to disinformation: How open government principles can inform policy options


OECD paper by Craig Matasick, Carlotta Alfonsi and Alessandro Bellantoni: “This paper provides a holistic policy approach to the challenge of disinformation by exploring a range of governance responses that rest on the open government principles of transparency, integrity, accountability and stakeholder participation. It offers an analysis of the significant changes that are affecting media and information ecosystems, chief among them the growth of digital platforms. Drawing on the implications of this changing landscape, the paper focuses on four policy areas of intervention: public communication for a better dialogue between government and citizens; direct responses to identify and combat disinformation; legal and regulatory policy; and media and civic responses that support better information ecosystems. The paper concludes with proposed steps the OECD can take to build evidence and support policy in this space…(More)”.

The Truth Is Paywalled But The Lies Are Free


Essay by Nathan J. Robinson: “…This means that a lot of the most vital information will end up locked behind the paywall. And while I am not much of a New Yorker fan either, it’s concerning that the Hoover Institution will freely give you Richard Epstein’s infamous article downplaying the threat of coronavirus, but Isaac Chotiner’s interview demolishing Epstein requires a monthly subscription, meaning that the lie is more accessible than its refutation. Eric Levitz of New York is one of the best and most prolific left political commentators we have. But unless you’re a subscriber to New York, you won’t get to hear much of what he has to say each month.

Possibly even worse is the fact that so much academic writing is kept behind vastly more costly paywalls. A white supremacist on YouTube will tell you all about race and IQ, but if you want to read a careful scholarly refutation, obtaining a legal PDF from the journal publisher would cost you $14.95, a price nobody in their right mind would pay for one article if they can’t get institutional access. (I recently gave up on trying to access a scholarly article because I could not find a way to get it for less than $39.95, though in that case the article was garbage rather than gold.) Academic publishing is a nightmarish patchwork, with lots of articles advertised at exorbitant fees on one site, and then for free on another, or accessible only through certain databases, which your university or public library may or may not have access to. (Libraries have to budget carefully because subscription prices are often nuts. A library subscription to the Journal of Coordination Chemistry, for instance, costs $11,367 annually.) 

Of course, people can find their ways around paywalls. SciHub is a completely illegal but extremely convenient means of obtaining academic research for free. (I am purely describing it, not advocating it.) You can find a free version of the article debunking race and IQ myths on ResearchGate, a site that has engaged in mass copyright infringement in order to make research accessible. Often, because journal publishers tightly control access to their copyrighted work in order to charge those exorbitant fees for PDFs, the versions of articles that you can get for free are drafts that have not yet gone through peer review, and have thus been subjected to less scrutiny. This means that the more reliable an article is, the less accessible it is. On the other hand, pseudo-scholarship is easy to find. Right-wing think tanks like the Cato Institute, the Foundation for Economic Education, the Hoover Institution, the Mackinac Center, the American Enterprise Institute, and the Heritage Foundation pump out slickly-produced policy documents on every subject under the sun. They are utterly untrustworthy—the conclusion is always going to be “let the free market handle the problem,” no matter what the problem or what the facts of the case. But it is often dressed up to look sober-minded and non-ideological. 

It’s not easy or cheap to be an “independent researcher.” When I was writing my first book, Superpredator, I wanted to look through newspaper, magazine, and journal archives to find everything I could about Bill Clinton’s record on race. I was lucky I had a university affiliation, because this gave me access to databases like LexisNexis. If I hadn’t, the cost of finding out what I wanted to find out would likely have run into the thousands of dollars.  

A problem beyond cost, though, is convenience. I find that even when I am doing research through databases and my university library, it is often an absolute mess: the sites are clunky and constantly demanding login credentials. The amount of time wasted in figuring out how to obtain a piece of research material is a massive cost on top of the actual pricing. The federal court document database, PACER, for instance, charges 10 cents a page for access to records, which adds up quickly since legal research often involves looking through thousands of pages. They offer an exemption if you are a researcher or can’t afford it, but to get the exemption you have to fill out a three-page form and provide an explanation of both why you need each document and why you deserve the exemption. This is a waste of time that inhibits people’s productivity and limits their access to knowledge.
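The essay’s point that 10 cents a page “adds up quickly” is easy to make concrete. The sketch below is purely illustrative: the `pacer_cost` helper is hypothetical, and it deliberately ignores the per-document caps and fee exemptions that PACER’s actual billing includes.

```python
# Back-of-the-envelope PACER access cost at the essay's cited flat rate.
# (Hypothetical helper; real PACER billing includes caps and waivers
# that this sketch does not model.)

CENTS_PER_PAGE = 10  # the 10-cents-a-page rate cited in the essay

def pacer_cost(pages: int, cents_per_page: int = CENTS_PER_PAGE) -> float:
    """Estimated access cost in dollars, with no caps or exemptions applied."""
    return pages * cents_per_page / 100

# A research project touching a few thousand pages of court records:
for pages in (500, 5_000, 20_000):
    print(f"{pages:>6} pages -> ${pacer_cost(pages):,.2f}")
```

Even at a flat rate, a project that reads twenty thousand pages of filings would run to $2,000, which is exactly the kind of barrier the essay describes for independent researchers.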

In fact, to see just how much human potential is being squandered by having knowledge dispensed by the “free market,” let us briefly picture what “totally democratic and accessible knowledge” would look like…(More)”.

How behavioural sciences can promote truth, autonomy and democratic discourse online


Philipp Lorenz-Spreen, Stephan Lewandowsky, Cass R. Sunstein & Ralph Hertwig in Nature: “Public opinion is shaped in significant part by online content, spread via social media and curated algorithmically. The current online ecosystem has been designed predominantly to capture user attention rather than to promote deliberate cognition and autonomous choice; information overload, finely tuned personalization and distorted social cues, in turn, pave the way for manipulation and the spread of false information. How can transparency and autonomy be promoted instead, thus fostering the positive potential of the web? Effective web governance informed by behavioural research is critically needed to empower individuals online. We identify technologically available yet largely untapped cues that can be harnessed to indicate the epistemic quality of online content, the factors underlying algorithmic decisions and the degree of consensus in online debates. We then map out two classes of behavioural interventions—nudging and boosting—that enlist these cues to redesign online environments for informed and autonomous choice….(More)”.

The Misinformation Edition


On-Line Exhibition by the Glass Room: “…In this exhibition – aimed at young people as well as adults – we explore how social media and the web have changed the way we read information and react to it. Learn why finding “fake news” is not as easy as it sounds, and how the term “fake news” is as much a problem as the news it describes. Dive into the world of deep fakes, which are now so realistic that they are virtually impossible to detect. And find out why social media platforms are designed to keep us hooked, and how they can be used to change our minds. You can also read our free Data Detox Kit, which reveals how to tell facts from fiction and why it benefits everyone around us when we take a little more care about what we share…(More)”.


Tackling the misinformation epidemic with “In Event of Moon Disaster”


MIT Open Learning: “Can you recognize a digitally manipulated video when you see one? It’s harder than most people realize. As the technology to produce realistic “deepfakes” becomes more easily available, distinguishing fact from fiction will only get more challenging. A new digital storytelling project from MIT’s Center for Advanced Virtuality aims to educate the public about the world of deepfakes with “In Event of Moon Disaster.”

This provocative website showcases a “complete” deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be….

Alongside the film, moondisaster.org features an array of interactive and educational resources on deepfakes. Led by Panetta and Halsey Burgund, a fellow at MIT Open Documentary Lab, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource site where educators and media consumers can deepen their understanding of deepfakes: how they are made and how they work; their potential use and misuse; what is being done to combat deepfakes; and teaching and learning resources….(More)”.

Best Practices to Cover Ad Information Used for Research, Public Health, Law Enforcement & Other Uses


Press Release: “The Network Advertising Initiative (NAI) released privacy Best Practices for its members to follow if they use data collected for Tailored Advertising or Ad Delivery and Reporting for non-marketing purposes, such as sharing with research institutions, public health agencies, or law enforcement entities.

“Ad tech companies have data that can be a powerful resource for the public good if they follow this set of best practices for consumer privacy,” said Leigh Freund, NAI President and CEO. “During the COVID-19 pandemic, we’ve seen the opportunity for substantial public health benefits from sharing aggregate and de-identified location data.”

The NAI Code of Conduct – the industry’s premier self-regulatory framework for privacy, transparency, and consumer choice – covers data collected and used for Tailored Advertising or Ad Delivery and Reporting. The NAI Code has long addressed certain non-marketing uses of data collected for Tailored Advertising and Ad Delivery and Reporting by prohibiting any eligibility uses of such data, including uses for credit, insurance, healthcare, and employment decisions.

The NAI has always firmly believed that data collected for advertising purposes should not have a negative effect on consumers in their daily lives. However, over the past year, novel data uses have been introduced, especially during the recent health crisis. In the case of opted-in data such as Precise Location Information, a company may determine a user would benefit from more detailed disclosure in a just-in-time notice about non-marketing uses of the data being collected….(More)”.

Disinformation Tracker


Press Release: “Today, Global Partners Digital (GPD), ARTICLE 19, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), PROTEGE QV and the Centre for Human Rights of the University of Pretoria jointly launched an interactive map to track and analyse disinformation laws, policies and patterns of enforcement across Sub-Saharan Africa.

The map offers a bird’s-eye view of trends in state responses to disinformation across the region, as well as in-depth analysis of the state of play in individual countries, using a bespoke framework to assess whether laws, policies and other state responses are human rights-respecting. 

Developed against a backdrop of rapidly accelerating state action on COVID-19 related disinformation, the map is an open, iterative product. At the time of launch, it covers 31 countries (see below for the full list), with an aim to expand this in the coming months. All data, analysis and insight on the map has been generated by groups and actors based in Africa….(More)”.

Terms of Disservice: How Silicon Valley is Destructive by Design


Book by Dipayan Ghosh on “Designing a new digital social contract for our technological future…High technology presents a paradox. In just a few decades, it has transformed the world, making almost limitless quantities of information instantly available to billions of people and reshaping businesses, institutions, and even entire economies. But it also has come to rule our lives, addicting many of us to the march of megapixels across electronic screens both large and small.

Despite its undeniable value, technology is exacerbating deep social and political divisions in many societies. Elections influenced by fake news and unscrupulous hidden actors, the cyber-hacking of trusted national institutions, the vacuuming of private information by Silicon Valley behemoths, ongoing threats to vital infrastructure from terrorist groups and even foreign governments—all these concerns are now part of the daily news cycle and are certain to become increasingly serious into the future.

In this new world of endless technology, how can individuals, institutions, and governments harness its positive contributions while protecting each of us, no matter who or where we are?

In this book, a former Facebook public policy adviser who went on to assist President Obama in the White House offers practical ideas for using technology to create an open and accessible world that protects all consumers and civilians. As a computer scientist turned policymaker, Dipayan Ghosh answers the biggest questions about technology facing the world today. Providing clear and understandable explanations for complex issues, Terms of Disservice will guide industry leaders, policymakers, and the general public as we think about how we ensure that the Internet works for everyone, not just Silicon Valley….(More)”.