Bringing Truth to the Internet

Article by Karen Kornbluh and Ellen P. Goodman: “The first volume of Special Counsel Robert Mueller’s report notes that “sweeping” and “systematic” social media disinformation was a key element of Russian interference in the 2016 election. No sooner were Mueller’s findings public than Twitter suspended a host of bots that had been promoting a “Russiagate hoax.”

Since at least 2016, conspiracy theories like Pizzagate and QAnon have flourished online and bled into mainstream debate. Earlier this year, a British member of Parliament called social media companies “accessories to radicalization” for their role in hosting and amplifying radical hate groups after the New Zealand mosque shooter cited these groups and sought to fuel their growth. In Myanmar, anti-Rohingya forces used Facebook to spread rumors that spurred ethnic cleansing, according to a UN special rapporteur. These platforms are vulnerable to those who aim to exploit intolerance, peer pressure, and social disaffection. Our democracies are being compromised. They work only if the information ecosystem has integrity—if it privileges truth and channels difference into nonviolent discourse. But the ecosystem is increasingly polluted.

Around the world, a growing sense of urgency about the need to address online radicalization is leading countries to embrace ever more draconian solutions: After the Easter bombings in Sri Lanka, the government shut down access to Facebook, WhatsApp, and other social media platforms. And a number of countries are considering adopting laws requiring social media companies to remove unlawful hate speech or face hefty penalties. According to Freedom House, “In the past year, at least 17 countries approved or proposed laws that would restrict online media in the name of fighting ‘fake news’ and online manipulation.”

The flaw with these censorious remedies is this: They focus on the content the user sees—hate speech, violent videos, conspiracy theories—and not on the structural characteristics of social media design that create vulnerabilities. Content moderation requirements that cannot scale are not only doomed to be ineffective exercises in whack-a-mole but also raise free expression concerns by turning either governments or platforms into arbiters of acceptable speech. In some countries, such as Saudi Arabia, content moderation has become a justification for shutting down dissident speech.

When countries pressure platforms to root out vaguely defined harmful content while disregarding the design vulnerabilities that promote that content’s amplification, they are treating a symptom and ignoring the disease. The question isn’t “How do we moderate?” Instead, it is “How do we promote design change that optimizes for citizen control, transparency, and privacy online?”—exactly the values that the early Internet promised to embody….(More)”.