Forget technology — politicians pose the gravest misinformation threat


Article by Rasmus Nielsen: “This is set to be a big election year, including in India, Mexico, the US, and probably the UK. People will rightly be on their guard for misinformation, but much of the policy discussion on the topic ignores the most important source: members of the political elite.

As a social scientist working on political communication, I have spent years in these debates — which continue to be remarkably disconnected from what we know from research. Academic findings repeatedly underline the actual impact of politics, while policy documents focus persistently on the possible impact of new technologies.

Most recently, Britain’s National Cyber Security Centre (NCSC) has warned that “AI-created hyper-realistic bots will make the spread of disinformation easier and the manipulation of media for use in deepfake campaigns will likely become more advanced”. This is similar to warnings from many other public authorities, which ignore misinformation coming from the most senior levels of domestic politics. In the US, the Washington Post stopped counting after documenting at least 30,573 false or misleading claims made by Donald Trump as president. In the UK, the non-profit Full Fact has reported that as many as 50 MPs — including two prime ministers, cabinet ministers and shadow cabinet ministers — failed to correct false, unevidenced or misleading claims in 2022 alone, despite repeated calls to do so.

These are actual problems of misinformation, and the phenomenon is not new. Both George W Bush and Barack Obama’s administrations obfuscated on Afghanistan. Bush’s government and that of his UK counterpart Tony Blair advanced false and misleading claims in the run-up to the Iraq war. Prominent politicians have, over the years, denied the reality of human-induced climate change, proposed quack remedies for Covid-19, and so much more. These are examples of misinformation, and, at their most egregious, of disinformation — defined as spreading false or misleading information for political advantage or profit.

This basic point is strikingly absent from many policy documents — the NCSC report, for example, has nothing to say about domestic politics. It is not alone. Take the US Surgeon General’s 2021 advisory on confronting health misinformation, which calls for a “whole-of-society” approach — and yet contains nothing on politicians and curiously omits the many misleading claims made by the then sitting president during the pandemic, including touting hydroxychloroquine as a potential treatment…(More)”.

AI and Democracy’s Digital Identity Crisis


Essay by Shrey Jain, Connor Spelliscy, Samuel Vance-Law and Scott Moore: “AI-enabled tools have become sophisticated enough to allow a small number of individuals to run disinformation campaigns of an unprecedented scale. Privacy-preserving identity attestations can drastically reduce instances of impersonation and make disinformation easy to identify and potentially hinder. By understanding how identity attestations are positioned across the spectrum of decentralization, we can gain a better understanding of the costs and benefits of various attestations. In this paper, we discuss attestation types, including governmental, biometric, federated, and web of trust-based, and include examples such as e-Estonia, China’s social credit system, Worldcoin, OAuth, X (formerly Twitter), Gitcoin Passport, and EAS. We believe that the most resilient systems create an identity that evolves and is connected to a network of similarly evolving identities that verify one another. In this type of system, each entity contributes its respective credibility to the attestation process, creating a larger, more comprehensive set of attestations. We believe these systems could be the best approach to authenticating identity and protecting against some of the threats to democracy that AI can pose in the hands of malicious actors. However, governments will likely attempt to mitigate these risks by implementing centralized identity authentication systems; these centralized systems could themselves pose risks to the democratic processes they are built to defend. We therefore recommend that policymakers support the development of standards-setting organizations for identity, provide legal clarity for builders of decentralized tooling, and fund research critical to effective identity authentication systems…(More)”
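The web-of-trust design the authors favor is easiest to see in miniature. The sketch below is an illustrative assumption rather than the paper’s design: each identity accumulates attestations from peers, and its credibility blends its own prior with the discounted credibility of whoever vouches for it.

```python
# Toy web-of-trust attestation graph. The field names, weights, and
# recursive scoring rule are all illustrative assumptions, not the
# paper's proposal. Real systems also need Sybil resistance,
# revocation, and privacy-preserving proofs.
from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str
    prior: float = 0.1  # baseline trust before any attestations
    attesters: list["Identity"] = field(default_factory=list)

    def attest(self, other: "Identity") -> None:
        """Record that this identity vouches for `other`."""
        other.attesters.append(self)

    def credibility(self, depth: int = 2) -> float:
        """Own prior plus half the average credibility of attesters.

        The depth cap keeps cycles of mutual attestation from
        recursing forever.
        """
        if depth == 0 or not self.attesters:
            return self.prior
        avg = sum(a.credibility(depth - 1) for a in self.attesters) / len(self.attesters)
        return min(1.0, self.prior + 0.5 * avg)

alice, bob, carol = Identity("alice"), Identity("bob"), Identity("carol")
alice.attest(bob)
carol.attest(bob)
print(f"bob: {bob.credibility():.2f}, carol: {carol.credibility():.2f}")  # bob: 0.15, carol: 0.10
```

In this toy graph, bob outscores carol only because alice and carol contribute their own credibility to his attestations, which is the essay’s core claim about networks of mutually verifying identities.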

Networked Press Freedom


Book by Mike Ananny: “…offers a new way to think about freedom of the press in a time when media systems are in fundamental flux. Ananny challenges the idea that press freedom comes only from heroic, lone journalists who speak truth to power. Instead, drawing on journalism studies, institutional sociology, political theory, science and technology studies, and an analysis of ten years of journalism discourse about news and technology, he argues that press freedom emerges from social, technological, institutional, and normative forces that vie for power and fight for visions of democratic life. He shows how dominant, historical ideals of professionalized press freedom often mistook journalistic freedom from constraints for the public’s freedom to encounter the rich mix of people and ideas that self-governance requires. Ananny’s notion of press freedom ensures not only an individual right to speak, but also a public right to hear.

Seeing press freedom as essential for democratic self-governance, Ananny explores what publics need, what kind of free press they should demand, and how today’s press freedom emerges from intertwined collections of humans and machines. If someone says, “The public needs a free press,” Ananny urges us to ask in response, “What kind of public, what kind of freedom, and what kind of press?” Answering these questions shows what robust, self-governing publics need to demand of technologists and journalists alike…(More)”.

Disinformation and Civic Tech Research


Code for All Playbook: “The Disinformation and Civic Tech Playbook is a tool for people who are interested in understanding how civic tech can help confront disinformation. This guide will help you successfully advocate for and implement disinformation-fighting tools, programs, and campaigns from partners around the world.

In order to effectively fight misinformation at a societal scale, three stages of work must be completed in sequential order:

  1. Monitor or research the media environment (traditional, social, and/or messaging apps) for misinformation
  2. Verify and/or debunk
  3. Reach people with the truth and counter-message falsehoods

These stages ascend from least impactful to most impactful activity.

Researching misinformation in the media environment has no effect whatsoever on its own. Verifying and debunking falsehoods have limited utility unless stage three is also achieved: successfully reaching communities with true information in a way that gets through to them, and effectively counter-messaging the misinformation that spreads so easily.
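One way to make that dependency concrete is to read the stages as a pipeline in which each stage consumes the previous stage’s output. The sketch below is purely illustrative; every function name and stub is invented, since the playbook describes organizational stages rather than an API.

```python
# Hypothetical sketch of the playbook's three sequential stages. The
# stubs stand in for real tooling: social listening (stage 1),
# fact-checking workflows (stage 2), and community counter-messaging
# campaigns (stage 3).

def monitor(feeds: list[str]) -> list[str]:
    """Stage 1: collect candidate claims from media environments."""
    return [f"claim circulating on {feed}" for feed in feeds]

def verify(claims: list[str]) -> list[tuple[str, bool]]:
    """Stage 2: check each claim (human fact-checkers in practice)."""
    return [(claim, False) for claim in claims]  # stub: flag everything false

def counter_message(verdicts: list[tuple[str, bool]]) -> None:
    """Stage 3: the only stage that changes minds on its own. Publish
    corrections where the affected communities will actually see them."""
    for claim, is_true in verdicts:
        if not is_true:
            print(f"correction published for: {claim}")

# Stages 1 and 2 create value only insofar as they feed stage 3.
counter_message(verify(monitor(["social", "broadcast", "messaging apps"])))
```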

Unfortunately, the distribution of misinformation management projects to date seems to be the exact inverse of these stages. There has been an enormous amount of work to passively monitor and research media environments for misinformation. There is also a large amount of energy and resources dedicated to verifying and debunking misinformation through traditional fact-checking approaches. Whether because it is the hardest stage to achieve or simply the third in the sequence, relatively few misinformation management projects have made it to the final stage of genuinely getting through to people and experimenting with effective counter-messaging and counter-engagement (see The Sentinel Project interview for further discussion)…(More)”.

Experts: 90% of Online Content Will Be AI-Generated by 2026


Article by Maggie Harrison: “Don’t believe everything you see on the Internet” has been pretty standard advice for quite some time now. And according to a new report from European law enforcement group Europol, we have all the reason in the world to step up that vigilance.

“Experts estimate that as much as 90 percent of online content may be synthetically generated by 2026,” the report warned, adding that synthetic media “refers to media generated or manipulated using artificial intelligence.”

“In most cases, synthetic media is generated for gaming, to improve services or to improve the quality of life,” the report continued, “but the increase in synthetic media and improved technology has given rise to disinformation possibilities.”…

The report focused pretty heavily on disinformation, notably that driven by deepfake technology. But that 90 percent figure raises other questions, too — what do AI systems like DALL-E and GPT-3 mean for artists, writers, and other content-generating creators? And circling back to disinformation once more, what will the dissemination of information, not to mention the consumption of it, actually look like in an era driven by that degree of AI-generated digital stuff?…(More)”.

What if You Knew What You Were Missing on Social Media?


Article by Julia Angwin: “Social media can feel like a giant newsstand, with more choices than any newsstand ever. It contains news not only from journalism outlets, but also from your grandma, your friends, celebrities and people in countries you have never visited. It is a bountiful feast.

But so often you don’t get to pick from the buffet. On most social media platforms, algorithms use your behavior to home in on the posts you are shown. If you send a celebrity’s post to a friend but breeze past your grandma’s, the algorithm may display more posts like the celebrity’s in your feed. Even when you choose which accounts to follow, the algorithm still decides which posts to show you and which to bury.
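A toy sketch makes the mechanism concrete. The engagement signals and weights below are invented for illustration; real feed-ranking systems are vastly more elaborate and proprietary.

```python
# Toy behavior-driven feed ranker: each post is scored by the user's
# past engagement with its topic, with a share weighted far above a
# passive view. All numbers here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str

# Signals the platform has observed about this user, per topic.
engagement = {
    "celebrity": {"shares": 3, "views": 10},
    "family":    {"shares": 0, "views": 12},
}

def score(post: Post) -> float:
    signals = engagement.get(post.topic, {"shares": 0, "views": 0})
    return 5.0 * signals["shares"] + 0.1 * signals["views"]

feed = [Post("grandma", "family"), Post("pop star", "celebrity")]
feed.sort(key=score, reverse=True)
print([p.author for p in feed])  # ['pop star', 'grandma']
```

One forwarded celebrity post outweighs a dozen quiet views of grandma’s, which is how the buffet narrows without the reader ever choosing.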

There are a lot of problems with this model. There is the possibility of being trapped in filter bubbles, where we see only news that confirms our existing beliefs. There are rabbit holes, where algorithms can push people toward more extreme content. And there are engagement-driven algorithms that often reward content that is outrageous or horrifying.

Yet not one of those problems is as damaging as the problem of who controls the algorithms. Never has the power to control public discourse been so completely in the hands of a few profit-seeking corporations with no requirements to serve the public good.

Elon Musk’s takeover of Twitter, which he renamed X, has shown what can happen when an individual pushes a political agenda by controlling a social media company.

Since Mr. Musk bought the platform, he has repeatedly declared that he wants to defeat the “woke mind virus” — which he has struggled to define but largely seems to mean Democratic and progressive policies. He has reinstated accounts that were banned because of the white supremacist and antisemitic views they espoused. He has banned journalists and activists. He has promoted far-right figures such as Tucker Carlson and Andrew Tate, who were kicked off other platforms. He has changed the rules so that users can pay to have some posts boosted by the algorithm, and has purportedly changed the algorithm to boost his own posts. The result, as Charlie Warzel said in The Atlantic, is that the platform is now a “far-right social network” that “advances the interests, prejudices and conspiracy theories of the right wing of American politics.”

The Twitter takeover has been a public reckoning with algorithmic control, but any tech company could do something similar. To stop those who would hijack algorithms for power, we need a pro-choice movement for algorithms. We, the users, should be able to decide what we read at the newsstand…(More)”.

Journalism Is a Public Good and Should Be Publicly Funded


Essay by Patrick Walters: “News deserts” have proliferated across the U.S. Half of the nation’s more than 3,140 counties now have only one newspaper—and nearly 200 of them have no paper at all. Of the publications that survive, researchers have found many are “ghosts” of their former selves.

Journalism has problems nationally: CNN announced hundreds of layoffs at the end of 2022, and National Geographic laid off the last of its staff writers this June. That same month, the Los Angeles Times cut 13 percent of its newsroom staff. But the crisis is even more acute at the local level, with jobs in local news plunging from 71,000 in 2008 to 31,000 in 2020. Closures and cutbacks often leave people without reliable sources that can provide them with what the American Press Institute has described as “the information they need to make the best possible decisions about their daily lives.”

Americans need to understand that journalism is a vital public good—one that, like roads, bridges and schools, is worthy of taxpayer support. We are already seeing the disastrous effects of otherwise allowing news to disintegrate in the free market: namely, a steady supply of misinformation, often masquerading as legitimate news, and too many communities left without a quality source of local news. Former New York Times public editor Margaret Sullivan has called this a “crisis of American democracy.”

The terms “crisis” and “collapse” have become nearly ubiquitous in the past decade when describing the state of American journalism, which has been based on a for-profit commercial model since the rise of the “penny press” in the 1830s. Now that commercial model has collapsed amid the near disappearance of print advertising. Digital ads have not come close to closing the gap because Google and other platforms have “hoovered up everything,” as Emily Bell, founding director of the Tow Center for Digital Journalism at Columbia University, told the Nieman Journalism Lab in a 2018 interview. In June the newspaper chain Gannett sued Google’s parent company, alleging it has created an advertising monopoly that has devastated the news industry.

Other journalism models—including nonprofits such as MinnPost, collaborative efforts such as Broke in Philly and citizen journalism—have had some success in fulfilling what Lewis Friedland of the University of Wisconsin–Madison called “critical community information needs” in a chapter of the 2016 book The Communication Crisis in America, and How to Fix It. Friedland classified those needs as falling into eight areas: emergencies and risks, health and welfare, education, transportation, economic opportunities, the environment, civic information and political information. Nevertheless, these models have proven incapable of fully filling the void, as shown by the dearth of quality information during the early years of the COVID pandemic. Scholar Michelle Ferrier and others have worked to bring attention to how news deserts leave many rural and urban areas “impoverished by the lack of fresh, daily local news and information,” as Ferrier wrote in a 2018 article. A recent study also found evidence that U.S. judicial districts with lower newspaper circulation were likely to see fewer public corruption prosecutions.

A growing chorus of voices is now calling for government-funded journalism, a model that many in the profession have long seen as problematic…(More)”.

Wikipedia’s Moment of Truth


Article by Jon Gertner at the New York Times: “In early 2021, a Wikipedia editor peered into the future and saw what looked like a funnel cloud on the horizon: the rise of GPT-3, a precursor to the new chatbots from OpenAI. When this editor — a prolific Wikipedian who goes by the handle Barkeep49 on the site — gave the new technology a try, he could see that it was untrustworthy. The bot would readily mix fictional elements (a false name, a false academic citation) into otherwise factual and coherent answers. But he had no doubts about its potential. “I think A.I.’s day of writing a high-quality encyclopedia is coming sooner rather than later,” he wrote in “Death of Wikipedia,” an essay that he posted under his handle on Wikipedia itself. He speculated that a computerized model could, in time, displace his beloved website and its human editors, just as Wikipedia had supplanted the Encyclopaedia Britannica, which in 2012 announced it was discontinuing its print publication.

Recently, when I asked this editor — he asked me to withhold his name because Wikipedia editors can be the targets of abuse — if he still worried about his encyclopedia’s fate, he told me that the newer versions made him more convinced that ChatGPT was a threat. “It wouldn’t surprise me if things are fine for the next three years,” he said of Wikipedia, “and then, all of a sudden, in Year 4 or 5, things drop off a cliff.”…(More)”.

The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet


Book by Jeff Jarvis: “The age of print is a grand exception in history. For five centuries it fostered what some call print culture – a worldview shaped by the completeness, permanence, and authority of the printed word. As a technology, print at its birth was as disruptive as the digital migration of today. Now, as the internet ushers us past print culture, journalist Jeff Jarvis offers important lessons from the era we leave behind.

To understand our transition out of the Gutenberg Age, Jarvis first examines the transition into it. Tracking Western industrialized print to its origins, he explores its invention, spread, and evolution, as well as the bureaucracy and censorship that followed. He also reveals how print gave rise to the idea of the mass – mass media, mass market, mass culture, mass politics, and so on – that came to dominate the public sphere.

What can we glean from the captivating, profound, and challenging history of our devotion to print? Could it be that we are returning to a time before mass media, to a society built on conversation, and that we are relearning how to hold that conversation with ourselves? Brimming with broader implications for today’s debates over communication, authorship, and ownership, Jarvis’ exploration of print on a grand scale is also a complex, compelling history of technology and power…(More)”

Shallowfakes


Essay by James R. Ostrowski: “…This dystopian fantasy, we are told, is what the average social media feed looks like today: a war zone of high-tech disinformation operations, vying for your attention, your support, your compliance. Journalist Joseph Bernstein, in his 2021 Harper’s piece “Bad News,” attributes this perception of social media to “Big Disinfo” — a cartel of think tanks, academic institutions, and prestige media outlets that spend their days spilling barrels of ink into op-eds about foreign powers’ newest disinformation tactics. The technology’s specific impact is always vague, yet somehow devastating. Democracy is dying, shot in the chest by artificial intelligence.

The problem with Big Disinfo isn’t that disinformation campaigns aren’t happening but that claims of mind-warping, AI-enabled propaganda go largely unscrutinized and often amount to mere speculation. There is little systematic public information about the scale at which foreign governments use deepfakes, bot armies, or generative text in influence ops. What little we know is gleaned through irregular investigations or leaked documents. In lieu of data, Big Disinfo squints into the fog, crying “Bigfoot!” at every oak tree.

Any machine learning researcher will admit that there is a critical disconnect between what’s possible in the lab and what’s happening in the field. Take deepfakes. When the technology was first developed, public discourse was saturated with proclamations that it would slacken society’s grip on reality. A 2019 New York Times op-ed, indicative of the general sentiment of the time, was titled “Deepfakes Are Coming. We Can No Longer Believe What We See.” That same week, Politico sounded the alarm in its article “‘Nightmarish’: Lawmakers brace for swarm of 2020 deepfakes.” A Forbes article asked us to imagine a deepfake video of President Trump announcing a nuclear weapons launch against North Korea. These stories, like others in the genre, gloss over questions of practicality…(More)”.