Networked Press Freedom


Book by Mike Ananny: “…offers a new way to think about freedom of the press in a time when media systems are in fundamental flux. Ananny challenges the idea that press freedom comes only from heroic, lone journalists who speak truth to power. Instead, drawing on journalism studies, institutional sociology, political theory, science and technology studies, and an analysis of ten years of journalism discourse about news and technology, he argues that press freedom emerges from social, technological, institutional, and normative forces that vie for power and fight for visions of democratic life. He shows how dominant, historical ideals of professionalized press freedom often mistook journalistic freedom from constraints for the public’s freedom to encounter the rich mix of people and ideas that self-governance requires. Ananny’s notion of press freedom ensures not only an individual right to speak, but also a public right to hear.

Seeing press freedom as essential for democratic self-governance, Ananny explores what publics need, what kind of free press they should demand, and how today’s press freedom emerges from intertwined collections of humans and machines. If someone says, “The public needs a free press,” Ananny urges us to ask in response, “What kind of public, what kind of freedom, and what kind of press?” Answering these questions shows what robust, self-governing publics need to demand of technologists and journalists alike…(More)”.

Disinformation and Civic Tech Research


Code for All Playbook: “The Disinformation and Civic Tech Playbook is a tool for people who are interested in understanding how civic tech can help confront disinformation. This guide will help you successfully advocate for and implement disinfo-fighting tools, programs, and campaigns from partners around the world.

In order to effectively fight misinformation at a societal scale, three stages of work must be completed in sequential order:

  1. Monitor or research media environment (traditional, social, and/or messaging apps) for misinformation
  2. Verify and/or debunk
  3. Reach people with the truth and counter-message falsehoods

These stages ascend from least to most impactful.

Researching misinformation in the media environment has no effect whatsoever on its own. Verifying and debunking falsehoods have limited utility unless stage three is also achieved: successfully reaching communities with true information in a way that gets through to them, and effectively counter-messaging the misinformation that spreads so easily.

Unfortunately, the distribution of misinformation management projects to date seems to be the exact inverse of these stages. There has been an enormous amount of work to passively monitor and research media environments for misinformation. There is also a large amount of energy and resources dedicated to verifying and debunking misinformation through traditional fact-checking approaches. Whether because it is the hardest stage to solve or simply third in the sequence, relatively few misinformation management projects have made it to the final stage of genuinely getting through to people and experimenting with effective counter-messaging and counter-engagement (see The Sentinel Project interview for further discussion)…(More)”.
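Read as software, the playbook’s three stages form a strict pipeline, which makes the sequencing argument concrete. Below is a minimal sketch in Python; `fact_check` and `send_correction` are hypothetical stand-ins for real fact-checking and counter-messaging tools, and none of the names come from the playbook itself.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source: str          # e.g. "social media", "messaging app", "local radio"
    debunked: bool = False
    people_reached: int = 0

def monitor(feeds: dict[str, list[str]]) -> list[Claim]:
    """Stage 1: collect suspect claims from the media environment."""
    return [Claim(text=post, source=name)
            for name, posts in feeds.items() for post in posts]

def verify(claims: list[Claim], fact_check) -> list[Claim]:
    """Stage 2: keep only the claims the fact-checking step judges false."""
    for claim in claims:
        claim.debunked = not fact_check(claim.text)
    return [c for c in claims if c.debunked]

def counter_message(debunked: list[Claim], send_correction) -> int:
    """Stage 3: reach people with the truth. Without this step,
    stages 1 and 2 produce research, not impact."""
    for claim in debunked:
        claim.people_reached = send_correction(claim)
    return sum(c.people_reached for c in debunked)

# Toy run: every helper here is a stand-in, not a real tool.
claims = monitor({"social": ["the vote was moved to Wednesday"]})
false_claims = verify(claims, fact_check=lambda text: False)   # all judged false
total_reached = counter_message(false_claims, send_correction=lambda c: 1200)
```

The structure mirrors the playbook’s complaint: if `counter_message` is never called, the first two stages terminate in a list of debunked claims that no affected community ever sees.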

Experts: 90% of Online Content Will Be AI-Generated by 2026


Article by Maggie Harrison: “Don’t believe everything you see on the Internet” has been pretty standard advice for quite some time now. And according to a new report from European law enforcement group Europol, we have all the reason in the world to step up that vigilance.

“Experts estimate that as much as 90 percent of online content may be synthetically generated by 2026,” the report warned, adding that synthetic media “refers to media generated or manipulated using artificial intelligence.”

“In most cases, synthetic media is generated for gaming, to improve services or to improve the quality of life,” the report continued, “but the increase in synthetic media and improved technology has given rise to disinformation possibilities.”…

The report focused pretty heavily on disinformation, notably that driven by deepfake technology. But that 90 percent figure raises other questions, too — what do AI systems like Dall-E and GPT-3 mean for artists, writers, and other content-generating creators? And circling back to disinformation once more, what will the dissemination of information, not to mention the consumption of it, actually look like in an era driven by that degree of AI-generated digital stuff?…(More)”.

What if You Knew What You Were Missing on Social Media?


Article by Julia Angwin: “Social media can feel like a giant newsstand, with more choices than any newsstand ever. It contains news not only from journalism outlets, but also from your grandma, your friends, celebrities and people in countries you have never visited. It is a bountiful feast.

But so often you don’t get to pick from the buffet. On most social media platforms, algorithms use your behavior to narrow in on the posts you are shown. If you send a celebrity’s post to a friend but breeze past your grandma’s, it may display more posts like the celebrity’s in your feed. Even when you choose which accounts to follow, the algorithm still decides which posts to show you and which to bury.

There are a lot of problems with this model. There is the possibility of being trapped in filter bubbles, where we see only news that confirms our existing beliefs. There are rabbit holes, where algorithms can push people toward more extreme content. And there are engagement-driven algorithms that often reward content that is outrageous or horrifying.

Yet not one of those problems is as damaging as the problem of who controls the algorithms. Never has the power to control public discourse been so completely in the hands of a few profit-seeking corporations with no requirements to serve the public good.

Elon Musk’s takeover of Twitter, which he renamed X, has shown what can happen when an individual pushes a political agenda by controlling a social media company.

Since Mr. Musk bought the platform, he has repeatedly declared that he wants to defeat the “woke mind virus” — which he has struggled to define but largely seems to mean Democratic and progressive policies. He has reinstated accounts that were banned because of the white supremacist and antisemitic views they espoused. He has banned journalists and activists. He has promoted far-right figures such as Tucker Carlson and Andrew Tate, who were kicked off other platforms. He has changed the rules so that users can pay to have some posts boosted by the algorithm, and has purportedly changed the algorithm to boost his own posts. The result, as Charlie Warzel said in The Atlantic, is that the platform is now a “far-right social network” that “advances the interests, prejudices and conspiracy theories of the right wing of American politics.”

The Twitter takeover has been a public reckoning with algorithmic control, but any tech company could do something similar. To prevent those who would hijack algorithms for power, we need a pro-choice movement for algorithms. We, the users, should be able to decide what we read at the newsstand…(More)”.
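Angwin’s “pro-choice movement for algorithms” has a concrete software shape: the ranking function becomes a user setting rather than a platform decision. Here is a minimal sketch in Python; the post fields, weights, and ranker names are invented for illustration and are not any platform’s real formula.

```python
from typing import Callable

Post = dict                      # post fields below are invented for the example
Ranker = Callable[[list[Post]], list[Post]]

def engagement(posts: list[Post]) -> list[Post]:
    """Engagement-driven ranking: rewards whatever provokes reactions,
    which is how outrageous content floats to the top."""
    def score(p: Post) -> int:
        return p["likes"] + 2 * p["comments"] + 3 * p["shares"]
    return sorted(posts, key=score, reverse=True)

def chronological(posts: list[Post]) -> list[Post]:
    """Newest-first: no behavioral targeting at all."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

# The "newsstand": the reader, not the platform, picks the ordering.
RANKERS: dict[str, Ranker] = {"popular": engagement, "newest": chronological}

def build_feed(posts: list[Post], choice: str = "newest") -> list[Post]:
    return RANKERS[choice](posts)
```

Under the `popular` ranker, a divisive post with many comments and shares outranks a well-liked but quiet one; switching to `newest` removes that editorial judgment entirely. That, in miniature, is the choice Angwin argues readers should hold.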

Journalism Is a Public Good and Should Be Publicly Funded


Essay by Patrick Walters: “News deserts” have proliferated across the U.S. Half of the nation’s more than 3,140 counties now have only one newspaper—and nearly 200 of them have no paper at all. Of the publications that survive, researchers have found many are “ghosts” of their former selves.

Journalism has problems nationally: CNN announced hundreds of layoffs at the end of 2022, and National Geographic laid off the last of its staff writers this June. That same month the Los Angeles Times cut 13 percent of its newsroom staff. But the crisis is even more acute at the local level, with jobs in local news plunging from 71,000 in 2008 to 31,000 in 2020. Closures and cutbacks often leave people without reliable sources that can provide them with what the American Press Institute has described as “the information they need to make the best possible decisions about their daily lives.”

Americans need to understand that journalism is a vital public good—one that, like roads, bridges and schools, is worthy of taxpayer support. We are already seeing the disastrous effects of otherwise allowing news to disintegrate in the free market: namely, a steady supply of misinformation, often masquerading as legitimate news, and too many communities left without a quality source of local news. Former New York Times public editor Margaret Sullivan has called this a “crisis of American democracy.”

The terms “crisis” and “collapse” have become nearly ubiquitous in the past decade when describing the state of American journalism, which has been based on a for-profit commercial model since the rise of the “penny press” in the 1830s. Now that commercial model has collapsed amid the near disappearance of print advertising. Digital ads have not come close to closing the gap because Google and other platforms have “hoovered up everything,” as Emily Bell, founding director of the Tow Center for Digital Journalism at Columbia University, told the Nieman Journalism Lab in a 2018 interview. In June the newspaper chain Gannett sued Google’s parent company, alleging it has created an advertising monopoly that has devastated the news industry.

Other journalism models—including nonprofits such as MinnPost, collaborative efforts such as Broke in Philly and citizen journalism—have had some success in fulfilling what Lewis Friedland of the University of Wisconsin–Madison called “critical community information needs” in a chapter of the 2016 book The Communication Crisis in America, and How to Fix It. Friedland classified those needs as falling in eight areas: emergencies and risks, health and welfare, education, transportation, economic opportunities, the environment, civic information and political information. Nevertheless, these models have proven incapable of fully filling the void, as shown by the dearth of quality information during the early years of the COVID pandemic. Scholar Michelle Ferrier and others have worked to bring attention to how news deserts leave many rural and urban areas “impoverished by the lack of fresh, daily local news and information,” as Ferrier wrote in a 2018 article. A recent study also found evidence that U.S. judicial districts with lower newspaper circulation were likely to see fewer public corruption prosecutions.

A growing chorus of voices is now calling for government-funded journalism, a model that many in the profession have long seen as problematic…(More)”.

Wikipedia’s Moment of Truth


Article by Jon Gertner at the New York Times: “In early 2021, a Wikipedia editor peered into the future and saw what looked like a funnel cloud on the horizon: the rise of GPT-3, a precursor to the new chatbots from OpenAI. When this editor — a prolific Wikipedian who goes by the handle Barkeep49 on the site — gave the new technology a try, he could see that it was untrustworthy. The bot would readily mix fictional elements (a false name, a false academic citation) into otherwise factual and coherent answers. But he had no doubts about its potential. “I think A.I.’s day of writing a high-quality encyclopedia is coming sooner rather than later,” he wrote in “Death of Wikipedia,” an essay that he posted under his handle on Wikipedia itself. He speculated that a computerized model could, in time, displace his beloved website and its human editors, just as Wikipedia had supplanted the Encyclopaedia Britannica, which in 2012 announced it was discontinuing its print publication.

Recently, when I asked this editor — he asked me to withhold his name because Wikipedia editors can be the targets of abuse — if he still worried about his encyclopedia’s fate, he told me that the newer versions made him more convinced that ChatGPT was a threat. “It wouldn’t surprise me if things are fine for the next three years,” he said of Wikipedia, “and then, all of a sudden, in Year 4 or 5, things drop off a cliff.”…(More)”.

The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet


Book by Jeff Jarvis: “The age of print is a grand exception in history. For five centuries it fostered what some call print culture – a worldview shaped by the completeness, permanence, and authority of the printed word. As a technology, print at its birth was as disruptive as the digital migration of today. Now, as the internet ushers us past print culture, journalist Jeff Jarvis offers important lessons from the era we leave behind.

To understand our transition out of the Gutenberg Age, Jarvis first examines the transition into it. Tracking Western industrialized print to its origins, he explores its invention, spread, and evolution, as well as the bureaucracy and censorship that followed. He also reveals how print gave rise to the idea of the mass – mass media, mass market, mass culture, mass politics, and so on – that came to dominate the public sphere.

What can we glean from the captivating, profound, and challenging history of our devotion to print? Could it be that we are returning to a time before mass media, to a society built on conversation, and that we are relearning how to hold that conversation with ourselves? Brimming with broader implications for today’s debates over communication, authorship, and ownership, Jarvis’ exploration of print on a grand scale is also a complex, compelling history of technology and power…(More)”.

Shallowfakes


Essay by James R. Ostrowski: “…This dystopian fantasy, we are told, is what the average social media feed looks like today: a war zone of high-tech disinformation operations, vying for your attention, your support, your compliance. Journalist Joseph Bernstein, in his 2021 Harper’s piece “Bad News,” attributes this perception of social media to “Big Disinfo” — a cartel of think tanks, academic institutions, and prestige media outlets that spend their days spilling barrels of ink into op-eds about foreign powers’ newest disinformation tactics. The technology’s specific impact is always vague, yet somehow devastating. Democracy is dying, shot in the chest by artificial intelligence.

The problem with Big Disinfo isn’t that disinformation campaigns aren’t happening but that claims of mind-warping, AI-enabled propaganda go largely unscrutinized and often amount to mere speculation. There is little systematic public information about the scale at which foreign governments use deepfakes, bot armies, or generative text in influence ops. What little we know is gleaned through irregular investigations or leaked documents. In lieu of data, Big Disinfo squints into the fog, crying “Bigfoot!” at every oak tree.

Any machine learning researcher will admit that there is a critical disconnect between what’s possible in the lab and what’s happening in the field. Take deepfakes. When the technology was first developed, public discourse was saturated with proclamations that it would slacken society’s grip on reality. A 2019 New York Times op-ed, indicative of the general sentiment of this time, was titled “Deepfakes Are Coming. We Can No Longer Believe What We See.” That same week, Politico sounded the alarm in its article “‘Nightmarish’: Lawmakers brace for swarm of 2020 deepfakes.” A Forbes article asked us to imagine a deepfake video of President Trump announcing a nuclear weapons launch against North Korea. These stories, like others in the genre, gloss over questions of practicality…(More)”.

AI Is Tearing Wikipedia Apart


Article by Claire Woodcock: “As generative artificial intelligence continues to permeate all aspects of culture, the people who steward Wikipedia are divided on how best to proceed. 

During a recent community call, it became apparent that there is a community split over whether or not to use large language models to generate content. While some people expressed that tools like OpenAI’s ChatGPT could help with generating and summarizing articles, others remained wary.

The concern is that machine-generated content has to be balanced with a lot of human review and would overwhelm lesser-known wikis with bad content. While AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even citing sources and academic papers which don’t exist. This often results in text summaries which seem accurate, but on closer inspection are revealed to be completely fabricated.

“The risk for Wikipedia is people could be lowering the quality by throwing in stuff that they haven’t checked,” added Amy Bruckman, a Georgia Tech professor quoted in the article. “I don’t think there’s anything wrong with using it as a first draft, but every point has to be verified.”

The Wikimedia Foundation, the nonprofit organization behind the website, is looking into building tools to make it easier for volunteers to identify bot-generated content. Meanwhile, Wikipedia is working to draft a policy that lays out the limits to how volunteers can use large language models to create content.

The current draft policy notes that anyone unfamiliar with the risks of large language models should avoid using them to create Wikipedia content, because it can open the Wikimedia Foundation up to libel suits and copyright violations—both of which the nonprofit gets protections from but the Wikipedia volunteers do not. These large language models also contain implicit biases, which often result in content skewed against marginalized and underrepresented groups of people.

The community is also divided on whether large language models should be allowed to train on Wikipedia content. While open access is a cornerstone of Wikipedia’s design principles, some worry the unrestricted scraping of internet data allows AI companies like OpenAI to exploit the open web to create closed commercial datasets for their models. This is especially a problem if the Wikipedia content itself is AI-generated, creating a feedback loop of potentially biased information, if left unchecked…(More)”.

DMA: rules for digital gatekeepers to ensure open markets start to apply


Press Release: “The EU Digital Markets Act (DMA) applies from today. Now that the DMA applies, potential gatekeepers that meet the quantitative thresholds established have until 3 July to notify their core platform services to the Commission…

The DMA aims to ensure contestable and fair markets in the digital sector. It defines gatekeepers as those large online platforms that provide an important gateway between business users and consumers, whose position can grant them the power to act as a private rule maker, and thus create a bottleneck in the digital economy. To address these issues, the DMA defines a series of specific obligations that gatekeepers will need to respect, including prohibiting them from engaging in certain behaviours in a list of do’s and don’ts. More information is available in the dedicated Q&A…(More)”.
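For readers curious what those quantitative thresholds are, Article 3(2) of the DMA presumes gatekeeper status above certain size and reach figures. The sketch below encodes them in deliberately simplified Python (it omits the multi-year durability requirements and a company’s right to rebut the presumption), so treat it as a reading aid rather than legal logic.

```python
def presumed_gatekeeper(c: dict) -> bool:
    """Simplified check of the DMA Article 3(2) quantitative thresholds.
    Omits the multi-year durability tests and the rebuttal procedure."""
    # Significant impact on the internal market: EUR 7.5bn annual EU turnover
    # or EUR 75bn market capitalisation, with a core platform service
    # offered in at least three Member States.
    significant_size = ((c["eu_turnover_eur"] >= 7.5e9
                         or c["market_cap_eur"] >= 75e9)
                        and c["member_states_served"] >= 3)
    # Important gateway between businesses and consumers: at least 45 million
    # monthly active EU end users and 10,000 yearly active EU business users.
    important_gateway = (c["monthly_active_end_users_eu"] >= 45_000_000
                         and c["yearly_active_business_users_eu"] >= 10_000)
    return significant_size and important_gateway
```

A company clearing these numbers must notify the Commission of its core platform services (here, by 3 July), after which the Commission can formally designate it as a gatekeeper.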