How to Run a Public Records Audit with a Team of Students


Article by Lam Thuy Vo: “…The Markup (like many other organizations) uses public records requests as an important investigative tool, and we’ve published tips for fellow journalists on how best to craft their requests for specific investigations. But public records laws vary depending on where government institutions are located. Generally, government institutions are required to release documents to anyone who requests them, except when information falls under a specific exemption, such as information that would invade an individual’s privacy or reveal trade secrets. Federal institutions are governed by the Freedom of Information Act (FOIA), but state and local government agencies have their own freedom of information laws, and they aren’t all identical.

Public records audits take a step back. By sending the same freedom of information (FOI) request to agencies around the country, audits can help journalists, researchers, and everyday people track which agencies will release records and which may not, and whether they’re complying with state laws. According to the National Freedom of Information Coalition, “audits have led to legislative reforms and the establishment of ombudsman positions to represent the public’s interests.”

The basics of auditing are simple: send the same FOI request to different government agencies, document how you followed up, and document the outcome. Here’s how we coordinated this process with student reporters…(More)”.
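One practical aid for the documentation step (our own illustrative sketch, not from the article; every name and field below is hypothetical) is to log each request as a structured record, so identical requests can be compared across agencies:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FOIRequest:
    """One row per agency, so identical requests can be compared in the audit."""
    agency: str
    state: str
    date_sent: date
    follow_ups: list[date] = field(default_factory=list)
    outcome: str = "pending"  # e.g. "released", "partial", "denied", "no response"

# The same request, logged for two hypothetical agencies:
audit_log = [
    FOIRequest("Example Police Department", "IL", date(2024, 1, 8)),
    FOIRequest("Example City Clerk", "KY", date(2024, 1, 8)),
]

# A simple compliance summary across the audit:
released = sum(1 for r in audit_log if r.outcome == "released")
print(f"{released}/{len(audit_log)} agencies released records")
```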

Algorithmic attention rents: A theory of digital platform market power


Paper by Tim O’Reilly, Ilan Strauss and Mariana Mazzucato: “We outline a theory of algorithmic attention rents in digital aggregator platforms. We explore how, as platforms grow, they become increasingly capable of extracting rents from a variety of actors in their ecosystems—users, suppliers, and advertisers—through their algorithmic control over user attention. We focus our analysis on advertising business models, in which attention harvested from users is monetized by reselling it to suppliers or other advertisers, though we believe the theory has relevance to other online business models as well. We argue that regulations should mandate the disclosure of the operating metrics that platforms use to allocate user attention and shape the “free” side of their marketplace, as well as details on how that attention is monetized…(More)”.

All the News That’s Fit to Click: How Metrics Are Transforming the Work of Journalists


Book by Caitlin Petre: “Journalists today are inundated with data about which stories attract the most clicks, likes, comments, and shares. These metrics influence what stories are written, how news is promoted, and even which journalists get hired and fired. Do metrics make journalists more accountable to the public? Or are these data tools the contemporary equivalent of a stopwatch wielded by a factory boss, worsening newsroom working conditions and journalism quality? In All the News That’s Fit to Click, Caitlin Petre takes readers behind the scenes at the New York Times, Gawker, and the prominent news analytics company Chartbeat to explore how performance metrics are transforming the work of journalism.

Petre describes how digital metrics are a powerful but insidious new form of managerial surveillance and discipline. Real-time analytics tools are designed to win the trust and loyalty of wary journalists by mimicking key features of addictive games, including immersive displays, instant feedback, and constantly updated “scores” and rankings. Many journalists get hooked on metrics—and pressure themselves to work ever harder to boost their numbers.

Yet this is not a simple story of managerial domination. Contrary to the typical perception of metrics as inevitably disempowering, Petre shows how some journalists leverage metrics to their advantage, using them to advocate for their professional worth and autonomy…(More)”.

Mark the good stuff: Content provenance and the fight against disinformation


BBC Blog: “BBC News’s Verify team is a dedicated group of 60 journalists who fact-check, verify video, counter disinformation, analyse data and – crucially – explain complex stories in the pursuit of truth. On Monday, March 4th, Verify published their first article using a new open media provenance technology called C2PA. The C2PA standard records digitally signed information about the provenance of imagery, video and audio – information (or signals) that shows where a piece of media has come from and how it’s been edited. Like an audit trail or a history, these signals are called ‘content credentials’.

Content credentials can be used to help audiences distinguish between authentic, trustworthy media and content that has been faked. The digital signature attached to the provenance information ensures that when the media is “validated”, the person or computer reading the image can be sure that it came from the BBC (or any other source with its own X.509 certificate).
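Conceptually, that validation step looks something like the sketch below, which uses Python’s `cryptography` package to check a signature against a publisher’s X.509 certificate. This is an illustration of the general idea only, not the actual C2PA implementation: real content credentials are embedded manifests that are read and validated with dedicated C2PA tooling.

```python
# Conceptual sketch only: checks that provenance metadata was signed by the key
# in a publisher's X.509 certificate. Real C2PA validation parses an embedded
# manifest and verifies a full certificate chain.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def provenance_is_valid(manifest: bytes, signature: bytes, cert_pem: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(cert_pem)
    try:
        # Assumes an RSA signing key with PKCS#1 v1.5 padding; the C2PA
        # specification also allows other algorithms, such as ECDSA and Ed25519.
        cert.public_key().verify(signature, manifest, padding.PKCS1v15(), hashes.SHA256())
        return True   # the manifest really was signed by the certificate holder
    except InvalidSignature:
        return False  # tampered or forged: the provenance cannot be trusted
```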

This is important for two reasons. First, it gives publishers like the BBC the ability to share transparently with our audiences what we do every day to deliver great journalism. Second, it allows us to mark content that is shared across third-party platforms (like Facebook) so audiences can trust that when they see a piece of BBC content it does in fact come from the BBC.

For the past three years, BBC R&D has been an active partner in the development of the C2PA standard. It has been developed in collaboration with major media and technology partners, including Microsoft, the New York Times and Adobe. Membership in C2PA is growing to include organisations from all over the world, from established hardware manufacturers like Canon, to technology leaders like OpenAI, fellow media organisations like NHK, and even the Publicis Group covering the advertising industry. Google has now joined the C2PA steering committee and social media companies are leaning in too: Meta has recently announced that it is actively assessing how to implement C2PA across its platforms…(More)”.

Avoiding the News


Book by Benjamin Toff, Ruth Palmer, and Rasmus Kleis Nielsen: “A small but growing number of people in many countries consistently avoid the news. They feel they do not have time for it, believe it is not worth the effort, find it irrelevant or emotionally draining, or do not trust the media, among other reasons. Why and how do people circumvent news? Which groups are more and less reluctant to follow the news? In what ways is news avoidance a problem—for individuals, for the news industry, for society—and how can it be addressed?

This groundbreaking book explains why and how so many people consume little or no news despite unprecedented abundance and ease of access. Drawing on interviews in Spain, the United Kingdom, and the United States as well as extensive survey data, Avoiding the News examines how people who tune out traditional media get information and explores their “folk theories” about how news organizations work. The authors argue that news avoidance is about not only content but also identity, ideologies, and infrastructures: who people are, what they believe, and how news does or does not fit into their everyday lives. Because news avoidance is most common among disadvantaged groups, it threatens to exacerbate existing inequalities by tilting mainstream journalism even further toward privileged audiences. Ultimately, this book shows, persuading news-averse audiences of the value of journalism is not simply a matter of adjusting coverage but requires a deeper, more empathetic understanding of people’s relationships with news across social, political, and technological boundaries…(More)”.

Forget technology — politicians pose the gravest misinformation threat


Article by Rasmus Nielsen: “This is set to be a big election year, including in India, Mexico, the US, and probably the UK. People will rightly be on their guard for misinformation, but much of the policy discussion on the topic ignores the most important source: members of the political elite.

As a social scientist working on political communication, I have spent years in these debates — which continue to be remarkably disconnected from what we know from research. Academic findings repeatedly underline the actual impact of politics, while policy documents focus persistently on the possible impact of new technologies.

Most recently, Britain’s National Cyber Security Centre (NCSC) has warned of how “AI-created hyper-realistic bots will make the spread of disinformation easier and the manipulation of media for use in deepfake campaigns will likely become more advanced”. This is similar to warnings from many other public authorities, which ignore the misinformation coming from the most senior levels of domestic politics. In the US, the Washington Post stopped counting after documenting at least 30,573 false or misleading claims made by Donald Trump as president. In the UK, the non-profit Full Fact has reported that as many as 50 MPs — including two prime ministers, cabinet ministers and shadow cabinet ministers — failed to correct false, unevidenced or misleading claims in 2022 alone, despite repeated calls to do so.

These are actual problems of misinformation, and the phenomenon is not new. Both George W. Bush’s and Barack Obama’s administrations obfuscated on Afghanistan. Bush’s government and that of his UK counterpart Tony Blair advanced false and misleading claims in the run-up to the Iraq war. Prominent politicians have, over the years, denied the reality of human-induced climate change, proposed quack remedies for Covid-19, and so much more. These are examples of misinformation and, at their most egregious, of disinformation — defined as spreading false or misleading information for political advantage or profit.

This basic point is strikingly absent from many policy documents — the NCSC report, for example, has nothing to say about domestic politics. It is not alone. Take the US Surgeon General’s 2021 advisory on confronting health misinformation, which calls for a “whole-of-society” approach — and yet contains nothing on politicians and curiously omits the many misleading claims made by the sitting president during the pandemic, including touting hydroxychloroquine as a potential treatment…(More)”.

AI and Democracy’s Digital Identity Crisis


Essay by Shrey Jain, Connor Spelliscy, Samuel Vance-Law and Scott Moore: “AI-enabled tools have become sophisticated enough to allow a small number of individuals to run disinformation campaigns of an unprecedented scale. Privacy-preserving identity attestations can drastically reduce instances of impersonation and make disinformation easy to identify and potentially hinder. By understanding how identity attestations are positioned across the spectrum of decentralization, we can gain a better understanding of the costs and benefits of various attestations. In this paper, we discuss attestation types, including governmental, biometric, federated, and web of trust-based, and include examples such as e-Estonia, China’s social credit system, Worldcoin, OAuth, X (formerly Twitter), Gitcoin Passport, and EAS. We believe that the most resilient systems create an identity that evolves and is connected to a network of similarly evolving identities that verify one another. In this type of system, each entity contributes its respective credibility to the attestation process, creating a larger, more comprehensive set of attestations. We believe these systems could be the best approach to authenticating identity and protecting against some of the threats to democracy that AI can pose in the hands of malicious actors. However, governments will likely attempt to mitigate these risks by implementing centralized identity authentication systems; these centralized systems could themselves pose risks to the democratic processes they are built to defend. We therefore recommend that policymakers support the development of standards-setting organizations for identity, provide legal clarity for builders of decentralized tooling, and fund research critical to effective identity authentication systems…(More)”
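To make the “network of identities that verify one another” idea concrete, here is a minimal illustrative sketch (ours, not drawn from the essay) in which each identity’s credibility is derived, PageRank-style, from the credibility of the identities attesting to it:

```python
# Illustrative web-of-trust sketch: credibility flows along attestations, so an
# identity vouched for by credible identities becomes more credible itself.
attestations = {  # attester -> identities it vouches for (hypothetical data)
    "alice": ["bob", "carol"],
    "bob":   ["carol"],
    "carol": ["alice"],
}

def credibility(rounds: int = 20) -> dict[str, float]:
    identities = set(attestations) | {i for v in attestations.values() for i in v}
    score = {i: 1.0 for i in identities}
    for _ in range(rounds):
        new = {i: 0.15 for i in identities}  # small base score (PageRank-style damping)
        for attester, vouched in attestations.items():
            for identity in vouched:
                # Each attester passes a share of its own credibility onward.
                new[identity] += 0.85 * score[attester] / len(vouched)
        score = new
    return score

print(credibility())  # "carol" scores highest: two identities vouch for her
```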

Networked Press Freedom


Book by Mike Ananny: “…offers a new way to think about freedom of the press in a time when media systems are in fundamental flux. Ananny challenges the idea that press freedom comes only from heroic, lone journalists who speak truth to power. Instead, drawing on journalism studies, institutional sociology, political theory, science and technology studies, and an analysis of ten years of journalism discourse about news and technology, he argues that press freedom emerges from social, technological, institutional, and normative forces that vie for power and fight for visions of democratic life. He shows how dominant, historical ideals of professionalized press freedom often mistook journalistic freedom from constraints for the public’s freedom to encounter the rich mix of people and ideas that self-governance requires. Ananny’s notion of press freedom ensures not only an individual right to speak, but also a public right to hear.

Seeing press freedom as essential for democratic self-governance, Ananny explores what publics need, what kind of free press they should demand, and how today’s press freedom emerges from intertwined collections of humans and machines. If someone says, “The public needs a free press,” Ananny urges us to ask in response, “What kind of public, what kind of freedom, and what kind of press?” Answering these questions shows what robust, self-governing publics need to demand of technologists and journalists alike…(More)”.

Disinformation and Civic Tech Research


Code for All Playbook: “The Disinformation and Civic Tech Playbook is a tool for people who are interested in understanding how civic tech can help confront disinformation. This guide will help you successfully advocate for and implement disinformation-fighting tools, programs, and campaigns from partners around the world.

In order to effectively fight misinformation at a societal scale, three stages of work must be completed in sequential order:

  1. Monitor or research media environment (traditional, social, and/or messaging apps) for misinformation
  2. Verify and/or debunk
  3. Reach people with the truth and counter-message falsehoods

These stages ascend from least impactful to most impactful activity.

Researching misinformation in the media environment has no effect whatsoever on its own. Verifying and debunking falsehoods have limited utility unless stage three is also achieved: successfully reaching communities with true information in a way that gets through to them, and effectively counter-messaging the misinformation that spreads so easily.
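To make that dependency concrete, here is a minimal sketch (our own illustration, not from the playbook) of the three stages as a sequential pipeline, where stage three has nothing to work with unless the earlier stages run first:

```python
# Illustrative sketch: the playbook's three stages as a sequential pipeline.
def monitor(sources: dict[str, list[str]]) -> list[str]:
    """Stage 1: gather claims circulating in the media environment."""
    return [claim for claims in sources.values() for claim in claims]

def verify(claims: list[str], known_false: set[str]) -> list[str]:
    """Stage 2: flag the claims that fact-checkers have debunked."""
    return [claim for claim in claims if claim in known_false]

def counter_message(debunked: list[str]) -> None:
    """Stage 3: the stage that changes minds -- push corrections where the
    affected communities will actually see them."""
    for claim in debunked:
        print(f"Publishing correction for: {claim!r}")

# Stage 3 depends on the outputs of stages 1 and 2:
counter_message(verify(monitor({"social": ["claim A", "claim B"]}), {"claim B"}))
```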

Unfortunately, the distribution of misinformation management projects to date seems to be the exact inverse of these stages. There has been an enormous amount of work to passively monitor and research media environments for misinformation. There is also a large amount of energy and resources dedicated to verifying and debunking misinformation through traditional fact-checking approaches. Whether because it’s the hardest to solve or simply third in the sequence, relatively few misinformation management projects have made it to the final stage of genuinely getting through to people and experimenting with effective counter-messaging and counter-engagement (see The Sentinel Project interview for further discussion)…(More)”.

Experts: 90% of Online Content Will Be AI-Generated by 2026


Article by Maggie Harrison: “Don’t believe everything you see on the Internet” has been pretty standard advice for quite some time now. And according to a new report from European law enforcement group Europol, we have all the reason in the world to step up that vigilance.

“Experts estimate that as much as 90 percent of online content may be synthetically generated by 2026,” the report warned, adding that synthetic media “refers to media generated or manipulated using artificial intelligence.”

“In most cases, synthetic media is generated for gaming, to improve services or to improve the quality of life,” the report continued, “but the increase in synthetic media and improved technology has given rise to disinformation possibilities.”…

The report focused pretty heavily on disinformation, notably that driven by deepfake technology. But that 90 percent figure raises other questions, too — what do AI systems like Dall-E and GPT-3 mean for artists, writers, and other content-generating creators? And circling back to disinformation once more, what will the dissemination of information, not to mention the consumption of it, actually look like in an era driven by that degree of AI-generated digital stuff?…(More)”.