The Recommendation on Information Integrity


OECD Recommendation: “…The digital transformation of societies has reshaped how people interact and engage with information. Advancements in digital technologies and novel forms of communication have changed the way information is produced, shared, and consumed, locally and globally and across all media. Technological changes and the critical importance of online information platforms offer unprecedented access to information, foster citizen engagement and connection, and allow for innovative news reporting. However, they can also provide a fertile ground for the rapid spread of false, altered, or misleading content. In addition, new generative AI tools have greatly reduced the barriers to creating and spreading content.

Promoting the availability and free flow of high-quality, evidence-based information is key to upholding individuals’ ability to seek and receive information and ideas of all kinds and to safeguarding freedom of opinion and expression. 

The volume of content to which citizens are exposed can obscure and saturate public debates and help widen societal divisions. In this context, the quality of civic discourse declines as evidence-based information, which helps people make sense of their social environment, becomes harder to find. This reality has acted as a catalyst for governments to explore more closely the roles they can play, keeping as a priority in our democracies the necessity that governments should not exercise control of the information ecosystem and that, on the contrary, they support an environment where a plurality of information sources, views, and opinions can thrive…Building on the detailed policy framework outlined in the OECD report Facts not Fakes: Tackling Disinformation, Strengthening Information Integrity, the Recommendation provides an ambitious and actionable international standard that will help governments develop a systemic approach to foster information integrity, relying on a multi-stakeholder approach…(More)”.

Synthetic Data, Synthetic Media, and Surveillance


Paper by Aaron Martin and Bryce Newell: “Public and scholarly interest in the related concepts of synthetic data and synthetic media has exploded in recent years. From issues raised by the generation of synthetic datasets to train machine learning models to the public-facing, consumer availability of artificial intelligence (AI) powered image manipulation and creation apps and the associated increase in synthetic (or “deepfake”) media, these technologies have shifted from being niche curiosities of the computer science community to become topics of significant public, corporate, and regulatory import. They are emblematic of a “data-generation revolution” (Gal and Lynskey 2024: 1091) that is already raising pressing questions for the academic surveillance studies community. Within surveillance studies scholarship, Fussey (2022: 348) has argued that synthetic media is one of several “issues of urgent societal and planetary concern” and that it has “arguably never been more important” for surveillance studies “researchers to understand these dynamics and complex processes, evidence their implications, and translate esoteric knowledge to produce meaningful analysis.” Yet, while fields adjacent to surveillance studies have begun to explore the ethical risks of synthetic data, we currently perceive a lack of attention to the surveillance implications of synthetic data and synthetic media in published literature within our field. In response, this Dialogue is designed to help promote thinking and discussion about the links and disconnections between synthetic data, synthetic media, and surveillance…(More)”.

How The New York Times incorporates editorial judgment in algorithms to curate its home page


Article by Zhen Yang: “Whether on the web or the app, the home page of The New York Times is a crucial gateway, setting the stage for readers’ experiences and guiding them to the most important news of the day. The Times publishes over 250 stories daily, far more than the 50 to 60 stories that can be featured on the home page at a given time. Traditionally, editors have manually selected and programmed which stories appear, when and where, multiple times daily. This manual process presents challenges:

  • How can we provide readers a relevant, useful, and fresh experience each time they visit the home page?
  • How can we make our editorial curation process more efficient and scalable?
  • How do we maximize the reach of each story and expose more stories to our readers?

To address these challenges, the Times has been actively developing and testing editorially driven algorithms to assist in curating home page content. These algorithms are editorially driven in that a human editor’s judgment or input is incorporated into every aspect of the algorithm — including deciding where on the home page the stories are placed, informing the rankings, and potentially influencing and overriding algorithmic outputs when necessary. From the get-go, we’ve designed algorithmic programming to elevate human curation, not to replace it…
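
To make that pattern concrete, here is a minimal sketch of how editorial judgment can wrap an algorithmic ranker: editors pin stories to fixed slots, apply boosts to a model's scores, and pull stories entirely. This is an illustration of the general approach, not the Times' actual system; the field names (model_score, editorial_boost) and the pinning interface are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Story:
    slug: str
    model_score: float            # relevance score from a ranking model
    editorial_boost: float = 1.0  # editor-set multiplier on that score

def curate(stories, slots, pinned=None, blocked=None):
    """Fill home-page slots: honor editor pins first, drop anything an
    editor has blocked, then rank the rest by editorially weighted score."""
    pinned = pinned or {}        # slot index -> slug chosen by an editor
    blocked = blocked or set()   # slugs an editor has pulled entirely
    by_slug = {s.slug: s for s in stories}

    page = [None] * slots
    for slot, slug in pinned.items():
        page[slot] = by_slug[slug]   # editorial override wins the slot

    placed = {s.slug for s in page if s is not None}
    remaining = sorted(
        (s for s in stories if s.slug not in blocked and s.slug not in placed),
        key=lambda s: s.model_score * s.editorial_boost,
        reverse=True,
    )
    for i in range(slots):
        if page[i] is None and remaining:
            page[i] = remaining.pop(0)
    return page

# Example: an editor pins the lead story and boosts a lifestyle feature
# above a story the model scored higher.
stories = [
    Story("election-live", model_score=0.92),
    Story("recipe-week", model_score=0.55, editorial_boost=1.5),
    Story("markets", model_score=0.70),
]
print([s.slug for s in curate(stories, slots=3, pinned={0: "election-live"})])
# -> ['election-live', 'recipe-week', 'markets']
```

The point of the design, as the article describes it, is that the human decision sits at every layer: the pin dictates placement outright, while the boost lets an editor shape the ranking without discarding the model's signal.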

The Times began using algorithms for content recommendations in 2011 but only recently started applying them to home page modules. For years, we had only one algorithmically powered module, “Smarter Living,” on the home page, and later, “Popular in The Times.” Both were positioned relatively low on the page.

Three years ago, the formation of a cross-functional team — including newsroom editors, product managers, data scientists, data analysts, and engineers — brought the momentum needed to advance our responsible use of algorithms. Today, nearly half of the home page is programmed with assistance from algorithms that help promote news, features, and sub-brand content, such as The Athletic and Wirecutter. Some of these modules, such as the features module located at the top right of the home page on the web version, are in highly visible locations. During major news moments, editors can also deploy algorithmic modules to display additional coverage to complement a main module of stories near the top of the page. (The topmost news package of Figure 1 is an example of this in action.)…(More)”.

United Nations Global Principles for Information Integrity


United Nations: “Technological advances have revolutionized communications, connecting people on a previously unthinkable scale. They have supported communities in times of crisis, elevated marginalized voices and helped mobilize global movements for racial justice and gender equality.

Yet these same advances have enabled the spread of misinformation, disinformation and hate speech at an unprecedented volume, velocity and virality, risking the integrity of the information ecosystem.

New and escalating risks stemming from leaps in AI technologies have made strengthening information integrity one of the urgent tasks of our time.

This clear and present global threat demands coordinated international action.

The United Nations Global Principles for Information Integrity show us another future is possible…(More)”.

Invisible Rulers: The People Who Turn Lies into Reality


Book by Renée DiResta: “…investigation into the way power and influence have been profoundly transformed reveals how a virtual rumor mill of niche propagandists increasingly shapes public opinion. While propagandists position themselves as trustworthy Davids, their reach, influence, and economics make them classic Goliaths—invisible rulers who create bespoke realities to revolutionize politics, culture, and society. Their work is driven by a simple maxim: if you make it trend, you make it true.
 
By revealing the machinery and dynamics of the interplay between influencers, algorithms, and online crowds, DiResta vividly illustrates the way propagandists deliberately undermine belief in the fundamental legitimacy of institutions that make society work. This alternate system for shaping public opinion, unexamined until now, is rewriting the relationship between the people and their government in profound ways. It has become a force so shockingly effective that its destructive power seems limitless. Scientific proof is powerless in front of it. Democratic validity is bulldozed by it. Leaders are humiliated by it. But they need not be.
 
With its deep insight into the power of propagandists to drive online crowds into battle—while bearing no responsibility for the consequences—Invisible Rulers not only predicts those consequences but offers ways for leaders to rapidly adapt and fight back…(More)”.

More Questions Than Flags: Reality Check on DSA’s Trusted Flaggers


Article by Ramsha Jahangir, Elodie Vialle and Dylan Moses: “It’s been 100 days since the Digital Services Act (DSA) came into effect, and many of us are still wondering how the Trusted Flagger mechanism is taking shape, particularly for civil society organizations (CSOs) that could be potential applicants.

With an emphasis on accountability and transparency, the DSA requires national coordinators to appoint Trusted Flaggers, who are designated entities whose requests to flag illegal content must be prioritized. “Notices submitted by Trusted Flaggers acting within their designated area of expertise . . . are given priority and are processed and decided upon without undue delay,” according to the DSA. Trusted Flaggers can include non-governmental organizations, industry associations, private or semi-public bodies, and law enforcement agencies. For instance, a private company that focuses on finding CSAM or terrorist-type content, or tracking groups that traffic in that content, could be eligible for Trusted Flagger status under the DSA. To be appointed, entities need to meet certain criteria, including being independent, accurate, and objective.
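
In code terms, the priority rule amounts to a two-tier queue: notices from designated Trusted Flaggers jump ahead of ordinary user reports. The toy sketch below illustrates that logic only; the class and method names are invented for the example and do not reflect any platform's real moderation system.

```python
import heapq
from itertools import count

class NoticeQueue:
    """Toy moderation queue illustrating the DSA's priority requirement:
    Trusted Flagger notices are reviewed before ordinary user reports."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # preserves FIFO order within the same tier

    def submit(self, notice_id, from_trusted_flagger=False):
        tier = 0 if from_trusted_flagger else 1  # lower tier = higher priority
        heapq.heappush(self._heap, (tier, next(self._seq), notice_id))

    def next_notice(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = NoticeQueue()
q.submit("user-report-17")
q.submit("tf-notice-3", from_trusted_flagger=True)
print(q.next_notice())  # "tf-notice-3" is reviewed first, despite arriving later
```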

Trusted escalation channels are a key mechanism for civil society organizations (CSOs) supporting vulnerable users, such as human rights defenders and journalists targeted by online attacks on social media, particularly in electoral contexts. However, existing channels could be much more efficient. The DSA is a unique opportunity to redesign these mechanisms for reporting illegal or harmful content at scale. They need to be rethought for CSOs that hope to become Trusted Flaggers. Platforms often require, for instance, content to be translated into English and context to be understood by English-speaking audiences (due mainly to the fact that the key decision-makers are based in the US), which creates an added burden for CSOs that are resource-strapped. The lack of transparency in the reporting process can be distressing for the victims for whom those CSOs advocate. The lack of timely response can lead to dramatic consequences for human rights defenders and information integrity. Several CSOs we spoke with were not even aware of these escalation channels – and platforms are not incentivized to promote these mechanisms, given their inability to vet, prioritize, and resolve all potential issues sent to them….(More)”.

May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases


Book by Alex Edmans: “Our lives are minefields of misinformation. It ripples through our social media feeds, our daily headlines, and the pronouncements of politicians, executives, and authors. Stories, statistics, and studies are everywhere, allowing people to find evidence to support whatever position they want. Many of these sources are flawed, yet by playing on our emotions and preying on our biases, they can gain widespread acceptance, warp our views, and distort our decisions.

In this eye-opening book, renowned economist Alex Edmans teaches us how to separate fact from fiction. Using colorful examples—from a wellness guru’s tragic but fabricated backstory to the blunders that led to the Deepwater Horizon disaster to the diet that ensnared millions yet hastened its founder’s death—Edmans highlights the biases that cause us to mistake statements for facts, facts for data, data for evidence, and evidence for proof.

Armed with the knowledge of what to guard against, he then provides a practical guide to combat this tide of misinformation. Going beyond simply checking the facts and explaining individual statistics, Edmans explores the relationships between statistics—the science of cause and effect—ultimately training us to think smarter, sharper, and more critically. May Contain Lies is an essential read for anyone who wants to make better sense of the world and better decisions…(More)”.

The Poisoning of the American Mind


Book by Lawrence M. Eppard: “Humans are hard-wired to look for information that they agree with (regardless of the information’s veracity), avoid information that makes them uncomfortable (even if that information is true), and interpret information in a manner that is most favorable to their sense of self. The damage these cognitive tendencies cause to one’s perception of reality depends in part upon the information that a person surrounds himself/herself with. Unfortunately, in the U.S. today, both liberals and conservatives are regularly bombarded with misleading information as well as lies from people they believe to be trustworthy and authoritative sources. While there are several factors one could plausibly blame for this predicament, the decline in the quality of the sources of information that the right and left rely on over the last few decades plays a primary role. As a result of this decline, we are faced with an epistemic crisis that is poisoning the American mind and threatening our democracy. In his forthcoming book with Jacob L. Mackey, The Poisoning of the American Mind, Lawrence M. Eppard explores epistemic problems in both the right-wing and left-wing ideological silos in the U.S., including ideology presented as fact, misinformation, disinformation, and malinformation…(More)”.

First EU rulebook to protect media independence and pluralism enters into force


Press Release: “Today, the European Media Freedom Act, a new set of unprecedented rules to protect media independence and pluralism, enters into force.

This new legislation provides safeguards against political interference in editorial decisions and against surveillance of journalists. The Act guarantees that media can operate more easily in the internal market and online. Additionally, the regulation also aims to secure the independence and stable funding of public service media, as well as the transparency of both media ownership and allocation of state advertising.

Vice-President for Values and Transparency, Věra Jourová, said:

“For the first time ever, the EU has a law to protect media freedom. The EU recognises that journalists play an essential role for democracy and should be protected. I call on Member States to implement the new rules as soon as possible.”

Commissioner for Internal Market, Thierry Breton, added:

“Media companies play a vital role in our democracies but are confronted with falling revenues, threats to media freedom and pluralism and a patchwork of different national rules. Thanks to the European Media Freedom Act, media companies will enjoy common safeguards at EU level to guarantee a plurality of voices and be able to better benefit from the opportunities of operating in our single market without any interference, be it private or public.”

Proposed by the Commission in September 2022, this Regulation puts in place several protections for the right to media plurality, which will become applicable within six months. More details on the timeline for its application are available in this infographic…(More)”.

Debugging Tech Journalism


Essay by Timothy B. Lee: “A huge proportion of tech journalism is characterized by scandals, sensationalism, and shoddy research. Can we fix it?

In November, a few days after Sam Altman was fired — and then rehired — as CEO of OpenAI, Reuters reported on a letter that may have played a role in Altman’s ouster. Several staffers reportedly wrote to the board of directors warning about “a powerful artificial intelligence discovery that they said could threaten humanity.”

The discovery: an AI system called Q* that can solve grade-school math problems.

“Researchers consider math to be a frontier of generative AI development,” the Reuters journalists wrote. Large language models are “good at writing and language translation,” but “conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence.”

This was a bit of a head-scratcher. Computers have been able to perform arithmetic at superhuman levels for decades. The Q* project was reportedly focused on word problems, which have historically been harder than arithmetic for computers to solve. Still, it’s not obvious that solving them would unlock human-level intelligence.

The Reuters article left readers with a vague impression that Q* could be a huge breakthrough in AI — one that might even “threaten humanity.” But it didn’t provide readers with the context to understand what Q* actually was — or to evaluate whether feverish speculation about it was justified.

For example, the Reuters article didn’t mention research OpenAI published last May describing a technique for solving math problems by breaking them down into small steps. In a December article, I dug into this and other recent research to help illuminate what OpenAI is likely working on: a framework that would enable AI systems to search through a large space of possible solutions to a problem…(More)”.
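
To give a rough sense of what "breaking problems into small steps" and searching "a large space of possible solutions" can mean in practice, here is a toy best-first search: it extends partial solutions one step at a time and uses a scoring function (a stand-in for a learned step verifier) to decide which partial solution to explore next. This is an illustrative sketch under those assumptions, not OpenAI's actual method.

```python
import heapq

def best_first_search(start, expand, score, is_solution, budget=1000):
    """Toy best-first search over partial solutions.

    expand(state) -> candidate successor states (one reasoning step each)
    score(state)  -> higher is more promising (stand-in for a learned verifier)
    is_solution(state) -> True when the state solves the problem
    """
    frontier = [(-score(start), 0, start)]
    counter = 1  # tie-breaker so heapq never has to compare states directly
    for _ in range(budget):
        if not frontier:
            return None
        _, _, state = heapq.heappop(frontier)
        if is_solution(state):
            return state
        for nxt in expand(state):
            heapq.heappush(frontier, (-score(nxt), counter, nxt))
            counter += 1
    return None

# Example: reach 24 from 3 by repeatedly doubling or adding 1. A state is
# (current value, list of steps taken so far).
result = best_first_search(
    start=(3, []),
    expand=lambda s: [(s[0] * 2, s[1] + ["x2"]), (s[0] + 1, s[1] + ["+1"])],
    score=lambda s: -abs(24 - s[0]),  # closer to the target looks better
    is_solution=lambda s: s[0] == 24,
)
print(result)  # (24, ['x2', 'x2', 'x2'])
```

The hard part in the real setting is the scoring function: guiding a search through candidate reasoning steps is only as good as the verifier that ranks them, which is why that research focus, rather than arithmetic itself, was the newsworthy detail.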