Article by Valerie Wirtschafter: “On February 28, 2026, a joint U.S.-Israeli military campaign struck Iranian nuclear facilities, military infrastructure, and leadership targets in what was officially dubbed Operation Epic Fury. Social media was quickly flooded with false footage of the conflict, including massive explosions in Tel Aviv, successful Iranian missile strikes on U.S. warships, and satellite imagery purporting to show damage to American military bases in the Gulf.
Some of this footage was recycled from unrelated conflicts, including the war in Ukraine, and even from video games. Yet some of it was entirely fabricated, created with now-ubiquitous generative artificial intelligence (AI) tools that can produce increasingly realistic content at scale. Several observers of the space emphasized the unprecedented volume of AI-generated content and its growing sophistication.
While much has been written about the potential for AI-generated imagery, videos, and audio to flood the information ecosystem and make it increasingly difficult to parse what is true, AI content has previously made up only a small portion of the misleading content circulating across the web. During 2024, which was deemed “the year of the elections,” AI-generated content—while present—did not derail electoral processes around the world. And in the early days of the Israel-Hamas war, AI content was again present, but it represented just a small fraction of the overall misleading claims and recycled imagery circulating online. Does the ongoing conflict in Iran truly represent a significant leap in AI-generated imagery? And if so, what might explain such a meaningful shift?…(More)”.