Fake explosions, fake missiles, fake troops: AI videos and images of Iran war spread widely on social media
By Daniel Dale, CNN
(CNN) — After Russia invaded Ukraine in 2022, social media was littered with crude fakes that were presented as fresh images of the war but were either photoshopped phonies or mislabeled clips taken from video games, movies, past incidents and unrelated news coverage.
Those kinds of old-fashioned fakes are now spreading again during the Iran war. This time, they have been joined by a form of deception that wasn’t readily available in 2022: high-quality videos and still images that have been custom-created with easy-to-use artificial intelligence tools.
Ten years ago, said Hany Farid, a University of California, Berkeley, professor specializing in digital forensics, “there’d be like one or two fake things out there; they’d get debunked pretty fast. … Now you see hundreds of them, and they’re really realistic.” Farid added: “It’s not just realistic, it’s landing — it’s landing hard. People believe it and they’re amplifying it.”
“What has changed in the last year or so is that generative AI has become much more widely accessible,” said BBC Verify senior journalist Shayan Sardarizadeh, a prominent debunker of war-related fakes, “and it’s now possible to create very believable videos and images appearing to show a significant war incident that is hard to detect to the untrained or naked eye.”
Fake videos and images that experts like Sardarizadeh have identified as AI-created have racked up tens of millions of views on social media platforms in the nearly two weeks since the Iran war began.
One fake video shows a fictional barrage of Iranian missiles supposedly striking Tel Aviv, Israel. A second fake video depicts panicked people fleeing a supposed Iranian attack on an airport in Tel Aviv. A third fake video purports to show captured US special forces personnel being held at gunpoint by Iranian troops.
Another fake video claims to show clips from security camera footage of Iranian military facilities being blown up; three of the clips appear to be AI, while one is real but from last year. Yet another fake video depicts an imaginary convoy of US troops on the ground in Iran. One more fake looks like footage of a downed US plane being paraded through Tehran.
Phony still images that appear AI-created, meanwhile, claim to depict a US military base in Iraq and the US Embassy in Saudi Arabia burning after Iranian attacks; Iranian Supreme Leader Ali Khamenei lying dead under rubble; and Iranians mourning dead civilians. A publication linked to the Iranian government even posted a fake satellite image purporting to show damage to a US military base in Bahrain.
And that’s just a tiny sample of the Iran-related fakes in circulation.
Better fakes, less moderation
Despite daily debunking efforts from people like Sardarizadeh, new fakes are popping up far faster than they can be swatted down. They’re often lifelike enough that the average person scrolling through their feed can’t quickly spot that they’re phony.
Several fakes that have spread widely have been pushed as propaganda by pro-Iran social media accounts. The motivation behind many others, though, is hard to identify — perhaps social media views and the influence and money they can sometimes lead to, perhaps simply because the tools make such fakes easy to create.
The increasingly sophisticated trickery is being tossed into a difficult environment for the truth. Partisan polarization, media fragmentation and the rise of social media algorithms mean that many Americans tend to primarily see material shared by like-minded people. And Farid noted that social media companies have turned away from aggressive moderation of the content on their platforms.
“The content is more realistic, the volume is higher, the penetration is deeper — this is our new reality. And it’s really messy,” Farid said.
Social media platform X did announce last week that it was taking some action to combat wartime AI fakes. Head of product Nikita Bier posted that if users who get paid by X as content “creators” spread AI-generated videos of armed conflicts without disclosing the videos were made with AI, they will be suspended from the payment program for 90 days and then permanently suspended if they commit additional violations.
Even if this policy is strictly enforced — Farid said he is skeptical — the overwhelming majority of X users are not part of the creator payment program. (Posts from other users are still subject to crowdsourced “community notes” fact-checking, but that has a spotty track record.) Social media companies TikTok and Meta, which owns Facebook and Instagram, did not respond to CNN requests for comment on the spread of fakes related to the war.
And Sardarizadeh has noted for months that X’s own AI chatbot, Grok, has actively made the problem worse in some cases — wrongly telling users seeking fact checks that numerous AI-created images and videos, including some depicting the Iran war, are real.
How to avoid being duped
In fairness, it’s hard these days to discern real from fake. Farid said the rapid improvement in the quality of AI creations means that tips from even months ago on how to spot AI fakery are not useful today. For example, it used to be helpful to check whether a person in an image had extra fingers or misplaced limbs; the humans represented in current AI content tend to be free of those types of comical errors.
Farid said the best way to remain accurately informed is to make a choice to get your news from credible journalistic outlets instead of scrolling through posts from “random accounts” on social media. “In moments of global conflict,” he said, “this is not a place to get information.”
For those of us who can’t avoid frequent scrolling, it’s wise to take a beat and do even a few seconds of online searching before believing or sharing a sensational wartime video or image.
Does anything seem off about it — audio out of sync with video, visual features that don’t match the real world? AI is getting better and better, but it’s still imperfect. (And some AI creations still have watermarks identifying the software that made them.)
Has a well-known debunker like Sardarizadeh, a fact-checking media outlet or a subject-matter expert addressed the veracity of the video or image? (If it’s fake, some professional has often pointed that out before it reaches your feed.)
Are people expressing skepticism in the replies to a post or in X’s community notes? (Average users can deceive, but they can also ask good questions.)
And what do free AI-detection tools say? (They’re far from perfect, but they too can sometimes help.)
Sardarizadeh said we should be “training our eyes” to recognize AI material as best as possible. But he also said, “It is becoming extremely difficult to detect AI-generated content, and the trajectory appears to be heading in the direction of it becoming even more difficult soon.”
The-CNN-Wire
™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.