Following a report from Wired that X has been inundated with misinformation since the beginning of the Iran conflict, X announced that it’s revising its creator revenue share policies to prevent manipulation.
Nikita Bier, X’s head of product, announced in a March 3 post on X that the platform is specifically looking to address artificial intelligence-generated deepfakes related to the conflict in order to protect the integrity of the platform.
As per Bier: “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people. Starting now, users who post AI-generated videos of an armed conflict - without adding a disclosure that it was made with AI - will be suspended from Creator Revenue Sharing for 90 days.”
Bier said that further violations will result in a permanent suspension from the program.
“This will be flagged to us by any post with a Community Note or if the content contains metadata (or other signals) from generative AI tools,” Bier said.
Wired reported on Feb. 28 that hundreds of posts on X, some with millions of views, promoted misleading claims about the conflict.
As Wired explained: “In some cases, alleged video footage of the attack shared in posts on X are actually months or years old. In several posts, video footage of apparent attacks have been attributed to incorrect locations. A number of images shared on X appear to be altered or generated with AI. Other posts attempt to pass off video game footage as scenes from the conflict.”
And with X’s creator revenue share program rewarding users whose content drives views and engagement, creators have a clear financial incentive to share incendiary posts, which could be part of the reason X is seeing a big influx of misinformation.
Clearly, X agrees this is a problem, which is why it’s changing the rules of the initiative, though the key focus here is AI-generated content, not misinformation in general.
Will that help to address the spread of false claims and fake footage from the battlefield? It’s hard to say, as monetization isn’t the only motivation for posting false claims; state-based actors also use X and other social platforms to sway perspectives.
But the announcement does show that X has recognized this as a problem and is working to address the spread of false reports about a major news event.
In contrast, it’s worth noting that X hasn’t enacted the same level of enforcement for other kinds of AI-generated fakes, such as images of people who’ve been undressed by its own Grok app.
But even so, it’s a positive sign that X is taking action to combat at least one form of AI-generated misinformation.