Social platform X to penalise creators for unlabeled AI-generated war videos

SAN FRANCISCO/NEW YORK: Social media platform X has announced new measures to crack down on unlabeled artificial intelligence-generated videos depicting armed conflict, saying it will suspend creators from its revenue-sharing programme if they fail to properly disclose AI-generated content.

In a post on its platform, X said it will begin enforcing stricter content labelling requirements to help users distinguish between authentic footage and AI-generated media, especially war-related videos that could mislead audiences or contribute to misinformation.

Under the new policy, content creators who share AI-generated videos of armed conflict without clear disclosure will face consequences including temporary suspension from the platform’s monetisation and revenue-sharing programmes. X said it is updating its enforcement tools to identify and flag such content automatically, with the goal of improving transparency and reducing the spread of visual misinformation.

The move comes amid growing global concern about deepfakes and generative AI content being used to manipulate public perception, particularly in relation to active conflicts in the Middle East and elsewhere. X’s announcement follows similar warnings from other tech platforms urging creators to clearly label AI-generated media to maintain user trust and platform integrity.

According to X, the updated policy applies specifically to AI-generated videos that depict armed conflict, war scenes or military action. Creators must include an explicit disclosure tag if content is wholly or partially generated by artificial intelligence. Failure to comply can lead to removal from revenue programmes and additional platform penalties.

The decision has drawn mixed reactions from users and tech commentators. Supporters say the step is necessary to curb the spread of deceptive media that could inflame tensions or misinform the public about real-world events. Critics argue that enforcement will be challenging and that difficulty distinguishing AI-generated from genuine user-generated content may lead to over-blocking or confusion among creators.

X’s parent company did not immediately respond to requests for further comment.

Why this matters

AI-generated media has become increasingly sophisticated, with tools capable of creating realistic video content that is difficult to distinguish from authentic footage. In conflict zones — where real images are already hard to verify — unlabeled AI content can mislead audiences, fuel propaganda, or distort public understanding of unfolding events.

By linking content disclosure to monetisation eligibility, X aims to incentivise responsible posting behaviour among creators. The revenue-sharing programme is a key source of earnings for many content producers on the platform, and suspension from it could significantly impact their income.

Broader context

The announcement follows a series of global debates over the role of AI in media and information ecosystems. Policymakers in several countries have raised alarm over the potential misuse of AI to generate fake news, deepfakes and manipulated video content that could influence elections, public opinion and international relations.

X’s latest policy adjustment reflects a broader industry trend toward requiring clear AI content labelling — a move also echoed in guidelines from competitor platforms and regulatory proposals in parts of Europe and North America.
