X Cracks Down on AI-Generated Conflict Videos
Viqus Verdict: 5
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
Moderate media attention, driven by visible action from a major platform. The policy itself, however, is a largely reactive measure with limited long-term impact on the broader AI misinformation landscape: the core issue, easily created misleading content, remains largely unaddressed.
Article Summary
X (formerly Twitter) has announced a new policy targeting creators who use AI to generate and post videos of armed conflict without clearly disclosing that the content is artificial. Head of Product Nikita Bier stated that the move is crucial to ensuring access to authentic information during times of war, noting how easily AI can produce misleading content. The policy mandates a 90-day suspension from the Creator Revenue Sharing Program for violators, with permanent suspension for continued misuse. X will use AI detection tools and its Community Notes system to identify offending content. The action primarily addresses concerns about the proliferation of misinformation fueled by generative AI, particularly in sensitive contexts like war coverage. While a step toward responsible AI usage, the policy's limited scope, which excludes misleading AI content outside of war coverage and political misinformation, suggests it is a reactive rather than proactive response.
Key Points
- X is suspending creators from its Creator Revenue Sharing Program for 90 days if they post AI-generated videos of armed conflict without disclosure.
- The policy aims to combat the spread of misinformation created by AI, particularly during times of war.
- X will use AI detection tools and its Community Notes system to identify violating content.

