
Sora 2's Dark Side: AI-Generated Fetish Content Fuels CSAM Concerns

AI · Sora 2 · TikTok · Child Sexual Abuse Material · OpenAI · Fetish Content · Online Safety
December 22, 2025
Source: Wired AI
Viqus Verdict: 8
Reactive, Not Proactive
Media Hype 9/10
Real Impact 8/10

Article Summary

OpenAI’s Sora 2, released in September, is rapidly becoming a tool for generating deeply problematic and potentially dangerous content: photorealistic fake commercials featuring AI-generated minors in unsettling and suggestive scenarios. OpenAI has implemented safeguards, such as blocking young people’s faces from explicit deepfakes and banning CSAM outright, but creators are circumventing those restrictions with alarming ease. The proliferation of videos like the ‘Vibro Rose’ pen, a parody of recalled toys built on suggestive imagery and unsettling narratives, highlights a crucial issue: the intent behind the content, coupled with its presentation on platforms like TikTok, is driving the creation of CSAM.

The concerns extend beyond simple pornography. Videos parodying real-life tragedies, depicting AI-generated minors in suggestive activities, and exploiting popular internet memes, such as the ‘Incredible Gassy’ character, all feed the problem. Recent reports show creators leveraging Sora to produce videos that subtly cater to a predatory audience: the ‘Coach inspects overweight young boys’ video, for example, drew numerous requests to connect via Telegram, a platform frequently linked to child exploitation.

The context surrounding these videos, from the accompanying comments to their deliberate placement on platforms like TikTok, is proving harder to control than the AI-generated imagery itself. This exposes a critical gap: current AI moderation strategies lack the nuance needed to detect and prevent the purposeful creation of CSAM and fetish content involving minors. The problem is not just the AI’s output, but the clear predatory intent with which that imagery is created and shared.

Key Points

  • Sora 2’s photorealistic video generation capabilities are being exploited to create highly suggestive and potentially dangerous content featuring AI-generated minors.
  • Despite OpenAI's safeguards, the intent behind the content and its presentation on platforms like TikTok are facilitating the creation of AI-generated CSAM.
  • The danger lies not only in the imagery itself, but in the deliberate creation of content with predatory intent and its distribution on platforms easily accessed by dangerous individuals.

Why It Matters

This news is profoundly significant because it demonstrates the rapidly evolving dangers of advanced AI technology. While AI offers incredible potential, its misuse can have devastating consequences, particularly when it exploits vulnerable individuals. The fact that OpenAI’s safeguards are being circumvented highlights the urgent need for proactive measures, including more sophisticated content moderation techniques and, potentially, industry-wide standards for AI development and deployment. The situation demands attention from policymakers, tech companies, and law enforcement to address a threat that is evolving faster than existing regulatory frameworks can adapt. The vulnerability of children online is a persistent problem, and AI’s capacity to amplify that risk is a new and serious concern.
