
Sora 2 Unveiled: Enhanced Safety and User Controls

Tags: Video Generation, AI Safety, Content Moderation, User Control, Consent, Generative AI, Sora
March 23, 2026
Source: OpenAI News
Viqus Verdict: 7/10 — Controlled Evolution
Media Hype: 8/10
Real Impact: 7/10

Article Summary

OpenAI has released Sora 2, building on the original Sora model with a significantly increased emphasis on safety and user control. The core of the update is improved provenance tracking: every generated video now carries both visible and invisible signals indicating its origin. This system, mirroring the approach used for ChatGPT image generation, relies on C2PA metadata and internal reverse-image and audio search tools for robust tracing.

Notably, the ability to generate videos from photos of people, previously a key differentiator, is now subject to stricter controls, requiring explicit consent and enforcing even more stringent guardrails than the 'Sora Characters' feature. The platform also includes dedicated safeguards for teens, including a filtered feed and limits on continuous scrolling, alongside restrictions on adult content. OpenAI has additionally implemented layered defenses against harmful content generation, scanning prompts and outputs across multiple frames and audio transcripts.

Crucially, the company is actively employing red-teaming strategies to identify and mitigate novel risks, and has committed to continuously updating its policies and systems. User control remains paramount, with options to report abuse, block accounts, and manage sharing settings. The release underscores OpenAI's commitment to responsible AI development while broadening the platform's functionality.
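The provenance mechanism described above rests on C2PA metadata embedded in each generated file. As a rough illustration only (not OpenAI's actual pipeline), the C2PA specification carries manifests for MP4 video in top-level `uuid` boxes of the ISO BMFF container, so a minimal stdlib parser can at least detect whether such a box is present:

```python
import struct

def iter_boxes(data: bytes):
    """Yield (box_type, payload) for each top-level ISO BMFF box.

    Boxes start with a 4-byte big-endian size followed by a 4-byte type.
    Extended sizes (0 and 1) are not handled in this sketch.
    """
    offset = 0
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii", "replace")
        if size < 8:
            break
        yield box_type, data[offset + 8:offset + size]
        offset += size

def has_uuid_box(data: bytes) -> bool:
    """C2PA manifests in MP4 files are typically carried in 'uuid' boxes."""
    return any(box_type == "uuid" for box_type, _ in iter_boxes(data))
```

Presence of a `uuid` box is only a weak hint; real verification requires matching the C2PA-specific UUID and cryptographically validating the manifest, which dedicated C2PA tooling handles.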

Key Points

  • Sora 2 introduces enhanced provenance tracking through visible and invisible signals for every generated video.
  • Stricter consent requirements and guardrails are now in place for generating videos from photos of people, addressing previous concerns.
  • Dedicated safeguards for teen users include a filtered feed, limits on continuous scrolling, and restrictions on adult content.

Why It Matters

While the foundational Sora technology remains consistent, this release marks a critical step towards broader adoption and responsible use. The heightened focus on safety—particularly around person likeness generation—is strategically important. Addressing these concerns is essential for OpenAI to gain regulatory approval and ultimately commercialize Sora for wider applications. The company’s proactive approach to ‘red teaming’ and continuous policy updates demonstrates a genuine commitment to mitigating potential harms associated with increasingly realistic generative models. This is more than just incremental improvement; it’s a necessary evolution for a technology with the potential for both enormous creative opportunities and significant societal risks.
