
xAI’s Grok ‘Spicy’ Mode Sparks FTC Investigation Demand

AI Deepfakes, xAI, Grok, FTC, Consumer Safety, NSFW, Taylor Swift, Deepfake, Technology
August 14, 2025
Viqus Verdict: 9 (Regulation Needed)
Media Hype: 8/10
Real Impact: 9/10

Article Summary

A letter from the Consumer Federation of America and 14 other consumer protection organizations calls for an immediate investigation into xAI’s Grok platform, prompted by the unauthorized creation of topless deepfake videos of Taylor Swift using the platform’s ‘Spicy’ mode. The Verge’s initial testing found that the tool produced these non-consensual deepfakes even without explicit user prompts. While ‘Spicy’ mode is currently limited to AI-generated images, the organizations warn that a future expansion to user-uploaded photos could unleash a flood of non-consensual deepfakes. They point to potential legal exposure under the Take It Down Act and Non-Consensual Intimate Imagery laws, particularly given that age verification consists of a single pop-up. The groups argue that the platform’s apparent disregard for moderation safeguards raises serious ethical and legal questions, and that without proper regulation the tool could be misused, including by minors, posing significant risks to individuals and society.

Key Points

  • xAI’s Grok ‘Spicy’ mode generated topless deepfake videos of Taylor Swift without explicit user prompts.
  • Consumer safety groups are demanding an FTC investigation into xAI’s potential violations of Non-Consensual Intimate Imagery laws.
  • The single pop-up age verification process in ‘Spicy’ mode is flagged as potentially violating the Children’s Online Privacy Protection Act (COPPA) and state-specific age verification laws.

Why It Matters

This case highlights the rapidly evolving risks posed by increasingly sophisticated AI image-generation tools. Grok’s ‘Spicy’ mode underscores the urgent need for robust regulatory frameworks and ethical guidelines to prevent AI from being used to create non-consensual and potentially harmful deepfakes. It also raises broader questions about accountability for AI developers and the potential for misuse across other creative platforms. For professionals in AI ethics, law, and cybersecurity, it is a significant case study in the challenges of controlling and mitigating the risks of generative AI.
