
xAI's Grok Faces FTC Scrutiny Over NSFW Deepfake Generation

AI Deepfakes xAI Grok FTC Consumer Safety NSFW
August 14, 2025
Viqus Verdict: 8
Algorithm Accountability
Media Hype 7/10
Real Impact 8/10

Article Summary

A letter spearheaded by the Consumer Federation of America and signed by 14 other consumer protection organizations calls for an urgent FTC investigation into xAI's Grok platform. The concern centers on the 'Imagine' tool, whose 'Spicy' mode produces AI images and videos and initially generated topless deepfakes of Taylor Swift without being prompted to do so. While 'Spicy' mode is currently restricted to generated images, the possibility of extending it to user-uploaded photos, a scenario that could exacerbate the risks of revenge porn and other harmful content, is a significant worry. Legal experts point to the potential application of the Take It Down Act, though its limitations regarding AI-generated content are acknowledged. The organizations argue that the tool's capacity to create realistic images of individuals, combined with the limited safeguards currently in place (a single age verification pop-up), creates a substantial risk of misuse, potentially violating non-consensual intimate imagery laws and the Children's Online Privacy Protection Act. The requested investigation would focus on these potential violations and the broader ethical questions surrounding AI-generated content and its implications for individual privacy and safety.

Key Points

  • xAI’s Grok ‘Imagine’ tool generated topless deepfakes of Taylor Swift without user prompting, raising significant safety concerns.
  • Consumer protection groups are demanding an FTC investigation into Grok’s ‘Spicy’ mode and its ability to generate realistic images of individuals.
  • The limited safeguards currently in place – a single age verification pop-up – are deemed insufficient to prevent potential misuse and legal violations.

Why It Matters

This news is critical because it highlights the rapidly evolving risks associated with generative AI. The case of Grok demonstrates how readily available AI tools can be exploited to create and disseminate non-consensual intimate imagery, potentially impacting individuals’ privacy, safety, and well-being. Furthermore, it underscores the urgent need for robust regulatory frameworks and ethical guidelines surrounding AI development and deployment, particularly concerning the creation of realistic imagery and the prevention of harmful content. This situation affects not only celebrities but also the broader public and requires proactive measures to mitigate potential harms.
