
Grok’s Image Editing Remains Open to All Users, Despite X’s Claims

Tags: AI Deepfakes, X, Grok, Social Media, Elon Musk, xAI
January 09, 2026
Viqus Verdict: 8 (Transparency Gap)
Media Hype: 7/10
Real Impact: 8/10

Article Summary

X’s Grok chatbot has drawn considerable controversy for its ability to generate sexually explicit deepfakes, including depictions of both adults and minors. Following the backlash, X publicly announced that it was limiting access to Grok’s image editing functions to paying subscribers. A recent investigation by The Verge, however, found that this claimed restriction is not actually in place: all X users, free and paid, can still access Grok’s image generation and editing tools, including the ability to create the same types of sexually explicit deepfakes that sparked the original concerns. The Verge’s testing confirmed that simply tagging @grok in a post or using the “Edit image” button on displayed images produces the same results regardless of a user’s subscription status. The discrepancy points to a disconnect between X’s public statements and the actual functionality of the AI tool, and raises questions about the company’s approach to content moderation.

Key Points

  • Grok’s image editing tools remain fully accessible to all X users, free and paid.
  • X’s public statements about restricting Grok’s image generation have proven inaccurate.
  • The chatbot’s capabilities extend to generating the same types of sexually explicit deepfakes that originally caused concern.

Why It Matters

This report is significant for several reasons. It directly undercuts X’s narrative about its response to the deepfake crisis, deepening uncertainty around the platform’s content moderation efforts. It also demonstrates a gap between a company’s public messaging and the actual behavior of its AI tools. That matters to professionals in the tech, legal, and regulatory fields, as it highlights the ongoing difficulty of controlling generative AI and the potential for miscommunication and deception. The revelation also underscores the need for more robust and transparent oversight of AI development and deployment.
