
Advocacy Groups Demand Apple, Google Ban X’s Grok Amid Deepfake Crisis

AI Deepfakes X (formerly Twitter) Apple Google Content Moderation Non-Consensual Imagery Tech Policy
January 15, 2026
Viqus Verdict: 8
Algorithmic Accountability
Media Hype: 7/10
Real Impact: 8/10

Article Summary

A growing chorus of advocacy groups is demanding action against X’s Grok AI chatbot, arguing that Apple and Google are complicit in the proliferation of harmful deepfakes. The issue centers on accusations that Grok, integrated into X, is being used to generate and distribute non-consensual intimate images (NCII) and child sexual abuse material (CSAM). Twenty-eight organizations, including UltraViolet, the National Organization for Women, and MoveOn, have sent open letters to Apple CEO Tim Cook and Google CEO Sundar Pichai demanding the app’s immediate removal from their respective app stores. The groups contend that X’s attempt to restrict Grok’s image generation to paid subscribers is a “thin and ineffective means of stopping the undressings,” and that both companies are actively profiting from the abuse. The latest push, dubbed ‘Get Grok Gone,’ aligns with UltraViolet’s broader campaign against the creation and sharing of non-consensual intimate images. The controversy highlights growing anxiety about the misuse of generative AI and the responsibility of tech platforms to prevent harm.

Key Points

  • Apple and Google are being accused of profiting from the use of X’s Grok AI chatbot for generating non-consensual sexual deepfakes.
  • A coalition of 28 advocacy groups is demanding the immediate removal of Grok from Apple and Google’s app stores.
  • The groups argue that X’s current restrictions on Grok are insufficient and that, by continuing to distribute the app, Apple and Google are enabling the creation and spread of harmful content.

Why It Matters

This story raises critical questions about the ethical responsibilities of tech companies in the age of generative AI. The ease with which Grok can allegedly be used to create deeply harmful content underscores the urgent need for robust safeguards and proactive measures against misuse. The situation also illustrates how AI can be exploited for malicious purposes and the broader societal stakes of these technologies. For professionals in tech, law, and policy, the case presents a complex challenge spanning content moderation, algorithmic accountability, and the need for regulatory frameworks governing AI development and deployment. For platforms and app stores alike, ignoring the issue could carry serious legal and reputational consequences.
