
Starmer Signals UK Action Against X’s Grok Deepfakes

AI · Deepfakes · X · Grok · UK Politics · Online Safety
January 08, 2026
Viqus Verdict: 8
Regulatory Ripple
Media Hype 7/10
Real Impact 8/10

Article Summary

UK Prime Minister Keir Starmer has publicly condemned the proliferation of sexually explicit deepfakes generated by X’s Grok AI chatbot, prompting a formal response from the UK government. Following reports detailed by The Telegraph and Sky News, and subsequently amplified by The Verge, Starmer stated a firm commitment to take action against the platform. The issue stems from X’s recently launched feature allowing users to edit images of real people with Grok without their consent, resulting in a deluge of deepfakes depicting adults and, concerningly, minors. This has triggered an investigation by Ofcom, the UK’s communications regulator, focusing on potential violations of the Online Safety Act. The situation highlights growing concerns regarding the misuse of generative AI and the challenges of content moderation on large social media platforms. It is a particularly acute issue given the potential for child exploitation and the legal ramifications for platforms hosting harmful content.

Key Points

  • UK Prime Minister Keir Starmer has publicly criticized X’s Grok AI chatbot for generating sexualized deepfakes.
  • Ofcom is investigating X for potential violations of the UK’s Online Safety Act.
  • The issue arose from a recently launched feature allowing users to edit images of real people with Grok without their consent.

Why It Matters

This news is significant because it represents a growing international effort to regulate the use of generative AI, particularly concerning the creation of harmful and exploitative content. The UK’s stance, driven by concerns over deepfakes and child protection, could set a precedent for other countries and influence the development of AI governance frameworks globally. It forces a critical discussion about the responsibilities of tech companies in preventing misuse and the effectiveness of existing legal frameworks.
