ETHICS & SOCIETY

Anthropic Shifts to User-Generated Training Data, Raises Privacy Concerns

AI Anthropic Claude Data Privacy Terms of Service User Data Opt-Out
August 28, 2025
Viqus Verdict: 8
Data Dependence: A Growing Risk
Media Hype 6/10
Real Impact 8/10

Article Summary

Anthropic, the AI firm behind the Claude chatbot, is making a significant change to how it trains its AI models. Starting September 28, 2025, the company will begin training its Claude models directly on user-generated data, including new chat transcripts and coding sessions, rather than relying solely on publicly available datasets. Crucially, users can opt out of this data collection, but the default setting is 'on'. The change also extends Anthropic's data retention period to five years, allowing for more comprehensive model training. This heightens privacy concerns, particularly around the potential misuse or analysis of sensitive user conversations. The update applies to all consumer subscription tiers (Claude Free, Pro, and Max) and also covers Claude Code usage via Amazon Bedrock and Google Cloud's Vertex AI; it does *not* affect Anthropic's commercial usage tiers. Users can adjust their preferences via a pop-up notification, but changes apply only to future data, not past sessions. Anthropic emphasizes data filtering and obfuscation techniques to protect user privacy, but the fundamental shift toward user-generated training data remains a key development.

Key Points

  • Anthropic will begin training its Claude AI models on user chat transcripts and coding sessions by default.
  • Users can opt out of this data collection, but the default setting is 'on' and data will be retained for five years.
  • This shift raises significant privacy concerns given the potential for sensitive user data to be used in model training.

Why It Matters

This news matters because it reflects a growing trend in AI development: relying on user data for model training. While this offers the potential for more powerful and personalized AI, it also amplifies existing concerns around data privacy, algorithmic bias, and the misuse of personal conversations. For professionals in AI ethics, data science, and product development, it underscores the need for transparency, robust consent mechanisms, and continuous monitoring of AI systems to mitigate these risks. The move demands careful consideration of user rights and the responsible development of increasingly sophisticated AI technologies.
