
Anthropic Shifts Data Training Policy, Users Must Opt-Out

Tags: AI, Anthropic, Claude, Privacy, Data Training, Terms of Service, User Data
August 28, 2025
Viqus Verdict: 8 — Data Control Dilemma
Media Hype: 7/10
Real Impact: 8/10

Article Summary

Anthropic, the creator of the Claude AI assistant, is dramatically altering its data training policy. Starting September 28, 2025, conversations and coding sessions from Claude users on the Free, Pro, and Max tiers will be used to train the company’s AI models. The policy covers ‘new or resumed chats and coding sessions’ and extends data retention to up to five years. Crucially, users must actively opt out by toggling a setting that defaults to ‘On’ at signup. While the company assures users that sensitive data will be filtered and obfuscated, and that their data will not be sold to third parties, the shift represents a significant reduction in user control over their data. The updates do not apply to Anthropic’s commercial tiers. The change raises concerns about potential biases in the training data and about the extent to which user conversations are used without explicit consent. Notably, users cannot retroactively opt out of data that has already been used for training.

Key Points

  • Anthropic will begin training its AI models on user chat transcripts and coding sessions unless users opt out.
  • Data covered by the policy is subject to a retention period of up to five years.
  • The change applies to all consumer Claude tiers (Free, Pro, and Max) but not to Anthropic’s commercial tiers.

Why It Matters

This news is critical for anyone who uses, or is considering using, Claude, as it directly affects their privacy and control over their data. The shift highlights the growing reliance of AI models on vast datasets of user conversations and raises important questions about transparency, bias, and user consent. It has broad implications for the future of AI development and deployment, demanding greater scrutiny of and discussion around data usage practices within the industry.
