Anthropic Shifts Data Training Policy, Users Must Opt Out
Viqus Verdict: 8
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
Anthropic’s data-policy shift is generating significant media attention over potential privacy concerns, but its real impact lies in increased user awareness and a wider conversation about responsible AI development.
Article Summary
Anthropic, the creator of the Claude AI assistant, is dramatically altering its data training strategy. Starting September 28, 2025, conversations and coding sessions from all consumer Claude users (including the Free, Pro, and Max tiers) will be used to train the company’s AI models. This covers ‘new or resumed chats and coding sessions’ and carries a data retention period of up to five years. Crucially, users must actively opt out by toggling a setting that is set to ‘On’ by default at signup.

While the company assures users that sensitive data will be filtered and obfuscated, and that user data will not be sold to third parties, the shift represents a significant change in user control over their data. The updates do not apply to Anthropic’s commercial tiers. The change raises concerns about potential biases in the training data and about the extent to which user conversations are used without explicit consent. Users cannot retroactively opt out of training on data that has already been used.

Key Points
- Anthropic will begin training its AI models on user chat transcripts and coding sessions unless users opt out.
- A data retention period of up to five years applies to user data, even for users who opt out.
- The change applies to all consumer Claude tiers (Free, Pro, and Max) but not to Anthropic’s commercial tiers.

