
Anthropic Shifts to User-Generated Data for AI Model Training – Opt-Out Required

Tags: AI, Anthropic, Claude, Privacy, Data Training, Terms of Service, User Data
August 28, 2025
Viqus Verdict: 8 (Data Driven Decisions)
Media Hype: 7/10
Real Impact: 8/10

Article Summary

Anthropic, a leading AI developer, is implementing a significant shift in its approach to AI model training. Starting September 28, 2025, the company will begin training its Claude AI models on user-generated data, including fresh chat transcripts and coding sessions. The move represents a notable expansion beyond purely synthetic data and is intended to improve the responsiveness and capabilities of its models. However, users must actively opt out of this data collection, which adds complexity and raises data-privacy concerns. The data retention period also extends to five years for users who do not opt out, prompting questions about long-term data storage and usage. Importantly, the change applies to all consumer subscription tiers of Claude, including Claude Free, Pro, and Max, but excludes Anthropic's commercial tiers used by government, enterprise, and educational clients. Users manage the setting via a pop-up notification in which the default is 'On'; it can be toggled 'Off' at any time in their privacy settings. Despite assurances that sensitive data will be filtered and obfuscated, the reliance on user-generated content introduces inherent risks.

Key Points

  • Anthropic will now train its AI models on user chat transcripts and coding sessions, expanding beyond synthetic data.
  • Users must actively opt out of this data collection; otherwise their conversations and coding sessions will be used for model training.
  • The data retention period extends to five years for users who do not opt out, raising concerns about long-term data storage and privacy.

Why It Matters

This news matters for AI consumers and anyone concerned about data privacy. Anthropic's shift reflects a broader industry trend toward training on more real-world data, but it also underscores the increased responsibility developers bear to manage user data transparently. Because users must actively opt out, they need to weigh the implications of using Claude and the possibility that their data will be used in ways they may not fully understand. The move further complicates the debate around AI ethics and the balance between innovation and user autonomy.
