Anthropic Shifts to User Data Training, Raises Privacy Concerns
Viqus Verdict: 8
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
While user awareness of AI data usage is growing, this shift toward directly incorporating user conversations into training represents a substantial change, increasing the potential for bias and demanding greater user control. The real impact is relatively high despite the story's existing media attention.
Article Summary
Anthropic is making a significant change to how it trains its AI models, moving from relying primarily on publicly available data to incorporating user-generated content, specifically new chat transcripts and coding sessions. The change takes effect September 28, 2025, and applies to all consumer tiers of Claude, including the free version; commercial tiers are excluded. Users must actively opt out, as the default setting allows Anthropic to use their data for model improvement and training. For users who do not opt out, data retention is extended to five years, raising significant privacy concerns. Users can set their preference during signup or later through privacy settings, but data already used for training remains part of the system's knowledge base. Anthropic asserts that it employs data filtering and obfuscation techniques to protect user privacy and does not sell user data to third parties.

Key Points
- Anthropic is transitioning to training its AI models on user chat transcripts and coding sessions.
- Users must actively opt out if they don't want their data used for training; the default setting is on.
- Data retention will extend to five years for users who do not opt out, a significant change in privacy control.

