Anthropic Shifts to User-Generated Data Training, Sparks Privacy Concerns
Viqus Verdict: 8
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
While the shift itself isn’t entirely novel, the prominent, immediate push for user consent is generating significant media attention and public concern, with a potentially large impact on how users perceive and engage with AI tools.
Article Summary
Anthropic, the creator of the Claude AI assistant, is implementing a significant change to its data training practices. Starting September 28th, 2025, the company will incorporate user-generated data, including fresh chat transcripts and coding sessions, into its AI model training. The change is being rolled out through a new, prominent pop-up notification that users are prompted to accept. Critically, users can choose to opt out, but the default setting is ‘on,’ raising immediate concerns about data privacy and potential algorithmic bias. The move also extends Anthropic’s data retention policy to five years for users who allow training. This represents a departure from previous practices and highlights AI development’s increasing reliance on vast datasets of user interactions. The update applies to all Claude consumer subscription tiers but excludes Anthropic’s commercial offerings. Despite claims of data filtering and obfuscation, the core issue remains the collection and use of user data for AI training, prompting debate about transparency and control.
Key Points
- Anthropic will train its Claude AI models on user chat transcripts and coding sessions unless users opt out.
- The data retention period for users who do not opt out has been extended to five years.
- Users must make a decision by September 28th, 2025, regarding the use of their data for AI training.

