ETHICS & SOCIETY

Anthropic Shifts to User Data Training, Raising Privacy Concerns

Tags: AI, Anthropic, Claude, Data Privacy, TechCrunch, AI Training, User Consent
August 28, 2025
Viqus Verdict: 8 (Data Transparency Deficit)
Media Hype: 7/10
Real Impact: 8/10

Article Summary

Anthropic is making a major change to its data policy, requiring all Claude users (Free, Pro, and Max, including Claude Code) to decide whether their conversations and coding sessions may be used to train its AI models. For users who do not opt out, data retention is extended to five years. The company frames the change as a user choice that improves model safety and performance, particularly in coding, analysis, and reasoning. Critics argue, however, that the implementation, with its prominent ‘Accept’ button and buried toggle, risks users unknowingly agreeing to data sharing. The move follows broader industry trends, with companies like OpenAI facing legal challenges and scrutiny over their data practices. Notably, Anthropic’s approach mirrors OpenAI’s current strategy and echoes the Federal Trade Commission’s warnings about ‘surreptitiously changing terms of service.’ The change underscores how difficult meaningful user consent has become in the rapidly evolving world of AI.

Key Points

  • Anthropic is requiring all Claude users to decide whether their conversations and coding sessions can be used for AI model training.
  • Users who don't opt out will have their data retained for up to five years, a significant increase over previous deletion policies.
  • The change is framed as improving model safety and performance, particularly in coding and reasoning, but it raises data privacy concerns.

Why It Matters

This news matters because it highlights the fundamental tension between AI development and user privacy. As large language models grow more sophisticated, they rely on massive datasets, often derived from user interactions. Anthropic’s shift underscores how hard it is to obtain genuine, informed consent in this context, and the implementation design, with its prominent ‘Accept’ button and buried toggle, suggests the kind of ‘dark patterns’ that can lead users to agree to data sharing without realizing it. For anyone using AI tools, the episode reinforces the need for vigilance and a careful reading of the data policies governing these services. The broader implications for the AI industry and for regulatory oversight are considerable.
