Anthropic Updates Claude Policy to Give Users More Control Over Data Use

Friday, Aug 29, 2025, 3:22 pm ET · 1 min read

Anthropic, an AI startup backed by Amazon and Google, is giving Claude users more control over how their data is used. Users can now choose whether their chats are used to train future models, with a deadline of September 28, 2025, to make that choice. The company is also extending data retention to five years for those who allow training, citing the importance of user participation in improving the AI's coding and reasoning capabilities.

Under the updated policy, Claude users have until September 28, 2025, to decide whether their chats may be used in future model training. For those who consent, Anthropic will retain conversation data for up to five years, with the stated aim of improving the AI's coding and reasoning capabilities [2].

The new policy is part of Anthropic's enterprise-first strategy, which has positioned the company as a leader in the enterprise LLM market with a 32% share in 2025 [3]. The company's revenue surged to $4 billion in 2025, driven by API partnerships and strength in code generation and government contracts [3]. Amazon alone is projected to generate $1.28 billion from Anthropic's usage of its services in 2025 [3].

The changes come as Anthropic faces legal challenges. A recent class-action copyright lawsuit alleging the company pirated books to train its AI models has been settled, though the terms have yet to be finalized [1]. Despite these legal hurdles, Anthropic continues to position its AI products as compliant with privacy laws and regulatory standards.

Anthropic's focus on user consent and data privacy aligns with a broader vision to democratize access to AI while maintaining control over data pipelines and infrastructure. The company's strategy of monetizing data without compromising privacy may prove to be a sustainable path to dominance in the AI market.

References:
[1] https://www.breitbart.com/tech/2025/08/27/anthropic-settles-authors-ai-copyright-lawsuit-that-had-potential-for-1-trillion-in-damages/
[2] https://www.theregister.com/2025/08/28/anthropic_five_year_data_retention/
[3] https://www.ainvest.com/news/anthropic-privacy-ai-strategy-blueprint-data-monetization-market-dominance-2508/
