Anthropic Updates Claude Policy to Give Users More Control Over Data Use
By Ainvest
Friday, August 29, 2025, 3:22 PM ET · 1 min read
Anthropic, an AI startup backed by Amazon and Google, is giving Claude users more control over their data use. Users can now choose whether their chats are fed into future model training, with a deadline to opt out set for September 28, 2025. The company is also extending data retention to five years for those who allow training, citing the importance of user participation in improving the AI's coding and reasoning capabilities.
Effective September 28, 2025, users must choose whether their chats may be fed into future model training. For those who consent, Anthropic will retain the data for up to five years, a term the company says is needed to improve the model's coding and reasoning capabilities [2].

The new policy is part of Anthropic's enterprise-first strategy, which has positioned the company as a leader in the enterprise LLM market with a 32% share in 2025 [3]. The company's revenue surged to $4 billion in 2025, driven by API partnerships and dominance in code generation and government contracts [3]. Amazon alone is projected to generate $1.28 billion from Anthropic's usage in 2025 [3].
The changes come as Anthropic faces legal challenges. A recent class-action copyright lawsuit alleging the company pirated books to train its AI models has been settled, though the terms have yet to be finalized [1]. Despite these legal hurdles, Anthropic continues to innovate while keeping its AI offerings compliant with privacy laws and regulatory standards.
Anthropic's focus on user consent and data privacy aligns with a broader vision to democratize access to AI while maintaining control over data pipelines and infrastructure. The company's strategy of monetizing data without compromising privacy may prove to be a sustainable path to dominance in the AI market.
References:
[1] https://www.breitbart.com/tech/2025/08/27/anthropic-settles-authors-ai-copyright-lawsuit-that-had-potential-for-1-trillion-in-damages/
[2] https://www.theregister.com/2025/08/28/anthropic_five_year_data_retention/
[3] https://www.ainvest.com/news/anthropic-privacy-ai-strategy-blueprint-data-monetization-market-dominance-2508/
