Authors Sue Anthropic Over AI Training on Copyrighted Books

Generated by AI | AgentCoin World
Thursday, Jul 17, 2025 5:04 pm ET
Summary

- A US judge approved a class-action lawsuit against Anthropic, allowing authors to claim copyright infringement over unauthorized use of their books to train the Claude AI model.

- The ruling rejects Anthropic's argument that each author should sue separately, consolidating legal pressure on AI firms already facing scrutiny over their data-gathering practices.

- Core issues include whether AI training constitutes "fair use" and whether copying copyrighted content for machine learning violates intellectual property laws.

- The case reflects a global trend of creators challenging AI companies, with similar disputes involving Getty Images, music labels, and Hollywood studios over unauthorized data usage.

- AI firms argue training mimics human learning, but artists demand accountability as courts define boundaries between inspiration and plagiarism in the AI era.

A US judge has granted a group of authors the right to jointly sue the AI company Anthropic over allegations that their copyrighted books were used without permission to train an AI model. This decision allows the authors to proceed as a class action lawsuit, marking a significant development in the ongoing debate between artists and AI firms over the use of copyrighted content.

The authors, all published professionals, claim that Anthropic trained its Claude chatbot on their copyrighted books without seeking permission or providing compensation. They argue that the company's use of their work to teach AI how to sound more human, including mimicking their styles and ideas, constitutes copyright infringement.

Judge Chhabria, sitting in San Francisco, ruled that the authors shared enough common ground to make the case a class action. This means that instead of dozens of separate lawsuits, the case will proceed as a single, collective action, putting more legal pressure on AI developers who are already under scrutiny for their data-gathering practices.

The core issues at hand are whether Anthropic actually copied the authors' work and, if so, whether that use was "fair" or a violation of copyright law. Anthropic had attempted to dismiss the case by arguing that each writer should sue separately, but the judge rejected this, stating that the underlying issues were fundamentally the same and better addressed collectively.

This lawsuit is part of a broader trend of creative professionals worldwide pushing back against what they see as unauthorized and unfair use of their work by AI companies. Getty Images, for instance, is currently in a dispute with Stability AI over the use of millions of its photos without a license. In the music industry, major record labels are suing companies that create AI-generated songs, and music publishers have accused AI firms, including Anthropic, of using copyrighted lyrics to train their models. In Hollywood, major studios have accused Midjourney of borrowing too freely from their film characters.

Anthropic and other AI firms argue that they are not stealing but rather training their models. They compare the process to a person reading many books and then writing something in their own words, suggesting that the AI is learning rather than copying. However, many artists are skeptical, especially when the AI-generated output closely resembles the original source material. The distinction between inspiration and plagiarism is a contentious issue, and the class action lawsuit moving forward could lead to financial compensation for authors and force AI companies to reconsider their data-gathering practices.

This legal battle extends beyond books and bots; it is about who profits from human creativity and whether machines should be allowed to learn from art without consent. As the AI boom continues, courts will play a crucial role in defining the boundaries of AI training and copyright law. The outcome of this case could influence how AI firms approach copyrighted work in the future and ensure that authors are not marginalized in the process.
