A U.S. federal judge has ruled that AI company Anthropic made “fair use” of copyrighted books while training its AI tools, including the chatbot Claude, without needing prior consent from authors.
The ruling, viewed as a win for AI developers, arrives amid heightened debate over how generative AI interacts with intellectual property laws. Industry leaders have been lobbying for flexible regulatory frameworks, even as authors and creators raise alarms about unauthorized data use.
“Like any reader aspiring to be a writer, Anthropic’s LLMs [large language models] trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,” U.S. District Judge William Alsup said.
The class-action lawsuit, brought by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, claimed that Anthropic’s use of their literary works amounted to “large-scale theft” and exploitation of creative expression. Judge Alsup nonetheless sided with Anthropic on the training question, finding the use of the books for training “exceedingly transformative” and aligned with copyright’s intended role of supporting creativity and scientific advancement.
While this aspect was ruled in Anthropic’s favor, the court did not endorse all of the company’s practices. Judge Alsup found that Anthropic’s storage of over seven million pirated books in a centralized dataset violated copyright law and did not qualify for fair use protection. The company must now face trial in December to address those specific allegations.
The case reflects a broader legal and ethical dilemma: whether AI tools are facilitating innovation or simply industrializing the repurposing of original creative work to the detriment of human authors.
Quick Take
This ruling draws a partial but powerful line in the sand for AI developers: training with transformative outputs may be legal, but building datasets on pirated materials isn’t. It highlights a legal gray zone that courts are just beginning to define, and it pressures AI companies to clean up how they acquire training data while still pushing innovation forward.