Judge backs AI firm over use of copyrighted books

TruthLens AI Suggested Headline:

"US Judge Rules AI Training with Copyrighted Books is Transformative Use"

AI Analysis Average Score: 7.9
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards.

TruthLens AI Summary

A significant ruling by a US judge has determined that the use of copyrighted books for training artificial intelligence (AI) models does not infringe upon US copyright law. This decision arose from a lawsuit filed against AI company Anthropic by three authors, including a novelist and two non-fiction writers, who accused the firm of unlawfully utilizing their works to develop its Claude AI model and establish a profitable business. In his ruling, Judge William Alsup emphasized that Anthropic's application of the authors' texts was 'exceedingly transformative,' thus qualifying as permissible under current legislation. However, the judge also denied Anthropic's motion to dismiss the case, mandating that the firm face trial regarding its alleged use of pirated copies to assemble its library. The ruling is particularly noteworthy as it addresses a pivotal issue in the ongoing debate surrounding the legality of training Large Language Models (LLMs) with existing creative material, a topic that is currently under scrutiny in various legal contexts across the industry.

Judge Alsup's remarks highlighted the distinction between transformative use and infringement, noting that the authors did not assert that the training process resulted in the generation of infringing replicas of their works. He stated that if such claims were made, the case would take on a different dimension. This ruling comes amidst a broader landscape of legal challenges faced by the AI sector, with companies like Disney and Universal also pursuing litigation against AI tools for copyright violations. In light of these ongoing disputes, some AI firms have begun to negotiate licensing agreements with original content creators. While Judge Alsup acknowledged Anthropic's defense of 'fair use,' he underscored that the company had violated the authors' rights by maintaining pirated copies in a vast 'central library.' Anthropic expressed satisfaction with the ruling regarding transformative use but disagreed with the need for a trial over the acquisition of certain books, indicating that they would explore their legal options moving forward. The case has drawn attention to the evolving relationship between AI technology and copyright law, with implications for both creators and developers in the digital age.


Unanalyzed Article Content

A US judge has ruled that using books to train artificial intelligence (AI) software is not a violation of US copyright law. The decision came out of a lawsuit brought last year against AI firm Anthropic by three writers - a novelist and two non-fiction authors - who accused the firm of stealing their work to train its Claude AI model and build a multi-billion dollar business.

In his ruling, Judge William Alsup wrote that Anthropic's use of the authors' books was "exceedingly transformative" and therefore allowed under US law. But he rejected Anthropic's request to dismiss the case, ruling the firm would have to stand trial over its use of pirated copies to build its library of material.

Anthropic, a firm backed by Amazon and Google's parent company Alphabet, could face up to $150,000 in damages per copyrighted work. The firm holds more than seven million pirated books in a "central library", according to the judge.

The ruling is among the first to weigh in on a question that is the subject of numerous legal battles across the industry: how Large Language Models (LLMs) can legitimately learn from existing material.

"Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works, not to race ahead and replicate or supplant them — but to turn a hard corner and create something different," Judge Alsup wrote. "If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use," he said.

He noted that the authors did not claim that the training led to "infringing knockoffs", with replicas of their works being generated for users of the Claude tool. If they had, he wrote, "this would be a different case".

Similar legal battles have emerged over the AI industry's use of other media and content, from journalistic articles to music and video. This month, Disney and Universal filed a lawsuit against AI image generator Midjourney, accusing it of piracy.
The BBC is also considering legal action over the unauthorised use of its content. In response to the legal battles, some AI companies have struck deals with creators of the original materials, or their publishers, to license material for use.

Judge Alsup allowed Anthropic's "fair use" defence, paving the way for future legal judgements. However, he said Anthropic had violated the authors' rights by saving pirated copies of their books as part of a "central library of all the books in the world".

In a statement, Anthropic said it was pleased by the judge's recognition that its use of the works was transformative, but disagreed with the decision to hold a trial about how some of the books were obtained and used. The company said it remained confident in its case and was evaluating its options. A lawyer for the authors declined to comment.

The authors who brought the case are Andrea Bartz, a best-selling mystery thriller writer whose novels include We Were Never Here and The Last Ferry Out, and non-fiction writers Charles Graeber and Kirk Wallace Johnson.

Source: BBC News