Screw the money — Anthropic’s $1.5B copyright settlement sucks for writers
Anthropic settled a class-action lawsuit for $1.5 billion, making approximately 500,000 writers eligible for payments of at least $3,000. The settlement covers Anthropic's illegal downloading of books from "shadow libraries" to train its AI; it does not challenge the broader legality of training AI on copyrighted material, which a federal judge has ruled falls under "fair use." The outcome amounts to a win for tech companies, because the core question of whether AI can use copyrighted works without authors' explicit permission goes largely unchallenged in this case.
QUICK TAKEAWAYS
- Anthropic agreed to a $1.5 billion settlement with authors for copyright infringement.
- The settlement compensates writers for Anthropic's piracy of books, not for AI training on copyrighted material itself.
- Approximately 500,000 writers are eligible to receive at least $3,000 each.
- A federal judge previously ruled that training AI on copyrighted material is "transformative" and falls under fair use.
- The case's legal significance concerns primarily how training data is acquired, not how AI uses it.
KEY POINTS
- Anthropic faced a class-action lawsuit (Bartz v. Anthropic) for illegally acquiring millions of books from "shadow libraries" to train its Claude AI.
- The $1.5 billion settlement is the largest in U.S. copyright law history, benefiting around 500,000 eligible writers.
- The settlement was prompted by the piracy of books, not by the act of feeding copyrighted works to AI for training purposes.
- In a separate ruling in June, Judge William Alsup sided with Anthropic, deeming AI training on copyrighted material legal under the "fair use" doctrine, describing it as "transformative."
- The "fair use" doctrine, codified in the Copyright Act of 1976, is now being applied to modern AI use cases, with courts defining AI's learning process as creating "something different" rather than replicating.
PRACTICAL INSIGHTS
- AI companies must ensure legal acquisition of training data to avoid costly copyright infringement lawsuits, even if the subsequent use for AI training is deemed fair.
- The "fair use" defense for AI training on copyrighted material appears strong in the current legal landscape, based on the precedent set by Judge Alsup's ruling.
- Writers and creative industries face ongoing challenges regarding the ethical and legal implications of AI use of their works, with monetary settlements for piracy not fully addressing concerns about AI's impact on their livelihoods.
- Copyright law, specifically the "fair use" doctrine, is being tested and reinterpreted for AI applications, signaling a potential need for legislative updates.
PRACTICAL APPLICATION
This case highlights the critical distinction between how training data is acquired and how it is used. For AI developers, it underscores the need for rigorous legal compliance when sourcing training datasets: penalties for illegal acquisition can be severe even as the "fair use" argument for transformative AI training gains ground. For content creators, it suggests that outright piracy of their works can result in compensation, but that the use of legally acquired copyrighted material for AI training may not currently be a protectable claim under existing copyright interpretations. That gap argues for continued advocacy for updated copyright protections in the age of AI.