Judge Rejects AI Company’s $1.5 Billion Settlement with Book Authors

September 11, 2025

CED Newsletters & Policy Alerts

Timely Public Policy insights for what's ahead

Action: In August 2024, three authors sued AI developer Anthropic, arguing that the company had violated federal copyright law by using their books – copied from both pirated and purchased sources – to train its AI models. In July 2025, Judge William Alsup, Senior District Judge in the Northern District of California, ruled that the company’s use of copyrighted books it had purchased to train its models constitutes fair use. However, Judge Alsup rejected Anthropic’s assertion that its use of pirated books collected over the internet was also fair use and indicated that question would go to trial. The court also certified a class comprising the rights holders of books Anthropic downloaded from pirate sites.

On September 5, the company agreed to pay $1.5 billion to the class, which includes about 500,000 authors. However, Judge Alsup denied the motion to approve the settlement, saying it lacked sufficient detail about the claims process and about Anthropic’s legal liability for future claims. The court scheduled a September 25 hearing to review a revised settlement.

Trusted Insights for What's Ahead®

  • Anthropic faces significant legal risk if the case goes to trial: the copyright statute permits statutory damages of up to $150,000 per willful infringement, and the company’s library of pirated books contains more than 7 million titles. Some parties involved in the case expressed concern that the delay and potential revisions could cause the settlement to unravel.
  • If ultimately approved by the court, the settlement would mark a watershed moment in the legal debate over copyright protections for media used to train AI models, with broad implications for the AI industry: several other model developers, including OpenAI and Meta, reportedly used the same repository of pirated books as Anthropic.
  • It is unclear how the AI industry will respond to the Anthropic case. Some companies, for example, have reached agreements with copyright holders (including news organizations) to license their material. It is also unclear how other courts will rule in similar cases.

