
CED Newsletters & Policy Alerts

Timely Public Policy insights for what's ahead

Federal Judge Strikes Down California Deepfake Law

August 07, 2025

Action: On August 5, a Federal judge struck down a California deepfake law aimed at restricting AI-generated deepfake content during elections. California’s AB 2655, officially titled the Defending Democracy from Deepfake Deception Act of 2024, requires large online platforms to block the posting of materially deceptive election-related content in California during specified periods and to label certain additional content as inauthentic, fake, or false during those periods. The bill also requires platforms to develop procedures for state residents to report content that has not been blocked or labeled in compliance with the Act, and it authorizes affected parties to seek injunctive relief against a platform for noncompliance.

Elon Musk’s platform X sued California last year to challenge the Act, alleging that it violates First Amendment free speech protections and imposes unnecessary burdens on online platforms.

Trusted Insights for What's Ahead®

  • Governor Gavin Newsom signed AB 2655 into law in September 2024, following through on a promise to act after Elon Musk shared a deepfake video of then-Vice President Kamala Harris describing herself as the “ultimate diversity hire” ahead of the election.
  • In striking down the law, Judge John Mendez of the Eastern District of California cited Federal preemption of the state law as applied to online platforms, holding that AB 2655 conflicts with Section 230 of the Communications Decency Act, a 1996 law that shields platforms from civil liability for content posted by their users. Mendez did not rule on the plaintiffs’ free speech arguments, saying it was not necessary to reach them because the law could be struck down on Section 230 grounds.
  • Judge Mendez struck down another election deepfake law in October 2024. AB 2839, officially titled “Elections: deceptive media in advertisements,” would have allowed any person to sue for damages over election deepfakes. Mendez ruled that the Act violated the First Amendment, writing that it serves as “a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”
  • Twenty-six states have passed laws regulating the use of political deepfakes, typically taking one of two approaches: prohibition or disclosure. Minnesota and Texas prohibit the publication of political deepfakes during a specified period before an election, while the other 24 states require such media to carry disclosures.
  • The Administration has also taken action to regulate harmful AI-generated online content. In May, President Trump signed legislation (S.146, the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, or TAKE IT DOWN Act) that makes it a Federal crime to post non-consensual intimate images (NCII) online, whether authentic or AI-generated.
