Takeaways

Federal efforts to regulate AI in political advertising have not progressed.
In lieu of federal action, at least 40 states have introduced legislation targeting the production, distribution or publication of AI-generated content in political advertising.
Broadcasters and other content creators and distributors should stay apprised of state and federal developments.

In the two months since we published our article on the use of artificial intelligence in political advertising, efforts to legislate and regulate such media at the federal level, both in Congress and at the Federal Election Commission, have largely stalled. States, however, have stepped in to fill the void in advance of the November 2024 general election. With eight months to go before that election, at least 40 states have addressed the use of AI in political advertising, with dozens of bills pending in state legislatures and laws on the books in a handful of states.

Generally, these bills place limitations on the production, distribution or publication of AI-generated content, such as manipulated photos, videos and audio (i.e., “fabricated media” or “deepfakes”), created with the intent of damaging a candidate or influencing an election. While most of the newly introduced bills follow the letter and spirit of the legislation we reviewed in January, there are some notable exceptions, including a bill that establishes a cause of action for falsely depicting a deceased person and a bill that requires the original, unmodified version of the synthetic media to be published to a website or submitted to the state board of elections for publication. Legislatures in several states are considering more than one bill addressing artificial intelligence in political advertising.

An Indiana bill introduced in January 2024, like many of the other state bills, prohibits the use of fabricated media in a political advertisement. However, unlike most pending and enacted legislation on this issue, the bill does not limit liability for the ad’s creator if the ad includes a disclosure. The bill also establishes a cause of action against the person who pays for an ad that “includes fabricated media depicting a deceased individual without the consent of the person entitled to exercise and enforce the individual's rights of publicity.”

A bill introduced in Maryland in February 2024 requires a person who publishes, distributes or disseminates synthetic media either to post the original, unmodified version of the media to the person’s website or to submit it to the Maryland State Board of Elections for publication on its website. Additionally, while the bill exempts “news media entit[ies],” such entities may nonetheless face penalties if they fail to comply with the law’s specific mandates. For entities paid to broadcast synthetic media, those mandates include making a “good faith effort to establish that the media is not synthetic media.”

Additionally, two states are taking a more full-throated approach to the regulation of AI, including the use of deepfakes and misinformation in election communications. Both Connecticut and California have convened working groups and conducted hearings to consider comprehensive regulatory frameworks addressing the use of artificial intelligence technologies. Connecticut is particularly focused on adopting a state plan based on the framework proposed by U.S. Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO); SB 2, introduced in February 2024, would implement many of the Connecticut working group’s recommendations.

In California, the state legislature and the California Privacy Protection Agency (CPPA) are pursuing separate proposals, with the CPPA weighing final rules in connection with its 2023 Notice of Proposed Rulemaking (NPRM). The legislature recently conducted a hearing on more than a dozen AI bills, including SB 896, which is based on President Biden’s October 2023 AI Executive Order and the White House’s Blueprint for an AI Bill of Rights.

Given the potential exposure to liability, broadcasters and other content creators and distributors should stay apprised of developments in state capitals and at the federal level. Please contact Pillsbury’s Communications Practice if you have any questions or would like to discuss.

These and any accompanying materials are not legal advice, are not a complete summary of the subject matter, and are subject to the terms of use found at: https://www.pillsburylaw.com/en/terms-of-use.html. We recommend that you obtain separate legal advice.