Media Coverage
Source: PRNEWS
Press Contacts: Erik Cummins, Matt Hyams, Taina Rosa, Olivia Thomas
07.13.20
Deepfakes, or artificial intelligence-generated synthetic videos, have sat on the periphery of crisis communications since 2017, imitating celebrities and politicians. Now, communications professionals in every industry should be prepared to combat the reputational damage this harmful technology can cause.
Carolyn Toto, Intellectual Property partner at Pillsbury in Los Angeles, said deepfakes are here to stay, "and will, unfortunately, become harder to identify as AI technology gets more sophisticated."
Legal claims against deepfakes could range from intellectual property infringement and invasion of privacy to libel and harassment.
"However, the legal precedent is sparse, and the legality of deepfakes can be complicated," Toto cautioned.
The complexity of digital media ownership can throw a wrench into the litigation process for brands that want to sue those responsible for creating or distributing deepfakes related to their companies.
Depending on where the original image was obtained, said Toto, "there might be grounds for a [copyright infringement] claim, but the owner may not always be the person whose reputation is being harmed."
For instance, she notes, if a deepfake uses a photograph of a celebrity, the copyright owner might be the photographer, not the celebrity.
Similarly, if a celebrity's image is synced into a video that someone else originally created, Toto said the copyright claim would belong to the owner of the original video content rather than to the star featured.