April 22, 2024 - by Pamela Langham

Deepfakes Continue to Rattle

“Deepfakes and the misuse of synthetic content pose a clear, present, and evolving threat to the public across national security, law enforcement, financial, and societal domains.”
U.S. Department of Homeland Security

Deepfakes and voice cloning use a form of artificial intelligence (AI) to create convincing yet fraudulent audiovisual content, such as videos, pictures, or sounds, depicting the likeness of individuals or events that never happened. With the spread of accessible AI tools, anyone can easily create a deepfake that alters the face or voice of an unfortunate target. Deepfakes have been wreaking havoc on unsuspecting victims, who sometimes have little recourse.


Recently, the Baltimore County Public Schools system investigated the principal at Pikesville High School over an audio recording in which he appeared to make racist and antisemitic comments about students and staff. The recording turned out to be AI-generated. What a nightmare for the principal, the students, the staff, the school system, and the public. Numerous educational institutions nationwide are also confronting the unauthorized distribution of deepfake nude images that violate the privacy of students within their school communities.


Deepfakes have the potential to erode the evidentiary foundation our courts rest upon, posing challenges for attorneys and the court system alike. As discussed previously in “Deepfakes and Voice Cloning: The Upcoming Evidentiary Crisis,” MSBA Blog (March 21, 2024), attorneys may want to practice with a heightened awareness that images, videos, or recordings may have been manipulated by AI. Authenticating evidence will require more vigilance and scrutiny to preserve the integrity of the court system.


Fortunately, deepfake detector tools have been developed. Among the better-known options are tools from Sentinel, Intel, WeVerify, and Microsoft, along with techniques that flag phoneme-viseme mismatches, where a speaker’s mouth movements do not match the sounds being spoken. Researchers are also developing additional ways to detect deepfakes. One method uses AI to spot color or structural abnormalities. Another embeds digital watermarks, in the form of pixel patterns or audio frequencies, into a video or image so that its authenticity can later be verified. Encrypted metadata can likewise be attached to media to establish provenance. A Maryland professor is developing a cryptographic QR code-based system that can verify whether content has been edited from its original form. The underlying principle is that a QR code, tied to the speaker’s voice, ensures the integrity and verifiability of the recording, providing a safeguard against deepfake technology and other forms of audio manipulation. These deepfake detector tools are extremely useful, but more laws are needed to deter harmful deepfakes.
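For readers curious how that kind of cryptographic verification works in principle, the short sketch below is a minimal illustration only, not the professor’s actual system: it computes a keyed digest of a recording at capture time (a value that could be encoded into a QR code) and later recomputes it to confirm the file has not been altered. The file contents and signing key shown are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key held by the recording device or a trusted service.
SIGNING_KEY = b"device-secret-key"

def fingerprint(audio_bytes: bytes) -> str:
    """Return a keyed SHA-256 digest of the recording.

    In a real system, this value (or a digital signature over it)
    could be encoded into a QR code at the time of capture.
    """
    return hmac.new(SIGNING_KEY, audio_bytes, hashlib.sha256).hexdigest()

def is_unaltered(audio_bytes: bytes, recorded_fingerprint: str) -> bool:
    """Verify that the recording still matches the original fingerprint."""
    return hmac.compare_digest(fingerprint(audio_bytes), recorded_fingerprint)

# Usage sketch: fingerprint at capture, verify before relying on the file.
original = b"...raw audio bytes captured at the source..."
tag = fingerprint(original)

tampered = original + b" spliced-in content"
print(is_unaltered(original, tag))   # True  - the file matches its fingerprint
print(is_unaltered(tampered, tag))   # False - any edit breaks verification
```

The point of the sketch is simply that any change to the underlying file, however small, causes verification to fail, which is what makes this kind of scheme useful against after-the-fact manipulation.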


Tennessee recently enacted the Ensuring Likeness Voice and Image Security (ELVIS) Act, effective July 1, 2024. The ELVIS Act represents a significant milestone in the fight against AI-generated voice cloning. The new law adds the term “voice” to the Tennessee Personal Rights Protection Act of 1984, making the unauthorized commercial use of a voice that is clearly identifiable to an individual a misdemeanor offense. It also establishes a private right of action for damages in such cases. The Act provides for treble damages where a defendant knowingly uses an unauthorized voice replica. Finally, and perhaps most importantly, it targets AI platforms by allowing legal action against entities that provide the means to create unauthorized voice recordings. To read more about the ELVIS Act, please see Matthew D. Kohel and Francelina M. Perdoma Klukoskey's article, The ELVIS Act: Tennessee Law Addresses AI's Impact on the Music Industry, MSBA Blog (April 22, 2024).


Other states have passed laws targeting deepfakes. Some have specifically criminalized non-consensual deepfake pornography. Others give victims a private cause of action against anyone who creates images using their likeness without consent. Several bills have been introduced in Congress to prohibit unauthorized deepfakes, but debate on the matter is ongoing.


Numerous individuals, including politicians, celebrities, and ordinary people, have fallen victim to deepfakes that damage reputations and spread false narratives. Needless to say, unauthorized deepfakes are already affecting legal proceedings. The danger lies in the erosion of truth and authenticity, and in the harm to individuals. It is inevitable that more laws targeting unauthorized deepfakes will be passed. Lawyers may want to familiarize themselves with recent legislative changes and upcoming laws to stay ahead of the coming litigation wave. In the interim, it would be prudent to explore the emerging technologies capable of identifying deepfakes and verifying authentic content. This knowledge could prove invaluable in navigating the changing legal environment.