Deepfakes are AI-generated content including audio, video, images, and text. They can be misused in various ways, such as altering what someone says in a recording, changing their appearance in a picture, or creating fake explicit images. The growth of generative AI has outpaced efforts to regulate and control harmful deepfakes. In today’s world, deepfakes can spread false information and be used to harm people, making it harder to protect our images and trust what we see and hear.
I didn’t think about it at the time that it wasn’t his real voice. That’s how convincing it was.
The New Yorker
If you believe you’ve been harmed by Artificial Intelligence, please fill out our harms report form. We will get back to you within 48 hours on weekdays and 72 hours on weekends.
If you are seeking legal advice or representation, consider reaching out to an ACLU office in your respective state.
Unmasking AI: My Mission to Protect What Is Human in a World of Machines (2023) by Dr. Joy Buolamwini details AI harms in emerging technologies. The book provides examples of deepfake harms (See Chapter 9, Pages 106-112).
The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act allows victims of deepfake abuse to sue the creators of nonconsensual deepfakes. Passed by the Senate in 2024, it would become the first federal law protecting against harmful deepfakes if enacted.
The No Fakes Act, introduced to the U.S. Senate in Fall 2023, sets new standards for image rights, protecting people from having deepfake digital twins of them made without consent.
The National Security Agency, Federal Bureau of Investigation, and Cybersecurity and Infrastructure Security Agency released a Cybersecurity Information Sheet providing information about synthetic media threats and how deepfake technology can be used for malicious purposes.
Detect Fakes, from the MIT Media Lab and Northwestern University researchers, and Media Literacy in the Age of Deepfakes, by the MIT Center for Advanced Virtuality, are two projects that help people better understand and identify AI-created content.
ControlAI’s Ban Deepfakes Campaign calls for making deepfakes illegal and holding creators accountable. Individuals can sign an open letter to urge lawmakers to take steps to protect people from deepfake harms.
The film Another Body tells the story of Taylor Klein (pseudonym), a college student who discovers that a classmate created an explicit deepfake using her image. Taylor’s story led to the creation of the #MyImageMyChoice movement, which raises awareness about explicit deepfakes and supports victims of online image abuse.
Stay up to date with the movement towards equitable and accountable AI.