
AI Deepfakes

Deleting Deception, Manipulation, and Misinformation

Overview

Deepfakes are AI-generated content including audio, video, images, and text. They can be misused in various ways, such as altering what someone says in a recording, changing their appearance in a picture, or creating fake explicit images. The growth of generative AI has outpaced efforts to regulate and control harmful deepfakes. In today’s world, deepfakes can spread false information and be used to harm people, making it harder to protect our images and trust what we see and hear.

“I didn’t think about it at the time that it wasn’t his real voice. That’s how convincing it was.”

The New Yorker, “The Terrifying A.I. Scam That Uses Your Loved One’s Voice”

AI Deepfakes in the World

Political Misinformation and Disinformation

Deepfakes can be used to make politicians appear to say things they never said and to create fake news reports. Used in these ways, AI can spread false information and propaganda.

  • During the 2024 U.S. election primaries, an AI-generated robocall using President Biden’s voice spread false information to discourage voting, marking an early use of AI in election disinformation. 
  • In India’s 2024 elections, political parties spent $50 million creating AI-generated campaign materials for political candidates. 
  • Foreign governments have used deepfake technology to create fake U.S. news broadcasts to stir division in America.

Fake Digital Twins

AI technologies can be used to create fake voices and videos of people, making them appear to say anything, even in languages they do not speak.

  • 🏆After criminals used a voice deepfake to impersonate her daughter, Jennifer DeStefano testified before Congress about the need for stronger laws to protect people from these scams.
  • In 2024, OpenAI came under fire for creating an AI assistant voice that sounded strikingly similar to actress Scarlett Johansson, who voiced an AI assistant in a popular film.
  • Celebrities like Tom Hanks have warned their fans about deepfakes of their likenesses being used, without their knowledge or permission, in ads on popular platforms like YouTube.

Explicit Deepfakes

Explicit deepfakes, or “deepnudes,” are made by placing someone’s face on a nude body. This is a particularly harmful form of online abuse, and creating these images of minors constitutes illegal child sexual abuse material.

  • 🏆Two U.S. high schoolers, Francesca and Elliston, testified before Congress about their experiences with explicit deepfakes, urging lawmakers to hold creators responsible.
  • 🏆In 2024, eight U.S. states passed laws against non-consensual explicit deepfakes, making their creation illegal and allowing victims to sue. 
  • In 2024, an explicit deepfake of international celebrity Taylor Swift went viral, racking up 27 million views before the poster was banned.

Identity Stealing for Fraud

Deepfakes have been used to trick facial recognition systems, allowing criminals to steal people’s identities and commit fraud.

  • In 2024, a woman named Robin was tricked into giving money to criminals who used a deepfake of her mother-in-law’s voice to threaten her.

Fake Romance Scams

Scammers can use deepfakes to deceive people into online romantic relationships. These scams often target elderly people in order to take their money.

  • Kate, a 69-year-old widow, lost $39,000 to an online romance scam. The FBI reported that Americans lost over $650 million to similar scams, which use deepfaked images of people and often target older adults.


Amplify Your Voice. Support the Movement.

Report Harm

You believe you have been harmed by AI

If you believe you’ve been harmed by Artificial Intelligence, please fill out our harms report form. We will get back to you within 48 hours on weekdays and 72 hours over the weekend.

You are seeking advice

If you are seeking legal advice or representation, consider reaching out to an ACLU office in your state.


Resources

Unmasking AI

Unmasking AI: My Mission to Protect What Is Human in a World of Machines (2023) by Dr. Joy Buolamwini details AI harms in emerging technologies. The book provides examples of deepfake harms (See Chapter 9, Pages 106-112).

The DEFIANCE Act

The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act allows victims of deepfake abuse to sue the creators of nonconsensual deepfakes. Passed by the U.S. Senate in 2024, it would become the first federal law protecting against harmful deepfakes if enacted.

No Fakes Act

The No Fakes Act, introduced to the U.S. Senate in Fall 2023, sets new standards for image rights, protecting people from having deepfake digital twins made of them without consent.

NSA Cybersecurity Information Sheet

The National Security Agency, Federal Bureau of Investigation, and Cybersecurity and Infrastructure Security Agency released a Cybersecurity Information Sheet providing information about synthetic media threats and how deepfake technology can be used for malicious purposes.

Learn to Detect Deepfakes Projects

Detect Fakes, from the MIT Media Lab and Northwestern University researchers, and Media Literacy in the Age of Deepfakes, from the MIT Center for Advanced Virtuality, are two projects that help people better understand and identify AI-created content.

Ban Deepfakes Petition

ControlAI’s Ban Deepfakes Campaign calls for making deepfakes illegal and holding creators accountable. Individuals can sign an open letter to urge lawmakers to take steps to protect people from deepfake harms.

Another Body

The film Another Body tells the story of Taylor Klein (a pseudonym), a college student who discovers that a classmate created an explicit deepfake using her image. Taylor’s story led to the creation of the #MyImageMyChoice movement, which raises awareness about explicit deepfakes and supports victims of online image abuse.

Similar Harms

AI and Employment
