AI Deepfake Help

Someone Made a Deepfake of Me

Take a breath. Discovering a fake AI-generated image of yourself is shocking and violating. But you have options—and this content can be removed.

What they did is illegal. You are the victim of a crime.

This Is Not Your Fault

These images are fake. They were generated by AI using a regular photo of you—probably from social media. Having photos online doesn't make you responsible for this abuse.

The person who created and shared these images committed a federal crime under the TAKE IT DOWN Act. You have every right to take action.

What To Do Right Now

Follow these steps in order; each one builds on the last

1. Don't Panic, and Don't Engage

If someone sent you the image or is threatening you, do NOT respond. Don't pay, and don't threaten back. Every response gives them more information and more leverage. Just screenshot everything and move to step 2.

2. Document Everything

Screenshot the image(s), the URL(s) where they appear, any messages or usernames involved, and note the date. This evidence is critical for platform reports and potential legal action.

3. Report to the Platform

Most platforms have a dedicated process for non-consensual intimate images (NCII). Look for Report → Intimate images or similar. State clearly that the content is AI-generated and non-consensual. Under the TAKE IT DOWN Act, platforms must remove reported NCII within 48 hours of a valid request.

4. File with Google

Even if the source site is uncooperative, Google will remove deepfake content from its search results, which prevents people from finding it by searching your name. Use Google's removal request form for involuntary fake pornography.

5. Use StopNCII.org

If you have a copy of the deepfake, you can create a 'hash' (a digital fingerprint) that participating platforms, including Meta, TikTok, and Reddit, use to automatically block the image and prevent further spread. The hash is generated on your own device; the image itself is never uploaded.
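To make "hash" concrete: a hash is a short string computed from the image's data, letting platforms recognize a file without ever seeing it. The sketch below is a simplified Python illustration using an exact cryptographic hash; StopNCII.org actually uses a perceptual hash, which also matches resized or re-encoded copies. The filename is hypothetical.

```python
# Simplified illustration of turning an image into a "digital fingerprint".
# Real systems like StopNCII.org compute a *perceptual* hash on your device
# (robust to resizing and re-encoding) and share only the hash string;
# SHA-256 below matches only byte-identical copies.
import hashlib

def fingerprint(path: str) -> str:
    """Return a short hex string derived from the file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical filename, for illustration only.
print(fingerprint("image.jpg"))
# Platforms compare this string against uploads and block any match.
```

The key point for your privacy: only this short string ever leaves your device, never the image itself.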

6. Consider Legal Action

Publishing non-consensual AI deepfakes is now a federal crime under the TAKE IT DOWN Act, and most states criminalize creating or sharing them as well. You can file a police report and/or sue for damages. If you know who created it, they face serious consequences.

Understanding AI Deepfakes

What Are These Images?

AI "deepfakes" or "nudify" tools take a regular photo and use artificial intelligence to generate a fake nude version. The technology has become disturbingly accessible—some apps work in seconds using just a single photo.

Important: These images are entirely fake. They're AI-generated, not real photos of you. Your actual body was never photographed.

This Is a Crime

Under the federal TAKE IT DOWN Act (2025), knowingly publishing AI-generated intimate imagery of someone without their consent is a federal crime, punishable by up to two years in prison (three if the victim is a minor).

Most states also have their own laws criminalizing deepfake porn, with penalties ranging from misdemeanors to felonies depending on the state and circumstances.

Who Creates These Images?

Understanding the "who" can help you decide your response

Someone You Know

An ex-partner, acquaintance, coworker, or classmate with access to your photos. If you can identify them, you have strong options for legal action and platform reporting.

Report to police, consider civil lawsuit

Random Trolls/Harassers

Someone you don't know targeting you online. These cases often involve images being posted on specific forums or sent to get a reaction.

Focus on platform removal and blocking spread

Sextortion Attempt

Someone threatening to release deepfakes unless you pay money. This is a form of extortion and should be reported to the FBI.

Don't pay, report to FBI IC3 at ic3.gov

Content Mill/Leak Site

Websites that mass-produce deepfakes of various people for clicks/profit. Less personal, but still harmful.

DMCA takedowns, Google delisting, hosting provider reports

Questions You Might Have

Can people tell it's fake?

Often, yes—if they look closely. AI-generated images frequently have telltale signs: lighting inconsistencies, blurry areas, strange body proportions, or artifacts. But most people don't analyze images that closely, which is why removal is important.

What if it's already been shared widely?

Even widely-shared content can be contained. Platforms remove NCII quickly once reported. Google delisting prevents people from finding it by searching your name. StopNCII.org can auto-block across multiple platforms. The content may not disappear entirely, but it can become very hard to find.

Should I tell friends/family before they see it?

That's a personal decision. Some victims feel better getting ahead of it—telling close contacts that fake images exist and to ignore any they might see. Others prefer to handle removal quietly. There's no wrong choice.

Can I find out who made it?

Sometimes. If they messaged you, their username and IP may be traceable. If the content was posted on a platform, law enforcement can subpoena records. If you strongly suspect someone specific, that can inform your legal strategy.

I'm underage—is this different?

Yes. AI-generated CSAM (child sexual abuse material) carries much more severe penalties for creators and platforms. Report immediately to the CyberTipline at cybertipline.org or call 1-800-843-5678. NCMEC's Take It Down tool specifically helps minors.

Will this affect my career/relationships?

The content is fake, and increasingly people understand that deepfakes exist. If needed, you can proactively explain the situation. Many employers and institutions are now aware of deepfake harassment. Your reputation is not defined by an AI-generated fabrication.

Support Resources

Cyber Civil Rights Initiative

24/7 crisis helpline for NCII victims

844-878-2274

FBI IC3

Report internet crimes including deepfakes

ic3.gov

NCMEC (Minors)

If you're under 18

1-800-843-5678

RAINN

Sexual assault support

800-656-HOPE

You Can Take Back Control

AI deepfakes can be removed. The law is on your side. We've helped many people in your exact situation get this content taken down quickly and confidentially.

100% confidential • No judgment • We're here to help