MyTakedown

How to remove a TikTok deepfake of your face on porn

Someone used AI to put your face on pornographic content and posted it on TikTok

5 min read · Updated Mar 2026

Someone took your face — probably from your social media — and used AI to put it on pornographic content. Maybe a friend told you. Maybe you found it yourself. Either way, you're feeling violated, furious, and scared. That reaction is completely valid.

AI-generated deepfake pornography is one of the fastest-growing forms of image-based abuse, and the law is finally catching up.1 You have real options for getting this content removed and holding the creator accountable. Here's what to do.

🚨
This is a crime in a growing number of states

At least 10 states have laws specifically criminalizing deepfake pornography, and the federal TAKE IT DOWN Act now requires platforms to remove non-consensual intimate imagery, including AI-generated content, within 48 hours of a report. Even without a deepfake-specific statute, existing revenge porn laws, harassment laws, and DMCA protections cover most situations.

Why Deepfake Porn Is Exploding

- 550% increase in deepfake porn since 2019
- 98% of deepfake videos are pornographic
- 99% of deepfake porn targets women

The tools to create deepfakes used to require serious technical skill. Now, free apps and websites let anyone generate convincing fakes in minutes using a handful of photos scraped from Instagram or TikTok.2 The barrier to entry is essentially zero, which is why this problem is growing exponentially.

Immediate Steps to Take

Your action plan
1
Document everything before it disappears

Screenshot the content, the account that posted it, the URL, the platform, the upload date, and any comments. Use screen recording if it's a video. Do NOT share the content — keep it as evidence only.

2
Report to TikTok immediately

TikTok has a dedicated reporting category for synthetic/manipulated media. Go to the video, tap the share arrow, tap Report, select "Fake/Misleading" or "Nudity/Sexual activity." Include that it's AI-generated in the description.

3
File a DMCA takedown

If the deepfake was built from photos you took yourself (selfies, your own posts), you own the copyright in those source images, and the fake may qualify as an infringing derivative work. A DMCA takedown notice obligates the platform to remove the content expeditiously, which in practice usually means 24-72 hours.

4
Report to the FBI's IC3

File at ic3.gov. AI-generated intimate imagery is an active enforcement priority. Include all documentation — URLs, screenshots, any identifying info about the creator.

5
Use StopNCII.org

Create a hash of the deepfake content so it gets automatically flagged if uploaded to participating platforms (Meta, TikTok, Reddit, Pornhub, and others).
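Steps 1 and 5 rest on the same idea: a hash is a short fingerprint of a file that lets you prove what you had, and when, without sharing the content itself. (StopNCII computes perceptual hashes on your device so near-identical re-uploads still match; the sketch below uses plain SHA-256, which matches exact copies only, to build a timestamped evidence manifest. The `evidence/` folder and manifest filename are hypothetical.)

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large screen recordings don't fill memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(evidence_dir: Path) -> list[dict]:
    """Fingerprint every file in an evidence folder with a UTC timestamp,
    so you can later show the files haven't been altered since capture."""
    manifest = []
    for path in sorted(evidence_dir.iterdir()):
        if path.is_file():
            manifest.append({
                "file": path.name,
                "sha256": sha256_of_file(path),
                "recorded_at": datetime.now(timezone.utc).isoformat(),
            })
    return manifest

if __name__ == "__main__":
    # "evidence/" is a hypothetical folder holding your screenshots
    # and screen recordings; skip quietly if it doesn't exist.
    evidence_dir = Path("evidence")
    if evidence_dir.is_dir():
        manifest = build_manifest(evidence_dir)
        Path("evidence_manifest.json").write_text(json.dumps(manifest, indent=2))
```

Keep the manifest alongside the originals; the hashes are safe to include in police reports and IC3 filings because they reveal nothing about the content itself.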

💡
TikTok is actually faster than most platforms

TikTok's automated systems can detect and remove synthetic media relatively quickly compared to other platforms. In-app reports for this category are typically reviewed within 24-48 hours. If the standard report fails, escalate through TikTok's legal request form.

We handle deepfake removal across all major platforms. DMCA-backed takedowns with documented results.
Get It Removed

The Legal Landscape

The legal landscape for deepfake victims is evolving fast. Here's where things stand:

Federal level: The TAKE IT DOWN Act passed the Senate in 2024 and was signed into law in May 2025; it requires platforms to remove NCII (including AI-generated content) within 48 hours of a report.3 The proposed DEFIANCE Act would add a federal civil cause of action for victims of non-consensual deepfake pornography.

State level: States including California, Texas, Virginia, New York, and Minnesota have enacted deepfake-specific criminal or civil statutes. Many existing revenge porn laws also cover synthetic content.

Platform policies: TikTok, Instagram, Facebook, and YouTube all explicitly ban synthetic intimate imagery. Platform violation is often the fastest removal path.

[Image: US map of states with deepfake pornography laws, color-coded to distinguish criminal vs. civil statutes]

If You Know Who Created It

In many cases, the creator is someone you know — an ex, a classmate, a coworker. If you have any idea who made the deepfake:

1
Do not confront them directly

Confrontation tips them off and gives them time to delete evidence. Document what you know and involve law enforcement first.

2
File a police report

Provide all evidence including the content, your original photos that were used as source material, and any communications or social connections suggesting the creator's identity.

3
Consult a civil attorney

You may have grounds for a civil lawsuit under harassment, defamation, intentional infliction of emotional distress, or state-specific deepfake laws. Many attorneys in this space offer free consultations.

Protecting Yourself From Future Deepfakes

You can't fully prevent deepfakes — anyone with public photos is a potential target. But you can make yourself a harder target and set up early detection:

1
Audit your public photos

Deepfakes need clear, front-facing photos to work well. Consider limiting the number of high-resolution face photos publicly available on your social media.

2
Set up monitoring

Reverse image search alerts, Google Alerts for your name, and professional monitoring services can catch deepfakes early — before they spread to additional platforms.

3
Register with StopNCII.org preemptively

You don't have to wait for content to appear. Creating hashes of your photos proactively means participating platforms will block deepfakes if they surface.
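The monitoring step can be partly automated. Google Alerts can deliver matches for your name as a feed (choose "Deliver to: RSS feed" in the alert settings), and a small script can poll it and surface only new hits. A minimal sketch, assuming the feed is standard Atom markup; the feed URL is a placeholder you would copy from your own alert:

```python
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def parse_alert_feed(xml_text: str) -> list[dict]:
    """Extract entry titles and links from an Atom feed."""
    root = ET.fromstring(xml_text)
    entries = []
    for entry in root.findall(f"{ATOM}entry"):
        link = entry.find(f"{ATOM}link")
        entries.append({
            "title": entry.findtext(f"{ATOM}title", default=""),
            "url": link.get("href", "") if link is not None else "",
        })
    return entries

def check_feed(feed_url: str, seen: set[str]) -> list[dict]:
    """Fetch the feed and return only entries not seen before."""
    with urllib.request.urlopen(feed_url) as resp:
        xml_text = resp.read().decode("utf-8")
    new = [e for e in parse_alert_feed(xml_text) if e["url"] not in seen]
    seen.update(e["url"] for e in new)
    return new

if __name__ == "__main__":
    # Placeholder: paste the RSS URL from your own alert at google.com/alerts.
    # seen = set()
    # for hit in check_feed("https://www.google.com/alerts/feeds/...", seen):
    #     print(hit["title"], hit["url"])
    pass
```

Run it on a schedule (cron, Task Scheduler) and review new hits; anything suspicious goes straight into your evidence folder and platform reports.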

Before: Deepfake video circulating on TikTok. No reports filed. Content spreading to other platforms. No legal record.

After: DMCA takedowns filed. Platform reports escalated. FBI complaint on record. StopNCII hash blocking re-uploads.


Free Resource
Deepfake Victim Response Guide
Complete checklist: platform reporting links, DMCA template, law enforcement contacts, and evidence documentation guide for AI-generated intimate imagery.
Download Free Guide

Sources & Citations

  1. Home Security Heroes 2023 study found a 550% increase in deepfake videos online since 2019, with 98% being pornographic. Home Security Heroes
  2. Sensity AI (formerly DeepTrace) research on the proliferation of deepfake creation tools and non-consensual pornography. Sensity AI
  3. TAKE IT DOWN Act, which passed the U.S. Senate in 2024 (S.4569) and was signed into law in May 2025, requiring platforms to remove non-consensual intimate images, including AI-generated content, within 48 hours. U.S. Congress

Prevent This From Happening Again

Ongoing monitoring and protection

Still need help?

Talk to Our Team →