Someone used AI to put your face on pornographic content and posted it on TikTok
Someone took your face — probably from your social media — and used AI to put it on pornographic content. Maybe a friend told you. Maybe you found it yourself. Either way, you're feeling violated, furious, and scared. That reaction is completely valid.
AI-generated deepfake pornography is one of the fastest-growing forms of image-based abuse, and the law is finally catching up [1]. You have real options for getting this content removed and holding the creator accountable. Here's what to do.
At least 10 states have laws specifically criminalizing deepfake pornography, and federal legislation (the DEFIANCE Act) is advancing through Congress. Even without a deepfake-specific law, existing revenge porn statutes, harassment laws, and DMCA copyright protections can cover most situations.
Why Deepfake Porn Is Exploding
Creating deepfakes used to require serious technical skill. Now, free apps and websites let anyone generate convincing fakes in minutes from a handful of photos scraped from Instagram or TikTok [2]. The barrier to entry is essentially zero, which is why this abuse is spreading so quickly.
Immediate Steps to Take
1. Document everything. Screenshot the content, the account that posted it, the URL, the platform, the upload date, and any comments. Use screen recording if it's a video. Do NOT share the content; keep it as evidence only.
2. Report it on TikTok. TikTok has a dedicated reporting category for synthetic and manipulated media. Go to the video, tap the share arrow, tap Report, and select "Fake/Misleading" or "Nudity/Sexual activity." Note in the description that the content is AI-generated.
3. Send a DMCA takedown notice. If the AI used photos you took (such as selfies) as source material, you likely hold copyright on those original images. A valid DMCA notice obligates the platform to remove the content expeditiously to keep its safe-harbor protection; in practice that usually means 24-72 hours.
4. Report it to the FBI. File a complaint with the Internet Crime Complaint Center at ic3.gov. AI-generated intimate imagery is an active enforcement priority. Include all documentation: URLs, screenshots, and any identifying information about the creator.
5. Submit a hash to StopNCII.org. Creating a hash (a digital fingerprint) of the deepfake content means it can be automatically flagged if it's uploaded to participating platforms (Meta, TikTok, Reddit, Pornhub, and others).
TikTok's automated systems can detect and remove synthetic media relatively quickly compared to other platforms. In-app reports for this category are typically reviewed within 24-48 hours. If the standard report fails, escalate through TikTok's legal request form.
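The hashing step above rests on a simple idea: a platform can recognize a re-upload by its fingerprint without ever storing or viewing the image itself. As an illustration of the concept only, here is a minimal Python sketch using a cryptographic hash (the file path is hypothetical; real matching services such as StopNCII compute perceptual hashes on your device, which, unlike SHA-256, still match after resizing or re-compression):

```python
import hashlib

def fingerprint(path: str, chunk_size: int = 8192) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks.

    Illustrative only: matching services use perceptual hashes,
    which survive re-encoding; a cryptographic hash changes if
    even one byte of the file changes.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't load into memory at once.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Because identical files always produce the same digest, a participating platform only needs the fingerprint, never the image, to block an exact re-upload.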
Legal Protections You Have Right Now
The legal landscape for deepfake victims is evolving fast. Here's where things stand:
Federal level: The proposed SHIELD Act and DEFIANCE Act target non-consensual deepfake pornography specifically, and the 2022 reauthorization of the Violence Against Women Act created a federal civil cause of action for non-consensual disclosure of intimate images (15 U.S.C. § 6851). The TAKE IT DOWN Act passed the Senate in 2024 and would require platforms to remove NCII (including AI-generated content) within 48 hours of a report [3].
State level: States including California, Texas, Virginia, New York, and Minnesota have enacted deepfake-specific criminal or civil statutes. Many existing revenge porn laws also cover synthetic content.
Platform policies: TikTok, Instagram, Facebook, and YouTube all explicitly ban synthetic intimate imagery. Platform violation is often the fastest removal path.
If You Know Who Created It
In many cases, the creator is someone you know — an ex, a classmate, a coworker. If you have any idea who made the deepfake:
Don't confront them directly. Confrontation tips them off and gives them time to delete evidence. Document what you know and involve law enforcement first.
When you file a police report, provide all evidence: the content itself, your original photos that were used as source material, and any communications or social connections suggesting the creator's identity.
Consider a civil attorney as well. You may have grounds for a lawsuit under harassment, defamation, intentional infliction of emotional distress, or state-specific deepfake laws. Many attorneys in this space offer free consultations.
Protecting Yourself From Future Deepfakes
You can't fully prevent deepfakes — anyone with public photos is a potential target. But you can make yourself a harder target and set up early detection:
- Limit source material. Deepfakes need clear, front-facing photos to work well. Consider limiting the number of high-resolution face photos publicly available on your social media.
- Monitor for your likeness. Reverse image search alerts, Google Alerts for your name, and professional monitoring services can catch deepfakes early, before they spread to additional platforms.
- Hash proactively. You don't have to wait for content to appear. Submitting hashes of sensitive photos in advance means participating platforms can block matching content if it surfaces.
Sources & Citations
1. Home Security Heroes (2023): study finding a 550% increase in deepfake videos online since 2019, with 98% being pornographic.
2. Sensity AI (formerly DeepTrace): research on the proliferation of deepfake creation tools and non-consensual pornography.
3. TAKE IT DOWN Act (S. 4569): passed the U.S. Senate, requiring platforms to remove non-consensual intimate images, including AI-generated content, within 48 hours. U.S. Congress.