MyTakedown
🔒 Privacy

Google AI Overview said I was arrested for something I didn't do

Google AI generating false criminal allegations in search results

5 min read · Updated Mar 2026

You Googled your own name and right there at the top — in Google's AI Overview box — it says you were arrested. Except you weren't. Google's AI just made it up, and now anyone who searches your name sees a fabricated criminal allegation presented as fact by the world's most trusted search engine.

This is happening to more people than you think. Google's AI Overview feature synthesizes information from across the web, and when it gets things wrong, it gets them wrong in spectacularly damaging ways.1 A false arrest claim in an AI Overview box isn't buried on page 3 — it's the first thing people see.

🚨
AI Overviews sit at the very top of search results

Unlike a random blog post on page 5, an AI Overview appears above ALL organic results. It carries the implicit authority of Google. Most people read it and never scroll further.

How Google's AI Gets It So Wrong

1. Name collision. Someone with your name was arrested, and the AI conflates the two of you. This is especially likely with common names.

2. Context misreading. An article mentions you and an arrest in the same piece (even if you're the victim or the journalist), and the AI concludes you were arrested.

3. Source hallucination. The AI fabricates a claim that doesn't exist in any source material; this is a known failure mode of large language models.

4. Outdated or incorrect sources. The AI pulls from mugshot sites or data aggregators containing errors and presents their claims as current fact.

40%: AI Overview responses that contained errors in a Stanford study
#1: position on the Google results page, above all organic results
0: ways to edit an AI Overview yourself

What to Do About It

Your action plan
1. Document the AI Overview immediately. Screenshot the AI Overview box showing the false claim, and record the search query, the date, and the full text. AI Overviews can change, so capture the evidence before it shifts.

2. Click "Report" on the AI Overview. Below every AI Overview there is a thumbs-down icon and a feedback option. Report the result as inaccurate and state that the claim is factually false.

3. Submit a Google legal removal request. Go to Google's legal troubleshooter (support.google.com/legal) and submit a removal request for defamatory content, referencing the specific false claim.

4. Identify and address the source material. Search for the false information across Google to find where the AI might be sourcing the claim. If there's a mugshot site or an erroneous article, get that removed too; a mugshot removal service can handle those quickly.

5. Build counter-content. Authoritative, positive content about you (LinkedIn profiles, professional bios, authored articles) gives AI systems strong, accurate source material to draw on and creates a clear factual signal around your name.
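To supplement step 1, a timestamped log with a cryptographic hash can make your screenshots harder to dispute later: if anyone questions whether a capture was altered, re-hashing the file and comparing it to the logged digest shows it is unchanged. Here is a minimal sketch (the function name, file paths, and log format are illustrative, not a required tool):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evidence(screenshot_path: str, query: str, overview_text: str,
                 log_file: str = "ai_overview_evidence.jsonl") -> dict:
    """Record a screenshot of a false AI Overview with a UTC timestamp
    and a SHA-256 digest, so the capture can later be shown unaltered."""
    data = Path(screenshot_path).read_bytes()
    entry = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "search_query": query,            # exact query you searched
        "overview_text": overview_text,   # full text of the false claim
        "screenshot": screenshot_path,
        "sha256": hashlib.sha256(data).hexdigest(),
    }
    # Append one JSON record per capture, so repeated captures build a timeline.
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Capturing the same query on several dates builds a timeline showing how long the false claim stayed up, which is useful if the matter ever reaches a lawyer.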

💡
Google's feedback mechanism actually works for AI Overviews

Unlike traditional search results, AI Overviews can be updated relatively quickly when reported. Google has been responsive to factual accuracy complaints, especially for defamatory content. Multiple reports increase the speed of correction.

We specialize in correcting false AI-generated search results. Free assessment of your situation.
Fix Your AI Search Results

The Bigger Problem: AI Search Is Not Going Away

Google AI Overviews are just the beginning. Bing Copilot, Perplexity, ChatGPT search, and Gemini all generate summaries about real people, and they all share the same hallucination problem. If one AI gets your information wrong, other AIs can scrape that output and repeat the error, creating a self-reinforcing loop of false information.2

Fixing Google alone may not be enough. You need to address false information across the AI ecosystem and proactively seed accurate information that AI systems can learn from. An AI search correction service can tackle this systematically across platforms.

[Diagram: false information from one AI system is indexed and picked up by others, amplifying the error in a circular feedback loop]

Can You Sue Google Over a False AI Overview?

This is a rapidly evolving legal area. Currently, Section 230 provides broad immunity to platforms for third-party content. But courts are debating whether AI-generated content qualifies as "third-party" or content Google itself created.3 Several lawsuits are pending. In the meantime, the practical approach is faster: report, remove, and build counter-content.

Before
Google AI Overview falsely states you were arrested. Every person who Googles your name sees a fabricated criminal record.
After
False AI Overview corrected, source material removed, professional content dominating search results. Your name tells your real story.

Free Resource
AI Search Reputation Audit
We check what Google AI Overview, ChatGPT, Perplexity, Gemini, and Bing Copilot say about you — and flag any false or damaging claims across all platforms.
Get Your Free Audit

Sources & Citations

  1. Google acknowledged errors in AI Overviews and outlined improvements to reduce hallucinated content in search results. Google Blog
  2. Stanford HAI research on AI-generated misinformation amplification and feedback loops between AI systems. Stanford Institute for Human-Centered AI
  3. Congressional Research Service analysis of Section 230 applicability to AI-generated content. Congressional Research Service

Still need help?

Talk to Our Team →