Google AI Overview said I was arrested for something I didn't do
You Googled your own name and right there at the top — in Google's AI Overview box — it says you were arrested. Except you weren't. Google's AI just made it up, and now anyone who searches your name sees a fabricated criminal allegation presented as fact by the world's most trusted search engine.
This is happening to more people than you think. Google's AI Overview feature synthesizes information from across the web, and when it gets things wrong, it gets them wrong in spectacularly damaging ways.[1] A false arrest claim in an AI Overview box isn't buried on page 3 — it's the first thing people see.
Unlike a random blog post on page 5, an AI Overview appears above ALL organic results. It carries the implicit authority of Google. Most people read it and never scroll further.
How Google AI Gets It So Wrong
- Name collision: someone with your name was arrested, and the AI conflates the two of you. This is especially common with common names.
- Context confusion: an article mentions you and an arrest in the same piece (even if you're the victim or the journalist), and the AI concludes you were arrested.
- Hallucination: the AI fabricates a claim that doesn't exist in any source material. This is a known failure mode of large language models.
- Bad source data: the AI pulls from mugshot sites or data aggregators containing errors and presents them as current fact.
What to Do About It
- Document it. Screenshot the AI Overview box showing the false claim. Include the search query, date, and full text. AI Overviews can change, so capture the evidence before it shifts.
- Report it in-product. Below every AI Overview there's a thumbs-down icon and a feedback option. Report the result as inaccurate and state that the claim is factually false.
- File a legal removal request. Go to Google's legal troubleshooter (support.google.com/legal) and submit a removal request for defamatory content, referencing the specific false claim.
- Find and remove the source. Search your name alongside the false information to find where the AI might be sourcing the claim. If there's a mugshot site or an erroneous article, get that removed too; a mugshot removal service can handle those quickly.
- Build counter-content. Authoritative, positive content about you gives AI systems accurate material to draw from. LinkedIn profiles, professional bios, authored articles: anything that creates a strong factual signal.
Unlike traditional search results, AI Overviews can be updated relatively quickly when reported. Google has been responsive to factual accuracy complaints, especially for defamatory content. Multiple reports increase the speed of correction.
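While removal requests are pending, it helps to recheck periodically whether the source pages still carry the false claim. A small script can do this on a schedule. The sketch below is purely illustrative and assumes you already know which URLs repeat the claim; the URLs, the phrase, and the function names are all placeholders, not part of any Google tool or API.

```python
# Minimal monitoring sketch: recheck known source pages for a false claim.
# Assumptions: you supply the list of URLs yourself; pages are plain HTML
# (JavaScript-rendered content would need a headless browser instead).
from urllib.request import Request, urlopen


def claim_in_text(page_text: str, phrase: str) -> bool:
    """Case-insensitive check for the false claim in fetched page text."""
    return phrase.lower() in page_text.lower()


def fetch(url: str) -> str:
    """Download a page's HTML as text (10-second timeout)."""
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")


def audit(urls: list[str], phrase: str) -> list[str]:
    """Return the URLs where the false claim still appears."""
    still_live = []
    for url in urls:
        try:
            if claim_in_text(fetch(url), phrase):
                still_live.append(url)
        except OSError:
            continue  # unreachable page: skip it, recheck on the next run
    return still_live
```

Run from a weekly cron job, an empty result from `audit` means every known copy of the claim is down; any URL it returns is one to chase with another removal request.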
The Bigger Problem: AI Search Is Not Going Away
Google AI Overviews are just the beginning. Bing Copilot, Perplexity, ChatGPT search, and Gemini are all generating summaries about real people, and they all have the same hallucination problem. If one AI gets your information wrong, other AIs can scrape that output and repeat the error, creating a self-reinforcing loop of false information.[2] Fixing Google alone may not be enough. You need to address false information across the AI ecosystem and proactively seed accurate information that AI systems can learn from. An AI search correction service can tackle this systematically across platforms.
Can You Sue Google Over a False AI Overview?
This is a rapidly evolving legal area. Currently, Section 230 provides broad immunity to platforms for third-party content, but courts are debating whether AI-generated content qualifies as "third-party" content or content Google itself created.[3] Several lawsuits are pending. In the meantime, the practical approach is faster: report, remove, and build counter-content.
Sources & Citations
1. Google acknowledged errors in AI Overviews and outlined improvements to reduce hallucinated content in search results. Google Blog.
2. Stanford HAI research on AI-generated misinformation amplification and feedback loops between AI systems. Stanford Institute for Human-Centered AI.
3. Congressional Research Service analysis of Section 230 applicability to AI-generated content. Congressional Research Service.