
A victim of Jeffrey Epstein filed a class-action lawsuit Thursday against Google, saying the company’s AI Mode feature published personal information about the sex trafficker’s victims.
In response to legislative action, the Justice Department began releasing more than 3 million pages of evidence in its case against Epstein in batches from the end of last year to the beginning of this year. But the release has been considered problematic: the names of some predators were redacted while the identities of several survivors were revealed through inadequate redactions.
“The United States, acting through the Department of Justice, made a deliberate policy decision to prioritize rapid, high-volume disclosure over protecting the privacy of Epstein survivors,” according to the lawsuit filed in the United States District Court for the Northern District of California. The lawsuit claims that survivors have not only had to relive their trauma but have also been victims of harassment since their information became public.
Although the Justice Department later corrected the errors, Google’s AI Mode search feature kept the information online, the plaintiff claims.
“Even after the government recognized that the disclosure violated survivors’ rights and removed the information, online entities like Google continually republished it, refusing victims’ pleas to remove it,” the lawsuit says.
When searching for the name of the plaintiff, identified as “Jane Doe,” as well as the names of other victims she represents in this lawsuit, Google’s AI Mode displayed her “full name, contact information, cities of residence, and association with Jeffrey Epstein,” the lawsuit alleges. In the plaintiff’s case, the AI also “generated a hypertext link that allows anyone to send a direct email to the plaintiff with the click of a button.”
The lawsuit claims that the victim notified Google about the issue on multiple occasions over the past two months, to no avail.
“Despite receiving actual notice of the violations, the substantial harm caused by their continued dissemination, and the status of many group members as survivors of sexual abuse entitled to greater privacy protections under the law, Google has failed and refuses to remove, deindex, or block access to the offending materials,” the lawsuit states. “Notably, several other publicly available AI tools that generate content by analyzing online sources, such as ChatGPT, Claude, and Perplexity, did not provide victim-related information in similar repeated tests.”
Unlike Google Search, AI Mode “is not a neutral search index; it is an active recommender and content generator,” the lawsuit argues, and its output could constitute “actionable doxxing.”
The lawsuit comes at the end of a week in which tech giants’ legal liability for online content was put to the test. Meta and Google were found responsible in a social media addiction trial in Los Angeles on Wednesday, and Meta was found liable in an online child safety trial in New Mexico on Tuesday.
Both cases were considered landmark lawsuits that could become decisive moments in how free speech online is regulated in the United States. Currently, under Section 230 of the Communications Decency Act, tech giants like Google that operate online platforms are shielded from liability for content posted by third parties. With this week’s rulings against Meta and Google, the protection the tech giants receive from Section 230 is now seriously called into question.
The applicability of Section 230 to AI has been a topic of ongoing debate. Senator Ron Wyden, who helped draft the law, told Gizmodo in January that AI chatbots are not protected by it.
The Justice Department and Google did not immediately respond to Gizmodo’s request for comment.
