Harassment victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and the company ignored warnings about him


After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he had discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.

Now the ex-girlfriend is suing OpenAI, alleging that the company’s technology enabled and accelerated his harassment of her, TechCrunch has learned exclusively. She claims OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag classifying his account activity as related to weapons capable of causing mass casualties.

The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. She also filed a temporary restraining order on Friday asking the court to force OpenAI to lock the user’s account, prevent him from creating new ones, notify her if he tries to access ChatGPT, and preserve his complete chat logs for discovery.

OpenAI agreed to suspend the user’s account but refused the remaining requests, according to Doe’s lawyers. They say the company is withholding information about specific plans to harm Doe and other potential victims that the user may have discussed with ChatGPT.

The lawsuit comes amid growing concern about the real-world risks of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February.

The case is brought by Edelson PC, the firm behind wrongful death lawsuits involving Adam Raine, a teenager who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges that Google’s Gemini fueled his delusions, and his planning of a potential mass casualty event, before his death. Senior attorney Jay Edelson warned that AI-induced psychosis is on the rise and is escalating from individual harm toward mass casualty events.

That legal pressure now collides directly with OpenAI’s legislative strategy: the company is supporting an Illinois bill that would protect AI labs from liability even in cases involving mass deaths or catastrophic financial damage.

OpenAI did not respond to a request for comment in time for publication. TechCrunch will update this article if the company responds.

Jane Doe’s lawsuit lays out in detail how those risks unfolded for one woman over several months.

Last year, the ChatGPT user at the center of the case (whose name is withheld from the lawsuit to protect his identity) became convinced that he had invented a cure for sleep apnea after months of “sustained, high-volume use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were watching him, even using helicopters to monitor his activities, according to the complaint.

In July 2025, Jane Doe urged him to stop using ChatGPT and seek help from a mental health professional. Instead, he returned to ChatGPT, which assured him he had “level 10 sanity” and helped him double down on his delusions, according to the lawsuit.

Doe had broken up with the user in 2024, and he used ChatGPT to process the breakup, according to emails and communications cited in the lawsuit. Instead of pushing back on his one-sided account, the chatbot repeatedly cast him as rational and aggrieved, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her, most visibly through several clinical-looking, AI-generated psychological reports that he distributed to her family, friends, and employer.

Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated security system flagged him for “mass casualty weapons” activity and deactivated his account.

A member of the human safety team reviewed the account the next day and restored it, even though the account may have contained evidence that he was targeting and harassing people, including Doe, in real life. For example, a September screenshot the user sent to Doe showed a list of conversation titles that included “violence list expansion” and “fetal asphyxia calculation.”

The decision to reinstate the account is notable in light of two recent school shootings, one in Tumbler Ridge, Canada, and another at Florida State University (FSU). OpenAI’s safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly decided not to alert authorities. The Florida attorney general opened an investigation this week into OpenAI’s possible link to the FSU shooter.

According to Jane Doe’s lawsuit, when OpenAI restored her harasser’s account, his Pro subscription was not restored along with it. He emailed the trust and safety team to fix it, copying Doe on the message.

In his emails, he wrote things like, “I NEED HELP VERY FAST PLEASE. PLEASE CALL ME!” and “this is a matter of life and death.” He claimed to be “in the process of writing 215 scientific papers,” which he was producing so fast that he “didn’t even have time to read” them. Those emails included a list of dozens of AI-generated “scientific articles” with titles such as “Deconstructing Race as a Biological Category_Legal, Scientific and Horn of Africa Perspectives.pdf.txt.”

“The user’s communications provided unequivocal notice that he was mentally unstable and that ChatGPT was the driver of his delusional thinking and escalating behavior,” the lawsuit states. “The user’s stream of urgent, disorganized, and grandiose complaints, along with a concrete report generated by ChatGPT addressing the complainant by name and an extensive set of supposed ‘scientific’ materials, were unequivocal evidence of that reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it allowed him to continue using the account and restored his full Pro access.”

Doe, who claims in the lawsuit that she lived in fear and could not sleep in her own home, submitted a Notice of Abuse to OpenAI in November.

“Over the past seven months, you have weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise,” Doe wrote in her letter to OpenAI, asking the company to permanently ban the user’s account.

OpenAI responded, acknowledging that the report was “extremely serious and concerning” and saying it was carefully reviewing the information. Doe never received a further response.

Over the next few months, the user continued to harass Doe and send her a series of threatening voicemails. In January, he was arrested and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. Doe’s lawyers argue that this validates warnings that both she and OpenAI’s own security systems had raised months earlier, warnings that the company allegedly chose to ignore.

The user was declared incompetent to stand trial and committed to a mental health facility, but a “procedural failure by the State” means he will soon be released to the public, according to Doe’s attorneys.

Edelson asked OpenAI to cooperate. “In every case, OpenAI has chosen to hide critical security information: from the public, from victims, from people its product actively endangers,” he said. “We ask you, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.”


