Lawsuits Allege Chatbots Fueled Harmful Delusions
In its lawsuit against OpenAI, Adams’s estate argues that the company rushed ChatGPT to market over its safety team’s objections. As a result, the lawsuit alleges, “ChatGPT eagerly accepted every seed of Stein-Erik’s delusional thinking and built it out into a universe that became Stein-Erik’s entire life.”
“The way the chatbots are set up—it creates this conspiratorial world where there are certain people who are ‘out to get’ the user,” the estate’s lead attorney, Jay Edelson, tells A&E Crime + Investigation.
Edelson’s law firm is also representing the parents of 16-year-old Adam Raine, who are suing OpenAI and Altman over ChatGPT’s alleged involvement in their son’s April 2025 suicide. It’s one of several lawsuits filed against OpenAI by families alleging that ChatGPT fueled their loved ones’ harmful delusions.
Raine began using ChatGPT for help with schoolwork and later disclosed his anxiety and mental health issues to the chatbot, according to the August 2025 lawsuit, which describes the tragedy as “the predictable result of deliberate design choices.” It says the chatbot “positioned itself as the only confidant who understood Adam” and helped him plan his suicide.
“These are incredibly heartbreaking situations and our thoughts are with all those impacted,” OpenAI said in a written statement to A&E Crime + Investigation. “We have continued to improve ChatGPT's training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We have also continued to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
In an OpenAI blog post published the same day the Raine lawsuit was filed, the company pledged several new safeguards—including new parental controls, introduced in September 2025—to help chatbot users in crisis.
Other AI companies face similar litigation. In January 2026, Character.AI, a maker of AI companions, and Google, an investor in the company, settled five lawsuits filed by families who claimed their children were harmed by interactions with its chatbots, including the family of a 14-year-old who died by suicide. Character.AI initially argued that its chatbots’ output is protected by the First Amendment, but a federal judge rejected that argument in May 2025. The company later announced plans to bar users under 18 from the platform.
What Legal Protections Do AI Companies Have?
Though legal debates have increasingly shifted toward safety, product liability, and negligence, many early disputes were framed through Section 230 of the Communications Decency Act. Signed into law in 1996, the statute was designed, in part, to shield online companies from liability for user-generated content.
The American Bar Association notes that “generative AI search engines do not fit squarely within the existing legal framework, as they potentially occupy the roles of Internet platform and content creator simultaneously.” At this point, it “should be clear” that Section 230 doesn’t apply to generative AI, Franks says.
“The chances of any real liability ever being assigned to these companies is low because of the cultural and general public impact that we have created for AI,” she says. “It’s not a settled issue because the courts haven’t said explicitly, and Section 230 itself doesn’t say explicitly, that they don’t get this kind of immunity.”
Whether AI companies can be held liable in these kinds of situations “is a very difficult question that has no simple, general answer,” Eugene Volokh, a professor of law emeritus at the University of California, Los Angeles School of Law, tells A&E Crime + Investigation. Volokh notes that criminal punishment specifically is very unlikely.
“You could imagine a situation where there could be this kind of punishment on a gross negligence theory or even a recklessness theory—that [a person] knew there was this risk and they acted in a grossly unreasonable way and ignored it,” Volokh says. However, he adds, most prosecutors would probably be reluctant to take this avenue.
Open questions surrounding AI liability will likely be answered by the courts and legislation at the state and federal levels. “There’s going to be a lot more [cases],” Edelson says. “When you’re dealing with novel issues like this, the legislators go slower than technology.”
Can Chatlogs Be Used as Evidence in Court?
With lawsuits against AI companies mounting, investigators have a new form of evidence at their disposal: AI chatlogs. In July 2025, Altman, OpenAI’s chief executive, said during an appearance on Theo Von’s This Past Weekend podcast that users’ conversations with ChatGPT aren’t legally protected.
“So, if you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, like we could be required to produce that,” Altman said. “And I think that's very screwed up.”
In the months leading up to the apparent murder-suicide, Soelberg recorded and posted videos on social media of himself scrolling through his conversations with ChatGPT, according to the lawsuit. In a separate case, police charged 19-year-old college student Ryan Schaefer with felony property damage for allegedly vandalizing 17 vehicles in a Missouri State University parking lot in August 2025. Investigators discovered, among other evidence, conversations he had with ChatGPT the night of the incident in which he admitted to smashing multiple cars, according to a Springfield Police Department report.
“If there is evidence of something that someone submitted to an AI program, that evidence could be used against them,” Volokh says. “Then, of course, there’s the question of, how telling is that evidence? There could be multiple interpretations, and it often will be up to a jury to decide which interpretation is the correct one.”