
Can AI Companies Be Held Liable for Crimes Their Users Commit?

An apparent murder-suicide in Connecticut in August 2025—among other events—has raised questions about AI's culpability in crimes.

Published: February 25, 2026 | Last Updated: February 27, 2026

In August 2025, the bodies of 83-year-old Suzanne Adams and her son, Stein-Erik Soelberg, 56, were discovered in the Old Greenwich, Conn., home where they both lived. While investigating what appeared to be a murder-suicide by Soelberg, investigators learned that the former tech industry worker had a history of mental instability and that, several months earlier, he had begun expressing paranoid delusions to ChatGPT, the popular AI chatbot developed by OpenAI.

Soelberg routinely expressed his beliefs that residents of his hometown, including his own mother, were planning surveillance campaigns and assassination plots against him. According to a high-profile lawsuit filed in December 2025 by Adams’s estate against OpenAI, its CEO Sam Altman and Microsoft, a major investor in the company, ChatGPT affirmed and intensified Soelberg’s delusions, ultimately leading to the two deaths.

This case is the first to link AI to an alleged murder, though a slew of other civil lawsuits filed against major AI companies have claimed that poorly designed chatbots have encouraged vulnerable users to take their own lives or otherwise act violently.

But can AI companies be held liable in such cases? “I do think this is why this litigation is being so closely watched, because I don’t think the answers are that obvious,” Mary Anne Franks, professor of intellectual property, technology and civil rights law at The George Washington University Law School, tells A&E Crime + Investigation. “A lot will turn on these major-impact cases that are going to the courts first.”

Lawsuits Allege Chatbots Fueled Harmful Delusions

In its lawsuit against OpenAI, Adams’s estate argues that the company rushed ChatGPT to market over its safety team’s objections. As a result, the lawsuit alleges, “ChatGPT eagerly accepted every seed of Stein-Erik’s delusional thinking and built it out into a universe that became Stein-Erik’s entire life.”

“The way the chatbots are set up—it creates this conspiratorial world where there are certain people who are ‘out to get’ the user,” the estate’s lead attorney, Jay Edelson, tells A&E Crime + Investigation.

Edelson’s law firm is also representing the parents of 16-year-old Adam Raine, who are suing OpenAI and Altman over ChatGPT’s alleged involvement in their son’s April 2025 suicide. It’s one of several lawsuits filed against OpenAI by families alleging that ChatGPT fueled their loved ones’ harmful delusions.

Raine began using ChatGPT for help with schoolwork and later disclosed his anxiety and mental health issues to the chatbot, according to the August 2025 lawsuit, which describes the tragedy as “the predictable result of deliberate design choices.” It says the chatbot “positioned itself as the only confidant who understood Adam” and helped him plan his suicide.

“These are incredibly heartbreaking situations and our thoughts are with all those impacted,” OpenAI said in a written statement to A&E Crime + Investigation. “We have continued to improve ChatGPT's training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We have also continued to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.” 

In an OpenAI blog post published the same day the Raine lawsuit was filed, the company pledged several new safeguards—including new parental controls, introduced in September 2025—to help chatbot users in crisis.

Other AI companies face similar litigation. In January 2026, Character.AI, a maker of AI companions, and Google, an investor, settled five lawsuits filed by families who claimed their children were harmed by interactions with its chatbots, including a 14-year-old who died by suicide. The developers of Character.AI initially claimed its chatbots are protected by the First Amendment, but a federal judge rejected that argument in May 2025. The company later announced plans to ban children under 18 years old from the platform.

Though conversations have increasingly shifted to safety, product liability and negligence, many early debates were largely framed through Section 230 of the Communications Decency Act. Signed into law in 1996, the legislation was originally designed, in part, to protect online companies from liability around user-generated content.

The American Bar Association notes that “generative AI search engines do not fit squarely within the existing legal framework, as they potentially occupy the roles of Internet platform and content creator simultaneously.” At this point, it “should be clear” that Section 230 doesn’t apply to generative AI, Franks says. 

“The chances of any real liability ever being assigned to these companies is low because of the cultural and general public impact that we have created for AI,” she says. “It’s not a settled issue because the courts haven’t said explicitly, and Section 230 itself doesn’t say explicitly, that they don’t get this kind of immunity.”

Whether AI companies can be held liable in these kinds of situations “is a very difficult question that has no simple, general answer,” Eugene Volokh, a professor of law emeritus at the University of California, Los Angeles School of Law, tells A&E Crime + Investigation. Volokh notes that criminal punishment specifically is very unlikely.

“You could imagine a situation where there could be this kind of punishment on a gross negligence theory or even a recklessness theory—that [a person] knew there was this risk and they acted in a grossly unreasonable way and ignored it,” Volokh says. However, he adds, most prosecutors would probably be reluctant to take this avenue.

Open questions surrounding AI liability will likely be answered by the courts and legislation at the state and federal levels. “There’s going to be a lot more [cases],” Edelson says. “When you’re dealing with novel issues like this, the legislators go slower than technology.”

Can Chatlogs Be Used as Evidence in Court?

With lawsuits against AI companies mounting, investigators have a new form of evidence at their disposal: AI chatlogs. In July 2025, Altman said during an appearance on Theo Von’s This Past Weekend podcast that users’ conversations with ChatGPT aren’t legally protected.

“So, if you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, like we could be required to produce that,” Altman said. “And I think that's very screwed up.”

In the months leading up to the apparent murder-suicide, Soelberg recorded and publicly posted videos of himself scrolling through his conversations with ChatGPT on social media, according to the lawsuit. In a separate case, police charged 19-year-old college student Ryan Schaefer with felony property damage for allegedly vandalizing 17 vehicles in a Missouri State University parking lot in August 2025. Among other evidence, investigators discovered conversations he had with ChatGPT the night of the incident in which he admitted to smashing multiple cars, according to a Springfield Police Department report.

“If there is evidence of something that someone submitted to an AI program, that evidence could be used against them,” Volokh says. “Then, of course, there’s the question of, how telling is that evidence? There could be multiple interpretations, and it often will be up to a jury to decide which interpretation is the correct one."


About the author

Jordan Friedman

Jordan Friedman is a New York-based writer and editor specializing in history. Jordan was previously an editor at U.S. News & World Report, and his work has also appeared in publications including National Geographic, Fortune Magazine, and USA TODAY.

