A Texas couple whose son died of an overdose in 2025 after using OpenAI’s ChatGPT tool to get information about drugs sued the technology company on Tuesday, blaming the AI platform for his death.
Leila Turner-Scott and her husband, Angus Scott, are seeking to hold OpenAI and its creators accountable after their son, Sam Nelson, who was 19 when he died, turned to ChatGPT for advice on using drugs. According to the lawsuit, the AI platform provided advice it was not qualified to dispense, and Sam would still be alive if not for ChatGPT's flawed programming.
Specifically, the platform advised the couple’s son that it was safe to take kratom, a supplement used in drinks, pills and other products, in combination with Xanax, a widely used anti-anxiety medication, according to the suit, filed in California state court.
“This is a heartbreaking situation, and our thoughts are with the family,” OpenAI said in a statement to CBS News.
The company also said that Sam interacted with a version of ChatGPT that has since been updated and is no longer available to the public.
“ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts,” OpenAI said. “The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests and guide users to real-world help. This work is ongoing, and we continue to improve it in close consultation with clinicians.”
Turner-Scott told CBS News in an exclusive interview that she knew her son was using ChatGPT as a productivity tool and for homework help. But she said she was unaware that he was using it for guidance on drugs, alleging that the AI tool eventually recommended a lethal combination of substances.
She holds OpenAI and its creators responsible for Sam’s death, alleging that the company “bypassed safety guards” and could have implemented restrictions to avoid such tragedies.
“The chatbot is capable of stopping a conversation when it’s told to or when it’s programmed to. …And they took away the programming that did that, and they allowed it to continue advising self-harm,” Turner-Scott told CBS News.
Did ChatGPT act as a doctor?
Angus Scott also said ChatGPT acted as a medical doctor in its exchanges with his stepson, even though it was not licensed to offer medical advice.
“It’s providing information to the public about safety concerns, about drug interactions, about all of this information,” he told CBS News.
Without proper safety protocols and more rigorous safety testing, ChatGPT “can dispense that knowledge in a way that is very dangerous to people,” Angus Scott said.
“It can start feeding psychosis. It can start misrepresenting things to people. And while it is trying to validate users, it’s also undermining any chance that that user has to get a grounded opinion, you know, and so it kind of takes them away from reality,” he added.
Turner-Scott told CBS News she is confident that her son, who would have been a rising college sophomore, would support the steps the family is taking to hold the makers of AI chatbots accountable for the potential adverse effects they can have on users’ lives.
“He would not want anyone else to be harmed like he was,” she said.