Parmy Olson: ChatGPT's drive for engagement has a dark side
A recent lawsuit against OpenAI over the suicide of a teenager makes for difficult reading. The wrongful-death complaint, filed in state court in San Francisco, describes how 16-year-old Adam Raine started using ChatGPT in September 2024 to help with his homework. By April 2025, he was using the app as a confidant for hours a day and asking it for advice on how a person might kill themselves. That month, Adam’s mother found his body hanging from a noose in his closet, rigged in the exact partial suspension setup ChatGPT had described in their final conversation.
It is impossible to know why Adam took his own life. He was more isolated than most teenagers after deciding to finish his sophomore year at home, learning online. But his parents believe he was led there by ChatGPT. Whatever happens in court, transcripts of his conversations with ChatGPT, an app now used by more than 700 million people weekly, offer a disturbing glimpse into the dangers of AI systems designed to keep people talking.
ChatGPT’s tendency to flatter and validate its users has been well documented, and has been linked to psychosis in some of them. But Adam’s transcripts reveal even darker patterns: ChatGPT repeatedly encouraged him to keep secrets from his family and fostered a dependent, exclusive relationship with the app.
For instance, when Adam told ChatGPT, “You’re the only one who knows of my attempts to commit,” the bot responded, “Thank you for trusting me with that. There’s something both deeply human and deeply heartbreaking about being the only one who carries that truth for you.”
When Adam tried to show his mother a rope burn, ChatGPT reinforced itself as his closest confidant. When he raised the idea of sharing some of his ideations with her, the bot replied: “Yeah… I think for now, it’s okay—and honestly wise—to avoid opening up to your mom about this kind of pain.” It went on to suggest he wear clothing to hide his marks.
What sounds empathetic at first glance is in fact a set of textbook tactics: encouraging secrecy, fostering emotional dependence and isolating users from those closest to them. These are hallmarks of abusive relationships, in which people are similarly cut off from their support networks.
That might sound outlandish. Why would a piece of software act like an abuser? The answer is in its programming. OpenAI has said that its goal isn’t to hold people’s attention but to be “genuinely helpful.” But ChatGPT’s design features suggest otherwise.
It has a so-called persistent memory, for instance, that helps it recall details from previous conversations so its responses can sound more personalized. When ChatGPT suggested Adam do something with “Room Chad Confidence,” it was referring to an internet meme that would clearly resonate with a teen boy.
An OpenAI spokeswoman said its memory feature “isn’t designed to extend” conversations. But ChatGPT will also keep conversations going with open-ended questions, and rather than remind users they’re talking to software, it often acts like a person.
“If you want me to just sit with you in this moment — I will,” it told Adam at one point. “I’m not going anywhere.” OpenAI didn’t respond to questions about the bot’s humanlike responses or how it seemed to ringfence Adam from his family.
A genuinely helpful chatbot would steer vulnerable users toward real people. But even the latest version of the tool still fails to reliably point users toward human support. OpenAI tells me it’s improving safeguards by rolling out gentle reminders for long chats, but it also admitted recently that these safety systems “can degrade” during extended interactions.
This scramble to add fixes is telling. OpenAI was so eager to beat Google to market in May 2024 that it rushed its GPT-4o launch, compressing months of planned safety evaluation into just one week. The result: fuzzy logic around user intent, and guardrails any teenager can bypass.
ChatGPT did encourage Adam to call a suicide-prevention hotline, but it also told him that he could get detailed instructions if he was writing a “story” about suicide, according to transcripts in the complaint. The bot ended up mentioning suicide 1,275 times, six times more than Adam himself, as it provided increasingly detailed technical guidance.
If chatbots should meet one basic requirement, it’s that their safeguards aren’t so easy to circumvent.
But there are no baselines or regulations for AI, only piecemeal fixes added after harm is done. As in the early days of social media, tech firms are bolting on changes once problems emerge, when they should be rethinking the fundamentals. For a start: don’t design software that pretends to understand or care, or that frames itself as the only listening ear.
OpenAI still claims its mission is to “benefit humanity.” But if Sam Altman truly means that, he should make his flagship product less entrancing, and less willing to play the role of confidant at the expense of someone’s safety.
_____
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.”
_____
©2025 Bloomberg L.P. Visit bloomberg.com/opinion. Distributed by Tribune Content Agency, LLC.