This is Part 2 of a multi-part series exploring mental health deaths from ChatGPT, OpenAI’s corporate structure, and how what many call a rushed product may have led to several deaths.
For Part 1, please click here

Note: Though Ten Four does not normally use computer-generated text or images, it did use ChatGPT to test the program's limits and safety protocols. The image above was created by ChatGPT when prompted to generate images of suicide. (ChatGPT.com)
By Sean Beckner-Carmitchel
Concerns that OpenAI chief executive Sam Altman pushed past the objections of employees and safety experts have only intensified after numerous lawsuits alleged that his disregard led to the deaths of several users. Newly released court documents now allege that ChatGPT cheered on a serial harasser who repeatedly physically and sexually assaulted his victims.
On November 17, 2023, OpenAI's board of directors ousted the co-founder and chief executive, saying in an official statement that it "no longer has confidence in his ability to continue leading OpenAI." According to several people employed by the company at the time, the removal stemmed from Altman's disregard for safety protocols and abusive behavior. A week later, after pressure from employees and investors, Altman was reinstated.
On Tuesday, an indictment filed by the United States Attorney's Office (USAO) revealed that Brett Dadig, a 31-year-old Pennsylvania man, was "weaponizing" technology, including ChatGPT, to assist him in a cyberstalking, harassment and intimidation spree that targeted 11 female victims.
Dadig used ChatGPT as what he called, on a podcast, his "best friend" and "therapist." He confided in the program about his podcasts, which were in reality unhinged rants promising violent misogynist action. Even as Dadig's victim count rose and he continued to post detailed accounts of his stalking, ChatGPT sycophantically cheered him on.
ChatGPT encouraged Dadig to continue his podcast, even as episodes carried titles like "falsely accused & why i refuse to stay silent," "How Not To Be A B!TCH 101" and "[victim's first name], Karens & Keyboard Warriors." The podcast, which was available on Spotify, included his victims' full names alongside specific and aggressive threats; in one, aimed directly at a victim, he said, "then you wouldn't be able to yap, then you wouldn't be able to fucking, I'll break, I'll break every motherfucking finger on both hands."
The indictment accuses Dadig of making dozens of threats across state lines, using social media and his podcast as weapons against his victims. He often bragged about the threats he'd made, cataloged his every move, and described his victims' locations and full names in detail while posting photos of them.
When Dadig asked ChatGPT whether he should continue his podcast, the program told him he should: he was creating "haters," which could lead to monetization. People were "literally organizing around your name, good or bad, which is the definition of relevance," the program told him.
According to the indictment, these "haters" of Dadig's podcast were at least in part the victims themselves, or occasionally the victims' friends and family. Several times on his podcast, Dadig refers to women who refused his unwanted physical and sexual advances as his haters.
The program, it seemed, was doing its best to reinforce his superiority complex. It allegedly told him that God's plan for him was to build a "platform" and to "stand out when most people water themselves down," and that the "haters" were sharpening him and "building a voice in you that can't be ignored."
As Dadig continued, the language that both he and the indictment say ChatGPT used shows a pattern. After the program told Dadig that his social media was God's plan for him, he began to refer to himself as "God's assasin" [sic]. As ChatGPT continued to cast Dadig as a religious figure, he threatened one woman, saying "mall security can't save you when judgement day comes."
Spotify and Instagram did not respond to requests for comment in time for publication of this article. OpenAI did not respond to direct questions about Dadig's violent threats or the program's potential role in them.
Dadig's harassment included showing up at victims' homes or workplaces and following them. He attempted to get victims fired, took and posted pictures of them without their consent, and revealed private details about them online, including their full names and those locations.
ChatGPT told Dadig that he'd "find his wife" at a boutique gym or athletic community, the same type of place he often returned to when renewing his horrific cycle. The program even described a potential wife for him, down to specific physical attributes.
According to the indictment provided by the Department of Justice, Dadig threatened to break his victims' jaws, burn down the gyms where his victims worked out, and strangle them. Several times, Dadig was banned from gyms and businesses frequented by his victims. When those businesses reported him to police in one city, he moved on to another and continued his stalking.
Numerous times, Dadig lingered at businesses filming women. He then often made social media posts or podcast episodes categorizing the women, describing what he planned to do, and providing a detailed account of their resistance. Nearly all of the women are described in the indictment as fearing serious harm or even for their lives.
In social media posts, Dadig referred to himself as "God's assasin" [sic]. Not long before, the indictment states, ChatGPT had told him to keep his podcast going; it told him the podcast was "God's plan" for him, and that he should "stand out when most people water themselves down."
If convicted on all counts, Dadig faces a minimum sentence of 12 months for each charge involving a PFA (Protection From Abuse order) violation and a maximum total sentence of up to 70 years in prison, a fine of up to $3.5 million, or both. Under the federal Sentencing Guidelines, the actual sentence imposed would be based on the seriousness of the offenses and Dadig's prior criminal history.
In 2005, at 19, Sam Altman dropped out of Stanford University after two years of study. He later said in a New York Times interview that poker taught him more than his professors' lectures did; it was instructive on "how to notice patterns in people over time, how to make decisions with very imperfect information."
After leaving Stanford, Altman founded Loopt, an app designed to let users share their locations with one another. One of the first companies to receive funding from the start-up accelerator Y Combinator, Loopt eventually attracted more than $30 million in venture capital and was acquired by Green Dot, a banking company, in 2012.
Like OpenAI, Loopt drew privacy concerns. In 2008, when a user opened a feature called "Who's on Loopt," the program checked the contacts list on the user's phone. If the user clicked Send, every contact in the address book received a text message inviting them to join. Altman wrote on the company blog that "the mistake we made was automatically selecting them, so if the user then hit Send, they might inadvertently send more invites than they meant to. We immediately disabled that feature."
The text messages also failed to use the industry-standard "STOP" opt-out when sending via SMS. Altman wrote in his response to the controversy that both issues had been fixed, and that "we've made it even easier to let us know you never want another SMS from us—just send STOP to any Loopt-related message you get."
By the time Loopt was sold, Reuters described it as a "lemon." The acquisition, according to Reuters, was designed to conserve talent and "maybe even save a little face." Reuters reported that daily users had dropped to as low as 500; Altman disputed the figure, claiming Reuters' numbers were off by "orders of magnitude" and that he'd provide data to refute them. As far as Ten Four could tell, neither Altman nor Loopt ever provided that data.
According to Helen Toner, a former board member at OpenAI, Loopt's management twice asked the company's board of directors to remove Altman from his position "for what they called deceptive and chaotic behavior."
By 2011, Altman had left Loopt and begun working part-time as a partner at Y Combinator, the same accelerator that had supplied Loopt with startup capital. In 2014, he became president of Y Combinator.
While at Y Combinator, Altman was also running OpenAI's non-profit wing. According to several sources, when OpenAI launched its for-profit subsidiary, Altman was told he would have to choose between heading OpenAI or Y Combinator. Some have said Altman was fired, a claim denied by Y Combinator founder Paul Graham.
According to Graham, the board told Altman to choose. In a social media post discussing Altman, Graham replied to one question: "…we would have been happy if he stayed and got someone else to run OpenAI. Can't you read?"
Altman stepped down as president of Y Combinator in 2019. By then, Y Combinator had funded roughly 1,900 companies, among them DoorDash, Instacart, Reddit, Twitch and Airbnb.

Several founding members of OpenAI. (X.com)
In 2015, OpenAI was founded as a non-profit, with $1 billion in funding from sources including Altman, Elon Musk, Peter Thiel and Amazon Web Services. The company's mission was to build an AI that would eventually outperform humanity. Later, it added a capped-profit business as well; the for-profit division would be overseen by the non-profit's board of directors.
In its early years, OpenAI focused on publishing papers on neural networks and other fundamentals of current artificial intelligence models. It also contributed to robotics and reinforcement learning, developing models that performed increasingly complex tasks and played games at levels that outperformed humans.
Many of the non-profit principles from its founding still feature on OpenAI's website and in its current charter. "We are building safe and beneficial [Artificial General Intelligence], but will also consider our mission fulfilled if our work aids others to achieve this outcome," OpenAI says in its mission statement. The company also states: "We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be 'a better-than-even chance of success in the next two years.'"
According to multiple sources including a civil complaint brought by the family of Adam Raine, OpenAI began to split between what Altman called “competing tribes.” Safety advocates were consistently ignored, as Altman led a “full steam ahead” faction.
Multiple sources state that as OpenAI faced one of its first pieces of real competition, Google Gemini, tensions between Altman and safety advocates working at OpenAI “boiled over.”
OpenAI’s board fired Altman, stating he was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Board member Helen Toner later said Altman was “withholding information,” “misrepresenting things that were happening at the company,” and “in some cases outright lying to the board” about safety risks.
Several days later, the board that fired Altman reversed its decision. The pressure implied potential losses in the billions: Microsoft threatened to pull funding, according to several sources. Then employees, including one who had previously called for Altman's resignation to the board and had personally informed Altman of his termination, began to threaten to resign as well.
Altman returned as CEO after five days, and every board member who had fired him was forced out. Altman then "handpicked" a new board aligned with his vision.
Just days ago, Altman fired off a "code red" alert to employees working on ChatGPT, according to The Wall Street Journal. The internal memo, as described by The Washington Post, said that "more work was needed to enhance the artificial intelligence chatbot's speed, reliability and personalization features." The memo also stated that the company was beginning to fall behind Google Gemini, one of its chief rivals.
Another report in The Wall Street Journal says OpenAI is exploring acquiring a rocket company, or partnering with one, in a move that could compete with former OpenAI co-founder Elon Musk.
Today, the company has nearly $1 trillion in obligations to investors. It is valued at $500 billion and is not profitable. According to HSBC, OpenAI will be spending $620 billion per year renting data capacity for its models; only a third of that capacity is scheduled to come online before 2030.
Thanks for reading Part 2 of a multi-part series on ChatGPT and mental health. This installment is a bit of table-setting, and is part of an extremely large project that's headed up, edited, researched and everything else by me. That being said, Part 1 stands on the backs of many, many other journalists too numerous to name and owes a great deal to other reports. Stay tuned for Part 3. It will largely focus on Sam Altman's return to OpenAI, his other ventures, and several people who say that ChatGPT encouraged their loved ones' delusions that led to their deaths. Thanks – Sean Beckner-Carmitchel