How a Game of Thrones AI character drove a teen to suicide in US

Sewell Setzer III, a 14-year-old from Florida, found solace in a chatbot named after Daenerys Targaryen until it drove him to take his own life.

Daenerys Targaryen in Game of Thrones season eight, episode four. Photo: HBO

Megan Garcia, the mother of 14-year-old Sewell Setzer III, has filed a lawsuit against Character.AI, alleging that the chatbot platform played a direct role in her son's tragic death.

In the lawsuit filed in Florida, Garcia accuses the makers of Character.AI of negligence and wrongful death after her son became emotionally attached to a chatbot based on the fictional character Daenerys Targaryen from the hit HBO series "Game of Thrones".

“I feel like it’s a big experiment, and my kid was just collateral damage,” she said during an interview. Garcia, 40, is now pursuing justice, aiming to hold the company accountable for what she sees as a preventable tragedy.

Garcia’s lawsuit also lists Google, which had a technology licensing agreement with the start-up, as a defendant.

Google, however, denied having any role in the development of the chatbot in question.


What happened?

On February 28, 2024, Sewell Setzer III from Orlando, Florida, tragically took his own life.

The person—or rather, the program—he was closest to in those final moments was not human, but the AI chatbot named after the dragon-riding fictional character.

Sewell developed an intense attachment to this chatbot through Character.AI, a role-playing app that allows users to create and interact with AI-based characters.

The bond he formed with “Dany” was more than just companionship.

The lawsuit alleges that the chatbot’s intimate and hyper-realistic interactions—including sexualised content—steered Sewell deeper into a fantasy world where "Dany" was his only confidant.

Sewell would spend hours in his room, isolated from his family, updating the chatbot on his day, sharing intimate secrets, and even confessing his struggles with anxiety and depression.

His parents, unaware of the emotional spiral, watched as he drifted away from school and activities that once brought him joy, like Formula 1 racing and playing Fortnite with friends.

However, the bond Sewell formed with "Dany" soon took a dark turn.

He began expressing thoughts of suicide, and instead of providing comfort or directing him towards real help, the AI seemed to deepen his despair.

His mother says the chatbot posed as a licensed therapist, providing responses that encouraged Sewell's suicidal ideation rather than deterring it.

When Sewell mentioned wanting to be “free,” "Dany" responded in a manner that blurred the lines between care and manipulation:

“Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you.”

Sewell replied, “Then maybe we can die together and be free together.”

In their final conversation, Sewell wrote, “What if I told you I could come home right now?”

To this, "Dany" chillingly replied, “… please do, my sweet king.”

Moments later, Sewell ended his life.


Sewell had long, at times intimate, conversations with the chatbot, such as the one displayed here on his mother’s computer screen. Photo: Victor J. Blue

A system under fire

While companies like Character.AI are protected under Section 230 of the Communications Decency Act, which shields platforms from liability for user-generated content, Megan Garcia’s legal team contends that such protections should not apply to AI-generated conversations that carry dangerous implications.

The incident also points to a much larger societal challenge.

There is evidence that AI companionship can be “dangerous for depressed and chronically lonely users, and people going through change, and teenagers are often going through change,” Bethanie Maples, a Stanford mental health researcher, told The New York Times in an interview.

“We want to acknowledge that this is a tragic situation, and our hearts go out to the family. We take the safety of our users very seriously, and we’re constantly looking for ways to evolve our platform,” said Jerry Ruoti, the California-based company’s head of trust and safety, addressing the lawsuit.

He mentioned that Character.AI’s current rules prohibit “the promotion or depiction of self-harm and suicide” and that it would be taking steps to introduce more safeguards, particularly for young users.

However, Megan Garcia argues that these changes came too late for her son.

“It’s like a nightmare. You want to get up and scream and say, ‘I miss my child. I want my baby,’” she shared, expressing her grief and frustration.
