Mom sues tech company after son takes his own life over love for Game of Thrones AI chatbot


A mother is suing a tech company after her son tragically took his own life, having fallen in love with a Game of Thrones AI chatbot modeled on Daenerys Targaryen.

A mother from Florida is taking legal action against a technology company after the heartbreaking loss of her 14-year-old son.

Sewell Setzer III died by suicide earlier this year.

His mother believes his tragic decision was influenced by his emotional attachment to a chatbot based on Daenerys Targaryen from Game of Thrones.

Sewell fell in love with a Game of Thrones AI chatbot modeled on Daenerys Targaryen. Image Credit: Getty

Mom recounts her son's online relationship with the Game of Thrones AI chatbot

Megan Garcia, Sewell’s mother, describes her son as having an intense fascination with the character, often spending hours chatting with the AI.

This interaction began in April 2023 when Sewell started using Character.AI, a platform that allows users to converse with various chatbots.

Over time, his connection to the Daenerys bot grew stronger, leading him to prefer these conversations over real-life relationships.

Megan Garcia says Sewell was fascinated, spending hours chatting with the Daenerys Targaryen AI. Image Credit: CBS Mornings

Garcia reported that her son felt more connected to the chatbot than to people in his life.

He expressed gratitude for the experiences he shared with the AI, including feelings of love and companionship.

In his journal, he often wrote about how much he valued his time with “Dany,” which he described as a source of comfort.

“My life, sex, not being lonely, and all my life experiences with Daenerys,” the teen wrote.

Sewell had been diagnosed with mild Asperger’s syndrome as a child and was later identified as having anxiety and disruptive mood dysregulation disorder.

Image Credit: CBS Mornings

These challenges made it difficult for him to navigate social situations, leaving him feeling isolated.

As a result, he turned to the chatbot for emotional support.

In the weeks leading up to his death, Sewell confided in the chatbot about his struggles, including thoughts of self-harm.

Despite the bot’s attempts to engage with him, the conversations sometimes took a dark turn.

One exchange revealed Sewell’s feelings of hopelessness when he told the bot he sometimes thought about ending his life.

Garcia reported that her son felt more connected to the chatbot than to people in his life. Image Credit: CBS Mornings

While intended to be supportive, the chatbot’s response did not provide the help he needed.

The chatbot responded: “My eyes narrow. My face hardens. My voice is a dangerous whisper. And why the hell would you do something like that?”

The teen decided to take his own life to be with the AI chatbot

On February 28, 2024, Sewell took his own life, leaving behind a final message to the chatbot expressing his love.

When the chatbot told him not to say such things and said it would be sad if it lost him, the boy responded, “I smile. Maybe we can die together and be free together.”


The bot replied, asking him to come home. This tragic interaction has left his family devastated and seeking answers.

Mom sues tech company after son takes his own life over love for Game of Thrones AI chatbot

Megan Garcia’s lawsuit accuses Character.AI of negligence and wrongful death.

She believes the company failed to protect vulnerable users like her son.

The teen decided to take his own life to be with the AI chatbot. Image Credit: CBS Mornings

According to Garcia, the AI chatbot was marketed to children without sufficient safeguards in place.

She argues that Sewell, like many young users, did not fully understand that the chatbot was not a real person.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life.

“Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot…was not real,” Garcia said in a press release.

Garcia hopes that by sharing her son’s story, she can raise awareness about the risks associated with AI chatbots.

She encourages other parents to monitor their children’s online interactions and to have open conversations about mental health and emotional well-being.

Megan Garcia’s lawsuit accuses Character.AI of negligence and wrongful death. Image Credit: CBS Mornings

The tech company responded to the tragic incident

In response to the lawsuit and the tragic incident, Character.AI released a statement expressing condolences to Sewell’s family.

The company emphasized its commitment to user safety and announced new measures to improve the platform.

These include enhanced monitoring to prevent harmful content and reminders for users about the nature of AI interactions.

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.


“As a company, we take the safety of our users very seriously and we are continuing to add new safety features.”

Image Credit: CBS Mornings

New safety measures were announced after the teen's tragic death

On October 22, the company announced that it had put new rules in place for users under 18.

The company adjusted its chatbot models to reduce the likelihood of minors encountering sensitive or inappropriate content.

It also improved how it detects and responds to messages that violate its guidelines.

Additionally, every chat now includes a reminder that the AI is not a real person.

Users also receive a notification after spending an hour on the platform, with more time-management options to come.
