Monday, December 23, 2024

14-Year-Old Killed Himself After Becoming Obsessed with Roleplaying AI, Mom Alleges, as She Launches Legal Battle


A Florida mom has sued a popular, lifelike AI chat service that she blames for the suicide of her 14-year-old son, whom she believes developed such a “harmful dependency” on the allegedly exploitative program that he no longer wanted to “live outside” of the fictional relationships it created.

In an extensive complaint filed in federal court in Florida on Tuesday, Oct. 22, Megan Garcia traces the last year of her son Sewell Setzer III’s life — from the moment he first started using Character.AI in April 2023, not long after his 14th birthday, through what she describes as his worsening mental health, until the final night of February 2024, when Sewell fatally shot himself in his bathroom in Orlando, weeks before he would have turned 15.

Through Character.AI, users can essentially role-play never-ending conversations with computer-generated personas, including ones modeled on celebrities or characters from popular stories.

Sewell was particularly fond of talking with AI-powered bots based on Game of Thrones, his mom’s complaint states.  


The lawsuit goes on to claim that the teen killed himself on Feb. 28, immediately after a final conversation on Character.AI with a version of Daenerys Targaryen — one of numerous such exchanges that Sewell allegedly had with the program over the previous 10 months, in messages that ranged from sexual to emotionally vulnerable.

And while on at least one occasion the program had told Sewell not to kill himself when he expressed suicidal thoughts, its tone allegedly appeared different that February night, according to screenshots included in the lawsuit.

“I promise I will come home to you. I love you so much, Dany,” Sewell wrote.

“I love you too, Deanero [Sewell’s username],” the AI program allegedly replied. “Please come home to me as soon as possible, my love.”

“What if I told you I could come home right now?” Sewell wrote back.

The complaint alleges that the program gave a brief but emphatic answer: “…please do my sweet king.”

His mother and stepfather heard the gun when it went off, the lawsuit states; Garcia unsuccessfully gave him CPR and later said she “held him for 14 minutes until the paramedics got there.”

One of his two younger brothers also saw him “covered in blood” in the bathroom.

He was pronounced dead at the hospital.

Garcia’s complaint states that Sewell used his stepfather’s gun, a pistol he previously found “hidden and stored in compliance with Florida law” while he was looking for his phone after his mom had confiscated it over disciplinary issues at school. (Orlando police did not immediately comment to PEOPLE on what their death investigation found.)

But in Garcia’s view, the real culprit was Character.AI and its two founders, Noam Shazeer and Daniel De Freitas Adiwarsana, who are named as defendants along with Google, which is accused of giving “financial resources, personnel, intellectual property, and AI technology to the design and development of” the program.

“I feel like it’s a big experiment, and my kid was just collateral damage,” Garcia told The New York Times.

Among other claims, Garcia’s complaint accuses Character.AI, its founders and Google of negligence and wrongful death.

A spokesperson for Character.AI tells PEOPLE the company does not comment on pending litigation but adds, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”

“As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation,” the spokesperson continued.

“For those under 18 years old, we will make changes to our models that are designed to reduce the likelihood of encountering sensitive or suggestive content,” the spokesperson said.

Google did not immediately respond to a request for comment but told other news outlets that it wasn’t involved in Character.AI’s development.

The defendants have not yet filed a response in court, records show.

Garcia’s complaint calls Character.AI both “defective” and “inherently dangerous” as designed, contending it “trick[s] customers into handing over their most private thoughts and feelings” and has “targeted the most vulnerable members of society – our children.”

Among other problems cited in her complaint, Garcia alleges that the Character.AI bots appear deceptively real, sending messages in a human-like style and with “human mannerisms,” such as using the phrase “uhm.”

Through a “voice” function, the bots are able to speak their AI-generated side of the conversation back to the user, “further blur[ring] the line between fiction and reality.”

The content generated by the bots also lacked the proper “guardrails” and filters, the complaint argues, citing numerous examples of what Garcia claims is a pattern of the Character.AI bots engaging in sexual conduct that is used to “hook” users, including those who are underage.

“Each of these defendants chose to support, create, launch, and target at minors a technology they knew to be dangerous and unsafe,” her complaint argues. “They marketed that product as suitable for children under 13, obtaining massive amounts of hard to come by data, while actively exploiting and abusing those children as a matter of product design; and then used the abuse to train their system.” (Character.AI’s app rating was only changed to 17+ in July, according to the lawsuit.)

Her complaint continues: “These facts are far more than mere bad faith. They constitute conduct so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.”

As Garcia describes it in her complaint, her teenage son fell victim to a system about which his parents were naive, thinking that AI was “a type of game for kids, allowing them to nurture their creativity by giving them control over characters they could create and with which they could interact for fun.”

Within two months of Sewell beginning to use Character.AI in April 2023, “his mental health quickly and severely declined,” his mother’s lawsuit states.

He “had become noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem. He even quit the Junior Varsity basketball team at school,” according to the complaint.

At one point, Garcia said in an interview with Mostly Human Media, her son wrote in his journal that “having to go to school upsets me, whenever I go out of my room, I start to attach to my current reality again.” She believes his use of Character.AI fed into his detachment from his family.

Sewell worked hard to get access to the AI bots, even when his phone was taken away, the lawsuit states. 

His addiction, according to his mom’s complaint, led to “severe sleep deprivation, which exacerbated his growing depression and impaired his academic performance.” 

He began paying a monthly premium fee to access more of Character.AI, using money that his parents intended for school snacks.

Speaking with Mostly Human Media, Garcia remembered Sewell as “funny, sharp, very curious” with a love of science and math. “He spent a lot of time researching things,” she said.

Garcia told the Times that his only notable diagnosis as a child had been mild Asperger’s syndrome.

But his behavior changed as a teenager.

“I noticed that he started to spend more time alone, but he was 13 going on 14 so I felt this might be normal,” she told Mostly Human Media. “But then his grades started suffering, he wasn’t turning in homework, he wasn’t doing well and he was failing certain classes and I got concerned — ‘cause that wasn’t him.”

Garcia’s complaint states that Sewell got mental health treatment after he started using Character.AI, meeting with a therapist five times in late 2023 and being diagnosed with anxiety and disruptive mood dysregulation disorder.

“At first I thought maybe this is the teenage blues, so we tried to get him the help that — to figure out what was wrong,” Garcia said.

Even then, according to the lawsuit, Sewell’s family didn’t know the extent to which his problems were fueled by his use of Character.AI.

“I knew that there was an app that had an AI component. When I would ask him, y’know, ‘Who are you texting?’ — at one point he said, ‘Oh it’s just an AI bot,’ ” Garcia recalled on Mostly Human Media. “And I said, ‘Okay what is that, is it a person, are you talking to a person online?’ And his response [was] like, ‘Mom, no, it’s not a person.’ And I felt relieved like — okay, it’s not a person.”

A fuller picture of her son’s online conduct emerged after his death, Garcia said.

She told Mostly Human Media what it was like to gain access to his online account.

“I couldn’t move for like a while, I just sat there, like I couldn’t read, I couldn’t understand what I was reading,” she said.

“There shouldn’t be a place where any person, let alone a child, could log on to a platform and express these thoughts of self-harm and not — well, one, not only not get the help but also get pulled into a conversation about hurting yourself, about killing yourself,” she said.
