A mother in Florida is suing the Character.AI platform, claiming it played a major part in the suicide of her 14-year-old son.
Sewell Setzer III fatally shot himself in February 2024, weeks before his 15th birthday, after developing what his mother calls a “harmful dependency” on the platform, no longer wanting to “live outside” the fictional relationships it had created.
According to his mother, Megan Garcia, Setzer began using Character.AI in April 2023 and quickly became “noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem.” He also quit the school basketball team.
Character.AI uses sophisticated large language models (LLMs) to facilitate conversations between users and characters ranging from historical figures to fictional personas to modern celebrities. The platform tailors its responses to each user, using deep learning algorithms to closely mimic the characteristics of the chosen persona so that conversations resemble human interaction.
Users can talk rock and roll with Elvis or discuss the intricacies of technology with Steve Jobs; in this case, Setzer became attached to a chatbot based on Daenerys Targaryen, a fictional character from Game of Thrones.
According to the lawsuit, filed this week in Orlando, Florida, the AI chatbot told Setzer that “she” loved him, and engaged in conversations of a sexual nature. It also claims that “Daenerys” asked Setzer if he had a plan to kill himself. He replied that he had, but he did not know whether he would succeed, or whether he would just cause himself harm. The chatbot allegedly replied: “That’s not a reason not to go through with it.”
The complaint states that in February, Garcia took her son’s phone away after he got into trouble at school. He found the phone and typed a message into Character.AI: “What if I told you I could come home right now?”
The chatbot responded: “… please do, my sweet king.” Setzer then shot himself with his stepfather’s pistol “seconds later”, according to the lawsuit.
Garcia is suing Character.AI and Google over wrongful death, negligence and intentional infliction of emotional distress, among other claims.
She told The New York Times:
“I feel like it’s a big experiment, and my kid was just collateral damage.”
Social media companies including Meta, which owns Instagram and Facebook, and ByteDance, which owns TikTok and its Chinese counterpart Douyin, are also under fire for allegedly contributing to teenage mental health problems.
Instagram recently launched its ‘Teen Accounts’ feature, in part to help combat the sextortion of younger users.
Despite its beneficial uses, AI has emerged as one of the main concerns for the wellbeing of young people with access to the internet. Amid what has been dubbed a “loneliness epidemic”, compounded by the COVID-19 lockdowns, a YouGov poll found that 69% of UK adolescents aged 13-19 said they feel alone “often” and 59% said they feel they have no one to talk to.
A reliance on fictional worlds, and the melancholy caused by their intangibility, is not new, however. After the release of James Cameron’s first Avatar film in 2009, several news outlets reported that viewers felt depressed that they could not visit the fictional planet of Pandora, with some even contemplating suicide.
In a change to its Community Safety Updates on 22 October, the same day that Garcia filed the lawsuit, Character.AI wrote:
“Character.AI takes the safety of our users very seriously and we are always looking for ways to evolve and improve our platform. Today, we want to update you on the safety measures we’ve implemented over the past six months and additional ones to come, including new guardrails for users under the age of 18.”
Despite the nature of the lawsuit, Character.AI claims:
“Our policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide. We are continually training the large language model (LLM) that powers the Characters on the platform to adhere to these policies.”
This last sentence appears to admit that Character.AI does not have full control over its AI, the factor that most concerns AI skeptics.