At that point, the 14-year-old put down his phone and shot himself with his stepfather’s handgun.
Ms Garcia, 40, claimed that her son was just “collateral damage” in a “big experiment” being conducted by Character AI, which has 20 million users.
“It’s like a nightmare. You want to get up and scream and say, ‘I miss my child. I want my baby’,” she added.
Noam Shazeer, one of the founders of Character AI, claimed last year that the platform would be “super, super helpful to a lot of people who are lonely or depressed”.
Jerry Ruoti, the company’s safety head, told The New York Times that it would add extra safety features for its young users but declined to say how many were under the age of 18.
“This is a tragic situation, and our hearts go out to the family,” he said in a statement.
“We take the safety of our users very seriously, and we’re constantly looking for ways to evolve our platform.”
Mr Ruoti added that Character AI’s rules prohibited “the promotion or depiction of self-harm and suicide”.
Ms Garcia filed a lawsuit this week against the company, which she believes is responsible for her son’s death.
‘Dangerous and untested’
A draft of the complaint seen by The New York Times said the technology is “dangerous and untested” as it can “trick customers into handing over their most private thoughts and feelings”.
She said the company failed to provide “ordinary” or “reasonable” care to Setzer or other minors.
Character AI is not the only platform people can use to develop relationships with fictional characters.
Some allow, or even encourage, unfiltered sexual chats, prompting users to chat with the “AI girl of your dreams”, while others have stricter safety features.
On Character AI, users can create chatbots to mimic their favourite celebrities or entertainment characters.
The growing prevalence of AI in bespoke apps and on social media sites such as Instagram and Snapchat is quickly becoming a major concern for parents across the US.
Earlier this year, 12,000 parents signed a petition urging TikTok to clearly label AI-generated influencers who could pass as real people to their children.
TikTok requires all creators to label realistic AI content. However, ParentsTogether, an organisation focused on issues that affect children, argued that the labelling was not applied consistently.
Shelby Knox, the campaign director of ParentsTogether, said children were watching videos of fake influencers promoting unrealistic beauty standards.
Last month, a report published by Common Sense Media found that while seven in 10 teenagers in the US have used generative AI tools, only 37 per cent of parents were aware that they were doing so.