Washington:
Her voice tinged with anger, an American mother worried about what the future holds for her teenage daughter, just one of dozens of girls targeted in yet another AI-enabled pornography scandal that has rocked a US school.
The controversy that engulfed the Lancaster Country Day School in Pennsylvania last year highlights a new normal for pupils and educators struggling to keep up with a boom in cheap, easily available artificial intelligence tools that have facilitated hyperrealistic deepfakes.
One parent, who spoke to AFP on the condition of anonymity, said her 14-year-old daughter came to her “hysterically crying” last summer after finding AI-generated nude pictures of her circulating among her peers.
“What are the ramifications to her long term?” the mother said, voicing fears that the manipulated images could resurface when her daughter applies to college, starts dating, or enters the job market.
“You can’t tell that they are fake.”
Multiple charges — including sexual abuse of children and possession of child pornography — were filed last month against two teenage boys who authorities allege created the images.
Investigators uncovered 347 images and videos affecting a total of 60 victims, most of them female students at the private school, on the messaging app Discord.
All but one were younger than 18.
‘Troubling’
The scandal is the latest in a wave of similar incidents in schools across US states — from California to New Jersey — leading to a warning from the FBI last year that such child sexual abuse material, including realistic AI-generated images, was illegal.
“The rise of generative AI has collided with a long-standing problem in schools: the act of sharing non-consensual intimate imagery,” said Alexandra Reeve Givens, chief executive of the nonprofit Center for Democracy & Technology (CDT).
“In the digital age, kids desperately need support to navigate tech-enabled harassment.”
A CDT survey of public schools last September found that 15 percent of students and 11 percent of teachers knew of at least one “deepfake that depicts an individual associated with their school in a sexually explicit or intimate manner.”
Such non-consensual imagery can lead to harassment, bullying or blackmail, sometimes causing devastating mental health consequences.
The mother who spoke to AFP said she knows of victims who had avoided school, had trouble eating or required medical attention and counseling to cope with the ordeal.
She said she and other parents brought into a detective’s office to scrutinize the deepfakes were shocked to find printed-out images stacked a “foot and a half” high.
“I had to see pictures of my daughter,” she said.
“If someone looked, they would think it’s real, so that’s even more damaging.”
‘Exploitation’
The alleged perpetrators, whose names have not been released, are accused of lifting pictures from social media, altering them using an AI application and sharing them on Discord.
The mother told AFP the fakes of her daughter were primarily altered from public photos on the school’s Instagram page as well as a screenshot of a FaceTime call.
A simple online search turns up dozens of apps and websites that allow users to create “deepnudes” by digitally removing clothing from photos, or to superimpose selected faces onto pornographic images.
“Although results may not be as realistic or compelling as a professional rendition, these services mean that no technical skills are needed to produce deepfake content,” Roberta Duffield, director of intelligence at Blackbird.AI, told AFP.
Only a handful of US states have passed laws to deal with sexually explicit deepfakes, including Pennsylvania at the end of last year.
The top leadership at the Pennsylvania school stepped aside after parents of the victims filed a lawsuit accusing the administration of failing to report the activity when they were first alerted to it in late 2023.
Researchers say schools are ill-equipped to tackle the threat of AI technology evolving at a rapid pace, in part because the law is still playing catchup.
“Underage girls are increasingly subject to deepfake exploitation from their friends, colleagues, school classmates,” said Duffield.
“Education authorities must urgently develop clear, comprehensive policies regarding the use of AI and digital technologies.”