
Teen’s suicide after AI sextortion highlights alarming rise in deepfake abuse
- Web Desk
- 5 Hours ago

WASHINGTON: The suicide of a 16-year-old Kentucky teenager has drawn urgent attention to the growing threat of AI-powered sextortion scams targeting minors. After Elijah Heacock took his own life earlier this year, his parents discovered he had received threatening messages demanding $3,000 in exchange for not sharing an apparently AI-generated nude image of him.
The tragic case underscores how the spread of so-called “nudify” apps, artificial intelligence tools that generate fake sexual images by digitally removing clothing, is fuelling a rise in online child exploitation. Such apps, once used mainly to target celebrities, are now increasingly being deployed against children.
Elijah’s parents said the threatening texts warned the image would be sent to his family and friends if he failed to pay. His father, John Burnett, told CBS News, “The people that are after our children are well organised, well funded and relentless. They don’t need real photos anymore, they just create them and use them to blackmail the child.”
The FBI has reported a sharp increase in sextortion cases involving American teens, especially boys aged 14 to 17. The agency has warned that these scams have already led to an alarming number of suicides across the United States.
A recent study by Thorn, a non-profit working to prevent online child abuse, revealed that 6 per cent of American teenagers have been directly targeted with deepfake nudes. The UK-based Internet Watch Foundation (IWF) also reported that financial sextortion using AI-generated sexual imagery is becoming more widespread. It said perpetrators now use convincing fake images to coerce victims, sometimes with effects just as damaging as if the images were real.
The IWF also identified a disturbing online “guide” circulated among predators that promoted using AI nudifying tools to blackmail minors. Its author claimed to have successfully extorted girls as young as 13.
Research by US outlet Indicator found that the nudify industry is turning into a multimillion-dollar enterprise. An analysis of 85 such websites estimated their combined annual revenue could reach $36 million, with several of the top platforms making millions in just six months. Many of these sites reportedly use infrastructure provided by major tech firms like Google, Amazon and Cloudflare, helping them stay online despite ongoing crackdowns.
The abuse is spreading globally. In Spain, a Save the Children survey found one in five young people had been victimised by deepfake nudes, and Spanish prosecutors are currently investigating a case involving minors accused of using AI-generated pornography to target classmates and teachers. In the UK, it is now a criminal offence to create sexually explicit deepfakes, with violators facing up to two years in prison.
In the US, the newly signed “Take It Down Act” aims to protect victims by making the sharing of non-consensual intimate images a criminal offence and requiring platforms to remove such content. Tech giant Meta has also filed a lawsuit against a Hong Kong firm behind a nudify app accused of repeatedly violating its advertising policies.
Still, experts say efforts to rein in these tools are failing to keep pace with their spread. “To date, the fight against AI nudifiers has been a game of whack-a-mole,” Indicator noted, calling the developers and platforms behind these tools “persistent and malicious adversaries.”
