Part 2 of our AI-generated porn awareness series: (updated August 2025)
“In many ways, we have failed to prepare ourselves and our children for the advent of social media… But we have the chance to learn from history and set up future generations for success when it comes to AI.” – Dr. Caroline Leaf, Time.com
Unfortunately, the most recent news and statistics show we have already failed to protect our teens from deepfake, AI-generated porn.
A 2024 study released by InternetMatters.org states: “Around half a million (529,632) teenagers in the UK, or 4 teenagers in a class of 30, have had an experience with a nude deepfake.”
AI-generated porn, like all exposure to porn, is harming teens, and Big Tech is partially responsible.
Johan Steyn, a noted expert on AI, aptly says, “deepfake pornography is the new face of school bullying.”
Parents, educators, and youth workers must step up and take ownership, too.
Being aware of the problem is a critical first step.
In the news: AI tools turn teens into perpetrators
Distressing news stories are starting to emerge as curious young minds experiment with AI technology.
Allowing AI tools to fall into the hands of young teens is like handing them a loaded gun or power tool without any safety settings, training, or accountability.
For example, in small-town Almendralejo, Spain, more than 20 girls aged 11-17 experienced image-based sexual abuse when AI-generated nudes of them were created and distributed online. The perpetrators were boys aged 12-14.
“The case has caused concern even for local people who are not involved.
‘Those of us who have kids are very worried,’ says Gema Lorenzo, a local woman who has a son, aged 16, and a daughter, aged 12.
‘You’re worried about two things: if you have a son, you worry he might have done something like this; and if you have a daughter, you’re even more worried because it’s an act of violence.’” – BBC.com
Sadly, this isn’t an isolated incident. Young teens all around the globe are creating generative AI deepfake porn.
In another recent incident, one brave high schooler from Westfield High in New Jersey is taking action after AI-generated porn images of her and more than a dozen other teens were distributed online to her peers.
In yet another case, in Demopolis, Alabama:
“Tiffany Cannon, Elizabeth Smith, Holston Drinkard, and Heidi Nettles said they all learned on Dec. 4 that two of their daughters’ male classmates created and shared explicit photos of their daughters. …
“They’re scared, they’re angry, they’re embarrassed. They really feel like why did this happen to them,” said Smith.
The group of mothers said there is an active investigation with Demopolis Police.” – WBRC.com
How Big Tech perpetuates AI porn and endangers teens
Clearly AI tools are tempting teens to exploit others. Big Tech search engines like Google and machine learning projects are also exploiting teens by allowing teen-themed AI porn search terms and images. Such search terms should be banned because they are a form of child sexual exploitation.
A global child sexual abuse image crisis fueled by generative AI – deepfake images of children
While some call deepfake images of children and teens “porn,” creating them is actually a heinous crime, now actively committed by criminals using AI.
A 2023 news report reveals the grim facts: “The Internet Watch Foundation, which removes online child sexual abuse imagery, assessed a little over 11,000 AI-generated child sexual abuse material (AI CSAM) or images that were posted to a dark web forum in a one-month period. They found 2,978 of these images to be criminal in nature.” – interestingengineering.com
According to another recent article in The Washington Post, “Machine learning models can also spit out images depicting child abuse or rape and, because no one was harmed in the making, such content wouldn’t violate any laws, Citron said.
But the availability of those images may fuel real-life victimization.”
Across the aisle, Fox News agrees, stating, “…we need to update the federal legal definition of child sexual abuse material to include AI-generated depictions.
As the law currently stands, prosecutors must show harm to an actual child. But this requirement is out of step with today’s technology.”
Finally, Forbes writer Elijah Clark warns, “The grim reality is clear: if steps aren’t taken to contain this exploding crisis, we risk allowing artificial intelligence to become a tool for the mass exploitation of the world’s most vulnerable citizens — our children.”
The Take It Down Act: new legal protections against AI-generated deepfake porn
The 2025 Take It Down Act “requires websites and social media companies to remove deepfake content within 48 hours of notice from a victim. Although the law makes it illegal to ‘knowingly publish’ or threaten to publish intimate images without a person’s consent, including AI-created deepfakes, it does not target the tools used to create such AI-generated content.”
Legal loophole warning for parents: the “Take It Down Act” doesn’t cover nudifying apps.
According to the National Center on Sexual Exploitation, “A common way AI-generated IBSA (image-based sexual abuse) is created is through ‘nudifying apps,’ which allow a user to take innocuous images of women and ‘strip’ them of clothing.”
Your teens have (likely) seen the ads.
Inside Meta’s Facebook and Instagram, as well as X and Telegram, thousands of ads for nudifying apps have been running.
“Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell University’s tech research center, has been studying the surge in AI deepfake networks marketing on social platforms for more than a year.
He told CBS News in a phone interview on Tuesday that he’d seen thousands more of these ads across Meta platforms, as well as on platforms such as X and Telegram, during that period,” reports a recent CBS news investigation.
While Meta was forced to issue a press release saying it will work harder to identify and take these ads down, Google and Apple have declined to respond to CBS News.
Refusal to deal with nudifying apps in a responsible manner is a serious matter!
The City Attorney for San Francisco agrees.
The City of San Francisco is suing the top 16 AI nudify apps, according to City Attorney David Chiu: “While some of the websites allow users to only upload images of adults, Chiu said other sites allow users to create nonconsensual pornographic images of children.”
Prepare your teens – talk about nudifying apps.
Your teens have probably already heard the stories, but opening the conversation now could save their mental health.
For example, 15-year-old Elliston Berry faced nine months of humiliation at school because, “A classmate took a picture from Elliston’s Instagram, ran it through an artificial intelligence program that appeared to remove her dress and then sent around the digitally altered image on Snapchat…”
Parents need to be aware that an estimated 99% of deepfake images are of girls and women.
Elliston survived her crisis, partly because she found a listening, understanding ear and support from her parents.
As Christian Stanley notes, empowered parents and open conversations are key to online safety.
5 tips for talking to teens (and younger) about the dangers of AI and deepfake porn
- AI is a powerful tool that can be used as a deadly weapon.
- Your mental health can be affected by AI deepfake images.
- Images you put on the internet can live forever, harm others, and can result in criminal sex-offender charges depending on your state’s laws. Be clear: pornographic content produced by minors of minor classmates using AI nudifying apps is criminal.
- Using AI properly as a tool requires proper training and adult supervision.
- Make a plan together for healthy use of technology – including a “mistake escape” plan.
Tips for having a “mistakes happen” conversation and promise
News stories now appear almost weekly of teens dying by suicide, the ultimate act of despair, after an online nude leak, a nude AI image, or a social media mistake.
- Talk to your teen or young adult about AI deepfakes and sextortion as soon as possible.
- Give them actionable steps ahead of time, including telling a safe person right away. Author Greta Eskridge says your young person should help you create this list of safe people other than yourself.
- Validate your teen’s strong emotions if they’ve already slipped up. (Check out the Mayo Clinic’s helpful pointers if your teen confides suicidal ideations.)
- Create safety and understanding by listening well. We’ve all made big mistakes.
Key tip: Instead of shaming your teen, promise them that you will love them and help them “no matter what – even if they make an online mistake!”
Share true stories like that of Joshua Broome, the famous former porn star turned believer. A beautiful life is possible even after online nudes.
Additionally, Dr. Caroline Leaf offers a simple plan for talking to your middle school and younger children about safely and wisely using AI technology.
Many teens today were exposed to porn at a young age.
Defend Young Minds offers a very helpful, hopeful key phrase. You can tell your child if they’ve encountered porn: “Just because you saw a bad picture doesn’t make you a BAD person.”
This basic truth bears repeating: the porn industry is the enemy, not your child.
Help for victims of deepfake and AI-generated porn
For the victims of deepfake and AI-generated porn, we want you to know you matter. The injustice you’ve experienced isn’t fake! Be very kind to yourself or your teen who’s been harmed by AI porn.
Understandably, dealing with image-based sexual abuse is both challenging and traumatizing.
However, even if you feel powerless, there are a few immediate action steps you can take:
- The National Center on Sexual Exploitation has a legal team ready to help victims of online sexual abuse. (Please save any evidence immediately.)
- Another resource is Cyber Civil Rights online safety center’s list of attorneys.
- Fight the New Drug has compiled this very helpful list of ways to fight back, practice self-care, and heal.
- Ask Google to take down images and hold their feet to the fire.
- Search for a trauma-based therapist for long-term recovery help.
Please care for yourself kindly. Your life matters.
If you’re reading this and it’s a rough day, you can talk to someone for FREE. Text HOME to 741741.
How to use AI for prevention and protection from porn
What happens if your young teen son gets curious about a term or video someone used at school that day?
He’s probably going to Google it, right?
Right now, given all the news about deepfake porn, it wouldn’t be surprising if AI porn popped up in his search results.
So, what if you had a tool that didn’t just notify you of your son’s search, but actually quickly blocked any pornographic images that might pop up?
One of our latest tools from Ever Accountable, an app called Bulldog Blocker, uses the power of AI for good. It instantly blocks these images on Android devices, even inside social media apps.
If your teen is struggling with porn use, or you want to prevent bad habits from forming, this option offers an immediate break from porn viewing. You can learn more here:
At Ever Accountable, our passion is to empower people everywhere to quit all kinds of porn through the power of accountability.
We also care deeply about families whose teens are being exposed to porn daily.
AI-generated deepfake porn (and the entire porn industry) matters, because it’s actively harming our future generations all around the world.
14-Day Free Trial
Protection From Pornography
Change your habits, change your life: Start our 14-day free trial to help get rid of pornography for good.


Works Cited
Clark, Elijah. “Pedophiles Using AI To Generate Child Sexual Abuse Imagery.” Forbes, 31 October 2023, https://www.forbes.com/sites/elijahclark/2023/10/31/pedophiles-using-ai-to-generate-child-sexual-abuse-imagery/?sh=63f111391656. Accessed 2 February 2024.
Harris, Skylar, and Artemis Moshtaghian. “High schooler calls for AI regulations after manipulated pornographic images of her and others shared online.” KSL NewsRadio, 6 November 2023, https://kslnewsradio.com/2056412/high-schooler-calls-for-ai-regulations-after-manipulated-pornographic-images-of-her-and-others-shared-online/. Accessed 26 January 2024.
Hedgecoe, Guy. “AI-generated naked child images shock Spanish town of Almendralejo.” BBC, 23 September 2023, https://www.bbc.com/news/world-europe-66877718. Accessed 26 January 2024.
Huizar, Teresa. “Congress must stop a new AI tool used to exploit children.” Fox News, 3 January 2024, https://www.foxnews.com/opinion/congress-must-stop-new-ai-tool-exploit-children. Accessed 2 February 2024.
Hunter, Tatum. “AI porn raises flags over deepfakes, consent and harassment of women.” The Washington Post, 13 February 2023, https://www.washingtonpost.com/technology/2023/02/13/ai-porn-deepfakes-women-consent/. Accessed 26 January 2024.
Milmo, Dan. “AI-created child sexual abuse images ‘threaten to overwhelm internet.’” The Guardian, 25 October 2023, https://www.theguardian.com/technology/2023/oct/25/ai-created-child-sexual-abuse-images-threaten-overwhelm-internet?ref=biztoc.com. Accessed 26 January 2024.
“Revenge Porn and Deep Fake Technology: The Latest Iteration of Online Abuse.” Dome, Boston University, 10 August 2023, https://sites.bu.edu/dome/2023/08/10/revenge-porn-and-deep-fake-technology-the-latest-iteration-of-online-abuse/. Accessed 26 January 2024.
Sharma, Sejal. “AI is being used to generate child sexual abuse images, warns IWF.” Interesting Engineering, 25 October 2023, https://interestingengineering.com/culture/ai-used-to-generate-child-abuse-images. Accessed 26 January 2024.