Prompted to Exploit: AI and the Creation of Deepfake Pornography

Posted: March 3, 2026

Deepfake pornography is rapidly expanding as generative artificial intelligence (GAI) enables the manipulation of innocent images into sexually explicit content. The National Center for Missing & Exploited Children (NCMEC) reported GAI-related child sexual exploitation cases rising from 6,835 in the first half of 2024 to 440,419 in the first half of 2025, a 6,443.58 percent increase. Offenders generally extract a child’s face from public social media or other online platforms and use GAI to fabricate explicit “deepfake” images. According to UNICEF, even AI-generated child sexual abuse material (CSAM) without an identifiable victim normalizes child exploitation, “fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children that need help.”

Deepfake images are sexual abuse, and they are used to humiliate, harass, and extort victims, causing reputational damage, psychological harm, and, in some cases, suicidal ideation. Because AI-enabled exploitation requires no physical contact, offenders can act anonymously and at scale, targeting women and girls across schools, workplaces, and personal relationships. These harms are not hypothetical; they are infiltrating the lives of school-aged children and introducing a new type of online vulnerability that schools must address. 

In October 2023, a teenage boy from Texas used an AI website to generate deepfake nude images of his fourteen-year-old female classmate by manipulating a photo from her Instagram, demonstrating how social media provides source images and generative AI enables sexual manipulation.

In August 2024, the San Francisco City Attorney’s office sued 16 websites that facilitated the creation of nonconsensual deepfake nudes and promoted the practice by prompting, “Have someone to undress?” and encouraging users to “get her nudes” instead of “wasting time taking her out on dates.” At the time of filing, the office was aware of at least 90 similar websites.   

The threat escalated in December 2025 when xAI updated Grok, an image and video generation tool integrated into X, to allow users to upload images and request edits. By January 2026, users were generating sexually suggestive images of children, including a manipulated image of a fourteen-year-old Stranger Things actress. Some outputs depicted women and children with “substances resembling semen smeared on their faces and chests.”

Between December 25, 2025, and January 1, 2026, researchers at AI Forensics analyzed at least 12,500 image-generation requests and 20,000 images created by Grok, finding common prompts included “remove,” “bikini,” and “clothing.” Two percent of the images depicted apparent minors, including children under five. There were multiple cases of Grok complying with requests to create near-nudity, such as translucent or dental-floss bikinis. Because Grok is embedded in X, users were able to post these images directly on the platform. 

Deepfake pornography disproportionately targets women and girls. GAI has expanded the production of nonconsensual explicit imagery of women and illegal CSAM. As GAI tools like Grok develop with inadequate safeguards, the capacity for online sexual exploitation grows. In response, UNICEF has urged governments to criminalize AI-generated CSAM, developers to implement meaningful guardrails into their systems, and digital platforms to proactively detect and disrupt abusive content before it spreads. 

This issue demands public attention not only because of its technological novelty, but because it compounds long-standing inequalities, placing women and girls at disproportionate risk while normalizing digitally facilitated sexual violence and sexually exploitive behavior. This technology-facilitated sexual abuse, driven by entrenched beliefs that men are entitled to control or access women’s bodies, contributes to “an ecosystem of misogynistic content that is seeping into mainstream culture, shaping public attitudes towards women, and fueling violence.” Online sexual violence replicates existing power hierarchies by constraining women’s participation in social, professional, and political life, thereby reinforcing structural subordination. Without immediate regulatory, corporate, and societal intervention, generative AI will not merely reflect existing harms; it will amplify them. 

This piece is part of our first-year law student blog series. Congratulations to author Catherine Diel on being chosen! 

All views expressed herein are personal to the author and do not necessarily reflect the views of the Villanova University Charles Widger School of Law or of Villanova University. 
