
Student Blog Series: Deepfake Pornography: How AI Has Added to the Power Imbalances of the Porn Industry

Posted: March 26, 2024

As AI becomes more advanced, its potential to cause harm grows, as demonstrated by the recent rise in deepfake pornography. Deepfakes are images, videos, or audio recordings that have been altered to make it appear as if a specific person is being depicted or recorded. While these alterations can be made purely for entertainment, such as audio manipulations that make famous singers appear to cover popular songs or face swaps that place one actor onto another famous movie character, they can also quickly turn exploitative.

Increasingly, people are using AI to create sexually explicit images of individuals without their consent, fueling an emerging deepfake pornography industry. As of 2019, deepfake pornography accounted for 96% of all deepfake videos online, and websites advertising such videos had received over 134 million views. The number of deepfake videos has continued to grow since then, increasing by 550% as of 2023. One emerging trend in particular is the popularization of celebrity deepfake pornography.

Recently, the dangers of deepfakes garnered global attention after an X user posted AI-generated sexually explicit images of Taylor Swift to the platform. Because Swift is one of the wealthiest and most recognizable women in the world, the images quickly drew attention. Yet despite the power and influence that come with being one of the world’s top celebrities, the images remained on the platform for over seventeen hours and gained hundreds of thousands of likes and reposts.

While the attack against Taylor Swift has been widely reported in the news, similar incidents have involved countless other celebrities: 94% of those depicted in deepfake pornography work in the entertainment industry. The platforms that create and host deepfake porn often operate on request, charging money for “long-duration premium fake content” featuring celebrities and then monetizing those videos to earn revenue. Movie stars, musicians, YouTubers, and TikTok stars have all been subjected to deepfake pornography. One YouTuber, Gibi ASMR, who has over three million subscribers, stated that she has “given up trying to keep tabs on the deepfakes of her.” To address this problem, one company offered to remove the videos for about $640 per video, a cost that was steep even for the YouTube star, let alone the average person.

Though the majority of deepfake content targets celebrities, many platforms now accept paid requests to make pornographic content of “personal girls” (people with under two million Instagram followers). One growing type of exploitative content is “stripped” images, which take a photo of a clothed individual and use AI to generate a realistic version of their naked body. As of July 2020, over one hundred thousand women had had images of themselves “stripped” and shared publicly, with that number growing by just under 200% over the following three months. Of these women, 70% were non-celebrities whose photos were taken from social media or private communications. These images can be used to extort or publicly shame victims, creating a growing “silencing effect” among women who fear being exploited through AI-generated sexually explicit content. Data retrieved from the AI bots creating these images also revealed that the technology has been used to produce child sexual abuse material (CSAM). At one Beverly Hills school, middle schoolers were discovered circulating deepfake images of their female classmates, and as AI pornography becomes more prevalent, such material can be expected to spread further.

Attorneys who work with victims of tech abuse report that deepfake content featuring non-celebrities and children is on the rise, as developments in AI technology have made the creation of such images relatively simple. With just one clear image of a person’s face, AI users can create a minute-long pornographic video of anyone they choose, at no cost and in under half an hour.

As deepfake pornography’s popularity has risen, research has revealed that seven of the ten top pornography websites now feature deepfake content, attracting over three hundred million views. While the problem extends globally, a survey of US men found that 48% have seen deepfake pornography. Furthermore, an overwhelming 74% of those men reported that they did not feel guilty about viewing such content, and 20% had considered learning how to make deepfake pornography themselves. Given such staggering numbers, deepfake pornography will not disappear anytime soon unless more serious action is taken to address the problem.

Recognizing the threat that AI poses with respect to sexual exploitation, many leaders in technology have called for a pause on AI development until its risks can be thoroughly evaluated. As of now, no federal law protects people from deepfake pornography, though several bills are currently under discussion. The most prominent of these is the Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE Act), which was largely inspired by the recent events involving Taylor Swift. The bill is designed to provide victims of deepfake pornography with an avenue for legal recourse and to hold those responsible for creating and publishing AI pornography accountable. At the moment, only ten states have legislation pending on such AI-generated content, so passage of the DEFIANCE Act at the federal level would mark a large step forward in addressing sexually exploitative AI content and the deepfake pornography industry throughout the United States.

The CSE Institute supports the passage of legislation directly addressing deepfake pornography and believes that more active steps must be taken to both prevent such content and support victims.

The CSE Institute will provide updates as they become available.

This piece is part of our first-year law student blog series. Congratulations to author Alex Taylor on being chosen!

All views expressed herein are personal to the author and do not necessarily reflect the views of the Villanova University Charles Widger School of Law or of Villanova University. 

