A new analysis reveals that AI-powered “nudify” websites, which generate nonconsensual deep fake pornography based on normal pictures of victims, are making millions of dollars by exploiting the services of major tech companies like Google, Amazon, and Cloudflare.

A recent investigation by Indicator, a publication focused on digital deception, has shed light on the disturbing prevalence and profitability of AI-powered “nudify” websites. These sites allow users to upload photos and generate sexual deep fake images, often targeting women and girls. Despite the harmful nature of these services, they continue to thrive, with millions of monthly visitors and potential annual earnings of up to $36 million.

The analysis, which examined 85 nudify and “undress” websites, found that a majority of these sites rely on tech services provided by industry giants such as Google, Amazon, and Cloudflare to operate and maintain their online presence. Amazon and Cloudflare provide hosting or content delivery services for 62 of the 85 websites, while Google’s sign-on system is used on 54 of them.

Alexios Mantzarlis, a cofounder of Indicator and an online safety researcher, criticized the tech industry’s “laissez-faire approach to generative AI,” which has allowed the nudifier ecosystem to flourish. He argued that these companies should have taken immediate action to cease providing services to AI nudifiers once it became clear that their sole purpose was to facilitate sexual harassment.

According to Wired, in response to the findings, Amazon Web Services stated that it has clear terms of service requiring customers to follow applicable laws and that it acts quickly to review and disable prohibited content when reports of potential violations are received. Google also acknowledged that some of the sites violate its terms and said its teams are working to address these violations and develop long-term solutions. Cloudflare had not provided a comment at the time of writing.

Nudify websites have experienced explosive growth since 2019, fueled by rapid advances in generative AI image tools. These services often make money by selling “credits” or subscriptions that can be used to generate the abusive images. The impact on victims is devastating: social media photos are stolen and used to create nonconsensual explicit imagery, and teenagers have even targeted their classmates in a new form of cyberbullying.

Breitbart News reported last year on students at Beverly Vista Middle School getting expelled after creating and spreading deep fake nudes of classmates:

Five eighth-grade students from Beverly Vista Middle School in Beverly Hills, California, have been expelled for their involvement in the creation and circulation of AI-generated nude pictures of their peers. The expulsions came after a unanimous vote by the Beverly Hills Unified School District board of education at a special meeting held on Wednesday evening.

The disturbing case came to light in February when explicit images depicting the faces of 16 eighth-grade students, aged 13-14, superimposed on artificially generated naked bodies, were shared through messaging apps. The victims’ sexes have not been disclosed, but the incident has sent shockwaves through the community and raised serious concerns about the misuse of emerging technologies.

Using various open-source tools and data, the researchers estimated that 18 of the websites made between $2.6 million and $18.4 million in the past six months alone, potentially amounting to around $36 million per year. The United States, India, Brazil, Mexico, and Germany were identified as the top countries where people accessed these sites.

Read more at Indicator here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
