Open the website of one explicit deepfake generator and you’ll be presented with a menu of horrors. With just a couple of clicks, it offers to convert a single photo into an eight-second explicit video clip, inserting women into realistic-looking graphic sexual situations. “Transform any photo into a nude version with our advanced AI technology,” text on the website says.
The options for potential abuse are extensive. The 65 video “templates” on the website include a range of “undressing” videos in which the depicted women remove clothing, but also explicit video scenes named “fuck machine deepthroat” and various “semen” videos. Each video costs a small fee to generate; adding AI-generated audio costs more.
The website, which WIRED is not naming to limit further exposure, includes warnings saying people should only upload photos they have consent to transform with AI. It’s unclear if there are any checks to enforce this.
Grok, the chatbot created by Elon Musk’s companies, has been used to create thousands of nonconsensual “undressing” or “nudify” images of women in bikinis—further industrializing and normalizing the process of digital sexual harassment. But it’s only the most visible—and far from the most explicit. For years, a deepfake ecosystem, comprising dozens of websites, bots, and apps, has been growing, making it easier than ever before to automate image-based sexual abuse, including the creation of child sexual abuse material (CSAM). This “nudify” ecosystem, and the harm it causes to women and girls, is likely more sophisticated than many people understand.
“It’s no longer a very crude synthetic strip,” says Henry Ajder, a deepfake expert who has tracked the technology for more than half a decade. “We’re talking about a much higher degree of realism of what’s actually generated, but also a much broader range of functionality.” Combined, the services are likely making millions of dollars per year. “It’s a societal scourge, and it’s one of the worst, darkest parts of this AI revolution and synthetic media revolution that we’re seeing,” he says.
Over the past year, WIRED has tracked how multiple explicit deepfake services have introduced new functionality and rapidly expanded to offer harmful video creation. Image-to-video models now typically need only one photo to generate a short clip. A WIRED review of more than 50 “deepfake” websites, which likely receive millions of views per month, shows that nearly all of them now offer explicit, high-quality video generation and often list dozens of sexual scenarios in which women can be depicted.
Meanwhile, on Telegram, dozens of sexual deepfake channels and bots have regularly released new features and software updates, such as different sexual poses and positions. For instance, in June last year, one deepfake service promoted a “sex-mode,” advertising it alongside the message: “Try different clothes, your favorite poses, age, and other settings.” Another posted that “more styles” of images and videos would be coming soon and users could “create exactly what you envision with your own descriptions” using custom prompts to AI systems.
“It’s not just, ‘You want to undress someone.’ It’s like, ‘Here are all these different fantasy versions of it.’ It’s the different poses. It’s the different sexual positions,” says independent analyst Santiago Lakatos, who, along with the media outlet Indicator, has researched how “nudify” services often rely on infrastructure from big technology companies and have likely made significant money in the process. “There’s versions where you can make someone [appear] pregnant,” Lakatos says.
A WIRED review found more than 1.4 million accounts were signed up to 39 deepfake creation bots and channels on Telegram. After WIRED asked Telegram about the services, the company removed at least 32 of the deepfake tools. “Nonconsensual pornography—including deepfakes and the tools used to create them—is strictly prohibited under Telegram’s terms of service,” a Telegram spokesperson says, adding that the company removes such content when it is detected and that it removed 44 million pieces of policy-violating content last year.