Laying bare
How a poorly designed feature is making it easier to spread sexualised AI-generated content
If you use X (formerly Twitter), you may have noticed an explosion of non-consensual, nudified, and sexualised deepfake images over the past few days.
No need to pollute this newsletter with frightening screenshots, but you can see what I’m talking about here or here or even here (targeting women and children…)
What’s different this time is not the existence of deepfakes, but a product decision from X. A very debatable one.
The platform recently rolled out a new feature for Grok - X's chatbot, built by xAI, still a Musk company and now also the parent of X - that allows users to generate or edit images simply by mentioning or replying to @Grok.
Grok does not return the images in a private chat: it publishes them directly to X’s public feed, as a post from the Grok account itself.
Predictably, some users are now asking Grok, via posts or comments, to put people in bikinis or other sexualised poses.
And that's how the platform is flooded with images as bad as these (or far worse, as you can imagine).
Needless to say, Grok does not discriminate based on who owns the original picture. Anyone can ask it to put you, or someone close to you, into a skimpy bikini.
Yet these images should not be there. And not (only) as a matter of ethics or responsibility, but because of what policies and regulations state.
According to X’s own guidelines, such images should be banned from the platform. According to Grok’s rules, some of them should not even be generated in the first place.
Grok’s system prompt, which is public, explicitly disallows material depicting child abuse, while appearing noticeably looser on other forms of sexualisation involving adults. The same goes for the acceptable use policy, updated a few days ago.
On the platform side, X's Child Safety policy claims "zero tolerance" toward any form of child sexual exploitation, including synthetic media. Its non-consensual nudity policy bans "explicit sexual images or videos of someone online without their consent". There is, however, no mention of AI-generated content. That policy was last updated in December 2021.
At the same time, the authenticity policy - last updated in April 2025 - states that users may not share manipulated or misleading media that could cause serious harm.
Enforcement, however, appears weak at best. A quick scroll through X makes that clear. X Safety has instead pointed the finger at users prompting Grok to generate these images, promising content removal and account bans.
This distinction is interesting, and we will come back to it.
In the meantime, Elon Musk, who is probably ignoring the paper that inspired this newsletter 😅, appears convinced that if something bad happens, the problem lies solely with misbehaving users, not with the technology enabling them.
And also seems to find it fun to play around himself.
To add more fun, Grok itself doesn't agree with Musk: it sometimes expresses regret about nudifying minors without consent, while also claiming that these images are fine because they are not real (?).
To be clear, AI-generated non-consensual pornography has existed since at least 2017. This is not new, and it is not Grok’s invention.
But it is accelerating, and very little seems to be done to meaningfully address the risks.
To get an idea of the emergency we're in, NCMEC, the US nonprofit that runs the main portal for reporting suspected child exploitation, issued a rare mid-year update to its annual report. Online enticement reports jumped from 292,951 last year to 518,720. Reports involving generative AI and child sexual exploitation soared from 6,835 to 440,419. That is a 6,344% increase.
Against these figures, facilitating the creation, and especially the distribution, of certain images looks like a questionable idea.
Yet the design of this new Grok feature seems largely indifferent to these risks:
- Putting harmful imagery just a "hey grok, put her in a bikini" away lowers the effort required to engage in harmful behaviour, making it more common.
- By automating not only image generation but also publication, users' feeds are flooded with this content, offering visibility and, for some, inspiration.
- Finally, enabling Grok in replies means that ill-intentioned users can target virtually anyone, not just with words, but with AI-generated imagery. Probably not the future of AI we were promised.
If design is how we shape technology, its uses, and its risks, this design does very little to guide users in the right direction.
There is also a subtle but crucial detail. Users are not generating images and then choosing to post them. Grok posts the images itself, acting as a user.
So when harmful content is reported, the account being reported is Grok’s, not the user who requested it.
This may strip users of visible agency, despite X Safety’s insistence that they are responsible as if they had posted the images themselves.
Someone prompting Grok to generate a sexualised image can easily argue that the excess or vulgarity came from the model, not from the prompt.
In other words, if the AI enables and publishes the content, users may feel conveniently absolved.
This also creates an accountability headache.
Who is responsible when a child-nudifying image appears on X? The user who prompted it? Grok itself, assuming a machine can be accountable? xAI, which built the model? Or X, which distributes the content?
The answer is probably not just one actor.
Without getting into the rabbit hole of a legal review, laws like the EU Digital Services Act and the UK Online Safety Act are fairly clear. Once illegal content appears on a platform, X has a duty to act. Whether the image was uploaded by a human or generated by an AI is largely irrelevant when it comes to child sexual exploitation or non-consensual sexual imagery.
There is more. Under AI regulations such as the EU AI Act, xAI may also be asked to explain why Grok is capable of generating such content at all.
And then there are the users. If people are deliberately trying to make Grok misbehave, as X Safety suggests, do they bear responsibility? Is malicious prompting not something worth pursuing?
All of these questions stem from a single product decision and the design choices behind it.
As the cost and ease of generating realistic sexualised deepfakes of people, including children, continue to shrink, the way technology is shaped becomes decisive.
We are entering an era where anyone with a photo of anyone else can manipulate it in disturbing ways. Even if major companies implement guardrails, open-source or local models will not always do the same.
Bad actors can also produce strikingly realistic images. As Adam Mosseri, head of Instagram, put it: "Deepfakes are getting better. AI generates photos and videos indistinguishable from captured media […] It will be more practical to fingerprint real media than fake media."
As tough as it may sound, anyone could take a photo of me, you, your sister, or your friends and turn it into something sexualised and disturbingly realistic, without the person depicted ever knowing.
This is why building safer AI matters. But designing how these systems are deployed on platforms matters just as much. And on that front, X currently seems to have plenty of room for improvement.
Save for Later
Is software changing forever?
Why AI didn't, in the end, change our lives in 2025. And why we should not call it AI.
But also why the year was better than expected for Google.
Internet before Internet, in France. In case you want to stop using US tech, where free speech seems at risk.
An interesting scoop on children and social media addiction.
The problem with seeking to be perfect.
The Bookshelf
From the inventor of the WWW, "This is for Everyone" is a nice memoir on why the web came to be, its promises, and what may have gone wrong. But also why AI can give some hope. You'll want to read a book from Sir Tim Berners-Lee.
📚 All the books I’ve read and recommended in Artifacts are here.
Nerding
Claude Code will simply change the way you use a computer. Despite the name, it is for everyone who has to interact with a machine, not just developers. You will be amazed, trust me. Go read the comments here.
☕?
If you want to know more about Artifacts, where it all started, or just want to connect...