How the platform failed to stop men from abusing AI to sexualise women and minors
The European Union is taking a “very serious” look at Elon Musk’s AI chatbot Grok, integrated into the social media platform X, after it was used to generate sexualised images of women and minors.
EU digital affairs spokesman Thomas Regnier stated clearly:
“Grok is now offering a ‘spicy mode’ showing explicit sexual content with some output generated with childlike images. This is not spicy. This is illegal. This is appalling.”
“This has no place in Europe.”
Ursula von der Leyen said: “It is unthinkable behaviour, and the harm caused by these deepfakes is very real. We will not be outsourcing child protection and consent to Silicon Valley. If they will not act, we will.”
What is Grok actually doing?
Elon Musk’s AI chatbot Grok has been widely used to:
- Undress photos of women
- Sexualise girls and minors
- Create non-consensual AI deepfakes
This content spread massively before being taken down. In other words: AFTER the harm was done.
Dutch MEP Jeroen Lenaers, who led efforts to criminalise AI-generated child abuse images in the EU, warned that removing content after publication is not a solution, as the harm to victims is immediate and irreversible.
He stressed that independently verifiable safeguards must be in place before deployment.
Here’s what the numbers reveal
AI Forensics, a European non-profit, collected 50,000 mentions of Grok and 20,000 images generated by it between December 25, 2025, and January 1, 2026.
Results show:
- 53% of images showed individuals in minimal attire; 81% of those depicted were women
- 2% of images depicted minors
Many victims were:
- Teen girls sharing normal photos
- Then targeted by men prompting Grok to sexualise, degrade or humiliate them
This is gender-based violence.
And experts saw this coming
Experts and civil society organisations have stressed that this abuse was entirely predictable.
AI watchdogs had warned months ago that Grok’s image-generation tool was a “nudification engine waiting to be weaponised” and that safeguards were weak.
How did X respond? X has confirmed that image editing on Grok via the platform will now be limited to paid subscribers.
“The recent decision to restrict access to paying subscribers is not only inadequate – it represents the monetisation of abuse,” said Emma Pickering, Head of Technology-Facilitated Abuse and Economic Empowerment at Refuge.
Don’t let them fool you
This scandal is not about “misuse”. It is about POWER.
AI reflects and amplifies:
- Women’s bodies treated as public property
- Girls sexualised from childhood
- Male entitlement to access, alter and consume female bodies
- Tech spaces built by men, for men, without women’s safety in mind
This is male violence against women, not a tech glitch.
Why do men do this?
Because patriarchy rewards it.
Research shows:
- Sexualised humiliation reinforces dominance
- Anonymity reduces accountability
- Group dynamics normalise cruelty
- AI removes empathy: “It’s not real, so it doesn’t count”
But the trauma is real.
Victims report:
- Shame and fear
- Loss of control over their image
- Harassment multiplying after they speak out
- Being re-victimised by the platform itself
As one victim said:
“I feel shame for a body that is not even mine.”
What now? EWL recommends:
Strengthen the regulation of AI uses:
- Include sexual deepfakes among the unacceptable uses of AI under Article 5 of the AI Act
- Establish minimum safety standards for all AI applications enabling image and video modification (e.g. ban functionalities that enable digital “undressing”, “spicy mode”, or the sexualisation of individuals)
- Require AI developers to clearly label artificially generated content
- Require mandatory fundamental rights impact assessments for all AI systems, ensuring they integrate a gender-sensitive perspective
Push for a strong EU regulation on combating online child sexual abuse content:
- Adopt a comprehensive definition of online child abuse content
- Impose a clear legal obligation on online platforms to rapidly detect, report and remove child abuse content
- Guarantee victims access to protection, support and justice
- Block platforms that fail to ensure swift and systematic action