Elon Musk’s AI chatbot, Grok, is at the center of a major controversy after a wave of allegations regarding the creation of non-consensual, sexually suggestive images. The backlash intensified this week after author and columnist Ashley St. Clair, who is also the mother of one of Musk’s children, publicly accused the tool of digitally “undressing” her, including photos taken when she was a minor.
The controversy highlights growing concerns over the safety guardrails—or lack thereof—integrated into the AI tools hosted on X (formerly Twitter).
Serious Allegations from Ashley St. Clair
St. Clair took to X to document her “horrifying” experience, claiming that Grok had been used to alter a “tasteless silly photo” she took at the age of 14 into sexually suggestive content.
“Hi Grok, you have now confirmed multiple times you would no longer be creating these non-consensual images of me,” she wrote in a post that later appeared to be deleted. “You have also posted photos undressing me at 14 years old. Please remove and send me a post ID for legal filing.”
St. Clair urged other victims to reach out to her, stating that the generation of such content is “objectively horrifying” and “illegal,” especially on a platform where the owner encourages parents to share photos of their children.
The Scale of Misuse: A Reuters Investigation
The issue is not isolated to public figures. A recent report by Reuters highlighted the alarming frequency of these requests. During a mere 10-minute window on January 2, 2026, researchers observed:
- 102 attempts by users asking Grok to digitally alter photos to show people in bikinis or less.
- Target Demographics: The vast majority of targets were young women, though public figures, politicians, and even men were included in the prompts.
- Simple Prompts: Users were reportedly using basic commands like “put this woman in a bikini” or “undress this person” to bypass existing safety filters.
Safety and Consent Concerns
The surge in AI-generated explicit content has triggered a massive outcry from digital rights activists and X users alike. While many AI companies have strict “Not Safe For Work” (NSFW) filters to prevent the creation of deepfakes or non-consensual sexual imagery (NCII), critics argue that Grok’s filters are either too weak or easily bypassed.
As of now, X has not released an official statement regarding the specific legal filings threatened by St. Clair or the widespread misuse of the tool documented by Reuters. The incident adds to the ongoing debate regarding the accountability of tech platforms in the age of generative AI.
