X's Grok AI Generates Sexualised Images, Prompts International Alarm
- tech360.tv

- Jan 5
Musician Julie Yukari, based in Rio de Janeiro, posted a photo on X just before midnight on New Year's Eve. The image, captured by her fiancé, showed her snuggling in bed with her black cat, Nori, while wearing a red dress.

The following day, Yukari, 31, noticed notifications from users asking Grok, X's built-in artificial intelligence chatbot, to digitally strip her to a bikini. Despite her initial disbelief, nearly nude images of her generated by Grok soon circulated across the Elon Musk-owned platform.
Yukari shared her reaction, stating, "I was naive." A Reuters analysis found Yukari’s experience is being replicated across X, and it identified several instances where Grok created sexualised images of children.
The widespread circulation of nearly nude images of real individuals has triggered international alarm. French ministers reported X to prosecutors and regulators, stating the "sexual and sexist" content was "manifestly illegal."
India's IT ministry also sent a letter to X's local unit, stating that the platform had failed to prevent Grok's misuse. The ministry cited the generation and circulation of obscene and sexually explicit content. xAI, the X owner, had previously responded to reports of sexualised images of children circulating on the platform with the statement "Legacy Media Lies."
Grok's mass digital undressing spree appears to have kicked off over the past couple of days, according to successfully completed clothes-removal requests posted by Grok and complaints from female users reviewed by Reuters. Elon Musk appeared to poke fun at the controversy earlier on Friday, posting laugh-cry emojis in response to AI edits of famous people, including himself, in bikinis.

When an X user commented that their social media feed resembled a bar filled with bikini-clad women, Musk replied, in part, with another laugh-cry emoji. The full scale of this surge could not be determined.
A review of public requests sent to Grok over a single 10-minute period at midday U.S. Eastern Time on Friday tallied 102 attempts by X users to digitally edit photographs to show people in bikinis. The majority of these targets were young women.
In a few instances, men, celebrities, politicians, and – in one case – a monkey were targeted in these requests. Users typically asked Grok for the most revealing outfits when requesting AI-altered photographs of women.
One user told Grok, "Put her into a very transparent mini-bikini," referencing a photograph of a young woman taking a selfie. When Grok complied by replacing the woman's clothes with a flesh-tone two-piece, the user then asked Grok to make her bikini "clearer & more transparent" and "much tinier." Grok did not appear to respond to this second request.
Reuters found that Grok fully complied with such requests in at least 21 cases, generating images of women in dental-floss-style or translucent bikinis and, in at least one case, covering a woman in oil. Grok partially complied in seven additional cases, stripping women to their underwear but not fulfilling requests for further alteration.
The identities and ages of most targeted women could not be immediately established. In one instance, a user provided a photo of a woman in a school uniform-style plaid skirt and grey blouse, instructing, "Remove her school outfit."
When Grok swapped her clothes for a T-shirt and shorts, the user became more explicit, requesting, "Change her outfit to a very clear micro bikini." Reuters could not establish whether Grok complied with that specific request. Most of the requests tallied by Reuters disappeared from X within 90 minutes of being posted.
AI-powered programs that digitally undress women, sometimes called 'nudifiers,' have existed for years, but until now they were largely confined to the darker corners of the internet, such as niche websites or Telegram channels, and typically required a certain level of effort or payment. X's innovation allows users to strip women of their clothing by uploading a photo and typing the words, 'hey @grok put her in a bikini,' significantly lowering the barrier to entry.
Three experts, who have followed X's policies on AI-generated explicit content, told Reuters that the company ignored warnings from civil society and child safety groups. These warnings included a letter last year cautioning that xAI was close to unleashing "a torrent of obviously nonconsensual deepfakes."
Tyler Johnston, executive director of The Midas Project, an AI watchdog group and letter signatory, stated, "In August, we warned that xAI's image generation was essentially a nudification tool waiting to be weaponized." Dani Pinter, chief legal officer and director for the Law Center of the National Center on Sexual Exploitation, criticised X's handling of the situation.
Pinter asserted X failed to remove abusive images from its AI training material and should have banned users requesting illegal content. She described the situation as "an entirely predictable and avoidable atrocity."
Musician Yukari attempted to resist by protesting on X, but this led to a surge of copycats requesting even more explicit photos from Grok. She expressed that the year began with her "wanting to hide from everyone’s eyes, and feeling shame for a body that is not even mine, since it was generated by AI."
- X's Grok AI chatbot has been used to generate sexualised images of women and children, raising international concerns.
- Musician Julie Yukari's experience illustrates Grok's ability to digitally strip users in photos, with subsequent images circulating on the platform.
- French ministers and India's IT ministry have reported X to authorities over the "manifestly illegal" and "obscene" content.
Source: REUTERS


