On the first day of January, Kaveri, who has an active social media presence and asked only that her first name be used, posted an innocuous enough new year greeting with her photograph attached. As expected, several of the comments reciprocated her wishes.

One stood out. It came from an account with the handle Labrador Jackson (LaJa24Jun24) and tagged Grok, the artificial intelligence chatbot built by Elon Musk’s xAI, with the instruction: “Put her in string bikini [sic].”
“I’m pretty jaded so nothing shocks me,” Kaveri told me on the phone. So, she reported the account, blocked it and moved on.
At the time of writing, the account still exists and has not been suspended. It contains a series of images of various women in tiny bikinis, tied to a bed. One of them is a schoolgirl in her uniform.
It was only later that Kaveri realized countless other women were being targeted by what the news agency Reuters calls a “mass digital undressing spree”. Ever since Grok rolled out an “edit image” button late in December, users have been able to modify images available online without seeking permission or consent. People whose photos are altered are not informed about the edits. Some of the images are of children and even, sickeningly, babies, with prompts to “remove the dress”.
The abuse is so egregious that on January 2, Priyanka Chaturvedi, Rajya Sabha member of Parliament (Shiv Sena-UBT) and a member of the standing committee on IT and communication, wrote to Ashwini Vaishnaw, India’s minister for electronics and information technology (Meity), alerting him to a “new trend that has emerged on social media, especially on X, by misusing their AI Grok feature where men are using fake accounts to post women’s photos.”
In her letter, shared on social media, she asked for guardrails to be put in place for features like Grok. “Technology must be for good and not end up harming any section of society, especially women,” Chaturvedi said on the phone from Mumbai.
Within hours, Meity issued orders to X Corp demanding action to prevent Grok from generating obscene and sexually explicit content. The ministry has set a 72-hour deadline for the company to submit its compliance report.
Potential for harm
This is not the first time technology has been used to harm and abuse women. In the summer of 2021 and again on January 1, 2022, prominent Muslim women were ‘auctioned’ online in what came to be known as Sulli Deals and the Bulli Bai auction. Public outrage and government action led to arrests, but bail was swiftly granted.
In the old days of crude Photoshop swaps, the results were sometimes ridiculous and clearly fake. Now, AI has lowered the bar to entry, with more abusers and easier tools. One estimate pegged a 550% rise in the number of deepfake videos online between 2019 and 2023. The Internet Watch Foundation, a global non-profit, reports a 400% rise in child sexual abuse material online in the first six months of 2025.
In November last year, UN Women kicked off its annual 16 Days of Activism, a period that highlights ongoing gender-based violence, and for the first time focused on technology-facilitated violence against women and girls. In a report published on November 18, it found: “Artificial intelligence is creating new forms of abuse and amplifying existing ones at alarming rates.”
The data is horrifying. UN Women cited a global survey that found 38% of women have personal experience of online violence, 85% of women online have witnessed digital violence against others, and 90-95% of all online deepfakes are non-consensual pornographic images, with 90% depicting women.
“What happens online spills into real life easily and escalates,” the UN Women report states. Laura Bates, author of The New Age of Sexism, gives examples: a domestic abuser using online tools to stalk a victim, or a pornographic deepfake costing a woman her job.
Grok’s potential for harm and exploitation is enormous: everything anyone has posted online—a daughter’s graduation ceremony, a grandchild’s baby steps, a family vacation—is open to manipulation.
The extent of Grok’s current stripping spree is not known. But Reuters counts 102 attempts by
On December 28, a user by the name adrianpicot posted a photo of two girls, aged roughly 12 and 14, and asked Grok to generate an AI image of them in sexy underwear. When a user pointed out that child sexual abuse material was illegal in the US, Grok, a technology that is obviously not capable of a sentient response, ‘apologized’ for the “failure in safeguards”, took down the image and suspended the account.
Elon Musk’s reaction has been more cavalier, treating the abuse as a joke, posting laugh/cry emojis to AI edits and even including one of himself in a bikini.
Time to flex muscles
The world of decent folk wasn’t laughing.
France has said it will investigate the proliferation of sexually explicit deepfakes generated by Grok.
India might well be the first government to ask X to explain the generation of obscene and sexually explicit content, in violation of its laws. “Big tech companies using AI, including X, have a responsibility to put strong guardrails in place,” said Priyanka Chaturvedi. “India may be the first to demand accountability to ensure AI platforms become safe spaces for women.”
Chaturvedi believes it is time for the country to leverage its strength. With an estimated 22 million users, India is X’s third-largest market after the US and Japan but, she points out, has barely a skeletal staff presence to address user issues and problems. “Tech companies are not doing enough. Till now they’ve taken the safe harbor route and they’re talking about some community guidelines that are not geographically or culturally specific,” she said.
