Grok, X, and a New Age of Sexual Harassment: How AI Is Enabling Abuse

Image Credit: Ekō via Wikimedia

Artificial intelligence has revolutionised the way we work, the way we interact on social media, and the very way we consume knowledge.

From self-driving cars to large language models, we have become increasingly reliant on artificial intelligence to complete a huge range of tasks for us - but is this really wise? There is no doubt that AI is prone to numerous faults, but are these passive mistakes or conscious choices made by programmers? There have been horrifying instances where AI appears to have been intentionally trained to produce hateful and prejudiced media and text, and this seems to be the case with Elon Musk's new 'Twitter troll', Grok.

Musk’s highly problematic chatbot has once again proven why we should not blindly place our trust in a new technology. The solution, however, is not to abandon the AI itself, but to prohibit its misuse. A tool created to simplify tasks and maximise productivity is now being criminally misused to spread digital degeneracy and enable sexual exploitation. Grok has recently aided users of X - formerly known as Twitter - in generating sexual deepfakes and indecent images of women and children without their consent. Predatory users can give Grok an image of any woman and instruct it to “photoshop her in a bikini” or “remove her clothes”, sparking outrage among X users and the wider public.

Due to the magnitude of this issue, the UK government has had to intervene. “X has got to get a grip on this,” said Prime Minister Keir Starmer, labelling Grok’s production of sexual deepfakes “disgusting” and “disgraceful”, while Ofcom has launched a formal investigation into the platform. One woman told the BBC that over 100 AI images of her were generated using this Grok feature. Elon Musk has often refused to take the blame when put under the spotlight, accusing the UK government, and its threats of a ban, of suppressing free speech and encouraging censorship. This is, of course, not the first time that Grok has generated offensive content on X, and instead of doing the sensible thing and deactivating the chatbot, the company has elaborately swept the issue under the corporate carpet.

This newfound ability to generate deepfake images has given rise to a culture of digital misuse, harassment, and sexual abuse. X has been relentlessly advertised as a space in which free speech is supposedly less restricted than on other social media platforms, but to what extent does free speech shade into harassment and cybercrime? And who is most affected by this?

Women have long fallen victim to everyday sexual harassment and misconduct, whether in the workplace, on public transport, or in public spaces. The easy availability of sexual deepfakes has now transformed social media platforms into yet another place where women are a target. This misuse of AI empowers predators to threaten and blackmail women with revenge porn and scandalising images. In a world where women are hyper-cautious simply leaving the house, they now have to guard themselves even more intensely online.

Nor is this the first complaint levelled against Grok for generating offensive online content. X was recently forced to delete posts in which the AI praised Hitler, promoted white supremacy, and spewed antisemitic rhetoric.

While this chatbot has proven to be extremely problematic, the issue is not necessarily the generative AI model itself. Programmers and creators like Elon Musk have enabled the chatbot to generate unfiltered text and media that purposefully target specific demographics and groups. Grok raises clear concerns about the future of AI, but perhaps more importantly it reflects the morals and ethics of some of the most powerful people in the world. It’s time we asked who is behind the programming and design of these technologies, as such creations will always reveal the values of their architects.