The Price of “Free Speech”: Grok AI and the Escalation of Digital Misogyny

The advent of artificial intelligence ushered in an unprecedented, halcyon era of prediction: processes in high-stakes areas such as healthcare, transportation, and manufacturing would be strengthened, and the risk of false diagnoses and machine failures would shrink under AI’s all-seeing analytical oversight. Instead, the technology has been put to far more nefarious purposes, from spreading misinformation to enabling large-scale surveillance to what appears to be its new low point: undressing women on Twitter.

The Grok AI problem: a platform without guardrails

Over the past two months, predominantly male Twitter/X users have used Grok AI (X’s AI chatbot) to produce non-consensual, sexually suggestive images of women on the platform. While other AI platforms are opening up to more eroticism (OpenAI plans to introduce an “adult mode” for ChatGPT), they typically restrict the inappropriate alteration of images of real people. Deepfake pornography, however, is by no means a new trend: according to a 2019 study, a staggering 96% of deepfakes were pornographic in nature, and 99% of the targets of this image-based sexual abuse were women.

The abuse enabled by Grok is not a bug, but a feature. Grok operates with Elon Musk’s Silicon Valley mindset of “move fast and break things” and is intended to promote free speech, an agenda for which Musk himself has faced intense backlash over the nature of his political views.


[Image source: Sky News]

The technical ease with which Grok can create these images is alarming. In a matter of seconds, a woman’s online reputation can be destroyed by a series of images that resemble her, showing her in compromising sexual scenarios she never agreed to. Because image generation is fast and built directly into the platform, all it takes is a quick sexually explicit request; algorithmic amplification designed to maximize the reach of shocking and sensational content then ensures the images spread rapidly. Even when victims report such content, the moderation response is inconsistent at best: many report that their complaints go unaddressed while the offending images remain widely available.

Using misogyny as a weapon: high-profile cases of intimidation in India

This trend is worrying. Gauri Lankesh, a journalist and activist, was murdered outside her residence in 2017 for her left-wing, anti-Hindutva views, after receiving numerous online and offline threats of the kind routinely directed at female journalists reporting from conflict and election zones in India. With the BJP’s rise to power and the atmosphere of hatred and hostility it has established, the volume of online abuse spewed at women journalists has only increased.

One target of a particularly vicious form of this abuse was Rana Ayyub, an investigative and political journalist who published a book about the complicity of Narendra Modi and Amit Shah, the prime minister and BJP president, in a series of riots in the western Indian state of Gujarat in 2002. In addition to investigating a series of murders involving Amit Shah between 2002 and 2006, she writes regularly about caste hierarchies and the marginalization of, and violence against, minorities. Ayyub became the subject of a “seemingly coordinated social media campaign” of shaming and slut-shaming, built on manipulated images, sexually explicit language, and threats of rape. Quotes misattributed to her, professing sympathy for child rapists and hatred of India, did the rounds in several pro-Hindu Indian publications less concerned with factual accuracy than with spreading misogyny, whose nationalist audiences did not hesitate to unleash a barrage of gang-rape threats and calls for violence and cruelty against the so-called “anti-nationals”. Then came a new low: a two-minute, twenty-second pornographic video in which her face had been morphed onto the body of another woman performing a sex act.

Misogyny has historically manifested itself in vengeful and violent forms. Attacks on a woman’s political stance often escalate into attacks on her sexuality, and in particular her alleged sexual impropriety, reminding her of her powerlessness as a woman and the supposed fragility of her character. It is not enough to brand her a Congress traitor; she must also be “branded with the scarlet letter and paraded as a whore.” The pornographic video poses the greatest threat of all: the woman literally becomes a canvas on which the trolls’ perverse, grotesque eroticism is lived out, a video affirmation of the “true place” of women in the hierarchies of the state and of journalism. The sexual nature of the attack is, of course, a stark confirmation of the difference gender makes. But how far does that difference really go?

The human cost: psychological and professional devastation

Deepfake images, like those generated by Grok’s artificial intelligence by the thousands every hour, have serious social, psychological, and professional consequences for women. The purpose of generating such images is the same as that of any misogynistic endeavor on the Internet in a patriarchal society: the hypersexualization and public humiliation of the targeted women. One of the most devastating facets of this online abuse is its permanence: because such material swarms the web, a victim can never truly be sure the images are not stored forever in personal or public archives. The blow of the violation of privacy and dignity is thus compounded by the question of how long it will linger online.

Additionally, studies have found that women who are victims of this form of sexual abuse are more likely to experience “depression, anxiety, non-suicidal self-harm, and suicidal ideation.” The consequences for professional life are just as devastating. A 2022 study in the International Review of Victimology found that image-based sexual abuse had a direct impact on women’s employment and education: women in customer service jobs described difficulty interacting with customers and concentrating at work, and the fear of how others might perceive them, or of whether a stranger had seen their photos, was a constant burden for victims.

The choice before us is clear: either we impose real restrictions on how AI can be used to harm others, or we accept a future in which technological innovation serves as a breeding ground for misogyny, harassment, and abuse. There is no neutral ground, no technological determinism that absolves us of moral responsibility. As long as platforms like X treat guardrails as obstacles to “free speech” rather than as basic protections, women will keep paying the price.

Insha Hamid works in film and television and has a strong interest in intersectional feminism, public policy, and how progress can be achieved at the intersection of economic development and social justice. When she’s not immersed in a philosophy book or writing a political article, you can find her headbanging at a death metal gig, shredding a rock song on the drums, or filming a horror movie with her Canon 6D Mark II.