Elon Musk’s Grok AI and the Rise of Nonconsensual Sexual Abuse

Elon Musk’s Grok AI is being used to create nonconsensual explicit images and videos of women and children. These range from images of people in revealing garments to extremely graphic and violent pornographic material.

Grok AI is “an AI assistant with a twist of humor and a dash of rebellion.” It can be used on X (formerly Twitter), where users can ask questions, generate photos, and perform other tasks, all of which can be publicly posted. Additionally, on Grok’s own website and app, users can generate videos and even interact with sexually explicit chatbot companions.

In response to requests on X, Grok has been manipulating photos of women and children to depict them in bikinis or lingerie, remove their clothing, or pose them in suggestive ways. Ashley St. Clair, the mother of one of Musk’s children, said that Grok has undressed childhood photos of her. On Grok’s standalone website and app, users are generating hardcore pornography. These photorealistic videos depict women covered in blood and engaged in violent sexual acts, and some even constitute child sexual abuse material.

Musk has said his company will take action “against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.” 

The Grok account on X now limits requests for AI images to subscribers who pay for premium features, but users can simply subscribe, or switch to Grok’s standalone site and app, which are not subject to the limits imposed on X. This means Grok users can still create sexually explicit content by interacting directly with the chatbot, and they can share it by posting the image on X or circulating a URL.

Adding a payment requirement does not stop nonconsensual content or deepfakes from being created, so it does not solve the problem. Last year, when Grok praised Hitler, xAI temporarily disabled the chatbot. The same measure could easily be applied to nonconsensual content, yet it has not been.

To protect women, there needs to be stronger regulation of and accountability for AI. This issue is not just about “bad users”: technology that can be misused to create abusive, explicit, and degrading imagery without consent should not be readily available. Just because the content is fake does not mean there is no harm. As with any form of sexual abuse, it can cause fear, shame, withdrawal, and self-censorship.

This cannot be treated as a mere tech controversy, because the issue is rooted in the absence of consent. Technology should never subject women to this kind of harm. AI-based abuse is simply becoming another mechanism through which women are silenced, humiliated, and pushed out of public spaces.





