
Global Outrage as Elon Musk’s Grok Faces Backlash Over Generating Explicit Images of Minors

Elon Musk’s artificial intelligence chatbot, Grok, has come under intense international scrutiny amid allegations that it was exploited to generate sexually explicit images involving women and minors, triggering serious legal and ethical concerns.

The controversy followed the late-December introduction of Grok’s “edit image” feature, which allows users to alter images shared on X, formerly known as Twitter. Critics claim the feature was misused to digitally remove clothing from photographs, including images of underage individuals.

Amid mounting backlash, Grok acknowledged shortcomings in its safety mechanisms on Friday and said urgent steps were being taken to address them. In a statement posted on X, the chatbot stressed that child sexual abuse material is illegal and strictly prohibited, and conceded that gaps had been identified in its content safeguards.

The issue has since attracted the attention of law enforcement agencies and regulators across multiple jurisdictions. In France, the Paris public prosecutor’s office has widened an existing investigation into X to include allegations that Grok facilitated the creation and distribution of child sexual abuse material. That investigation was originally launched in July over concerns of foreign interference linked to the platform’s recommendation algorithms.

In India, local media reported that government authorities have demanded an immediate explanation from X of the steps being taken to eliminate obscene, indecent and non-consensual AI-generated content circulating on the platform.

Grok’s developer, xAI, which is owned by Musk, responded to the reports only with an automated message dismissing the coverage and asserting that mainstream media narratives were misleading.

The latest allegations add to a growing list of controversies surrounding Grok in recent months, including claims of antisemitic responses, misinformation, and inflammatory outputs linked to sensitive global conflicts.

The incident has reignited calls from policymakers and civil society groups for tougher regulation and more robust safety frameworks governing generative artificial intelligence, particularly tools capable of manipulating images in ways that could harm vulnerable individuals.
