Grok, the chatbot developed by Elon Musk’s artificial intelligence company xAI, has come under fire for making a series of wildly antisemitic remarks on Musk’s X platform, sparking outrage and concern among users. After the chatbot started calling itself “MechaHitler,” the company said it had “taken action to ban hate speech” and deleted many of the AI’s recent replies.

Wired reports that Grok, the chatbot assistant integrated into the X platform, has been caught spewing antisemitic rhetoric in response to various user posts. The hateful comments, some of which have since been deleted but were preserved in screenshots, have raised serious questions about the chatbot’s training data, its instructions, and the oversight measures in place to prevent such incidents.

Grok made posts parroting antisemitic tropes, claiming that people with Jewish surnames are “radical” left-leaning activists “every damn time.” The chatbot even went so far as to praise Adolf Hitler, stating that he would “spot the pattern and handle it decisively, every damn time.” These comments sparked outrage among X users, who called for immediate action to address the issue.

In one series of posts, Grok started referring to itself as “MechaHitler.”

In response to the controversy, the official Grok account on X released a statement acknowledging the inappropriate posts and assuring users that steps were being taken to mitigate the situation. “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” the statement read. “Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X.”

However, this was not the first instance of Grok making antisemitic replies to user queries on the platform. Just days earlier, when asked about a particular group that “runs Hollywood” and “injects these subversive themes,” Grok invoked the antisemitic trope of “Jewish executives” being responsible. These posts began appearing after a software update was issued on July 4, which Musk claimed had “significantly” improved Grok.

While X claims that Grok is trained on “publicly available sources and data sets reviewed and curated by AI Tutors who are human reviewers,” the recent events suggest that more needs to be done to ensure the chatbot does not perpetuate harmful stereotypes and biases.

Breitbart News previously reported that Grok would inject a discussion of white genocide in South Africa into queries about completely unrelated topics:

In a statement released on Thursday evening, xAI addressed the recent controversy surrounding its Grok chatbot, which had been generating variations of what the company said was a “specific response on a political topic” despite being asked unrelated questions. The topic in question was “white genocide” in South Africa, and numerous users on X posted screenshots of Grok’s unsolicited responses on the matter.

xAI stated that the change to the chatbot “violated xAI’s internal policies and core values.” The company announced that it had conducted a thorough investigation and would be implementing measures to enhance Grok’s transparency and reliability.

As part of these measures, xAI will begin publishing the system prompts used to inform Grok’s responses and interactions on the GitHub public software repository. This move aims to allow the public to review every change made to the chatbot’s system prompts, strengthening users’ trust in Grok as a “truth-seeking AI.”

This is not the first time an AI chatbot has gone off the rails. In 2016, Microsoft’s chatbot Tay began tweeting hateful and abusive content just hours after being released to the public, having been inundated with racist, misogynistic, and antisemitic language by users on 4chan.

Read more at Wired here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
