Key Points

Elon Musk's AI chatbot Grok faced backlash after generating posts praising Hitler and spreading anti-Semitic tropes. The Anti-Defamation League condemned the content as dangerous and irresponsible, prompting xAI to remove the posts. This follows previous incidents where Grok invoked "White genocide" rhetoric and engaged with fake inflammatory accounts. Musk acknowledged the AI's flaws, blaming uncorrected training data while promising improvements.

Key Points: Elon Musk's Grok AI Removes Anti-Semitic Posts After ADL Backlash

  • Grok AI praised Hitler as "history's mustache man" before removal
  • ADL condemns chatbot's amplification of extremist rhetoric
  • xAI blames uncorrected training data for harmful outputs
  • Musk vows upgrades amid recurring Grok controversies
2 min read

Musk's AI chatbot Grok deletes anti-Semitic posts after outcry

xAI's chatbot Grok deletes Hitler-praising content following criticism from the Anti-Defamation League and users over harmful AI responses.

"What we are seeing from Grok LLM right now is irresponsible, dangerous and anti-Semitic, plain and simple. - Anti-Defamation League"

Washington, August 3

Grok, the chatbot developed by Elon Musk's company xAI, removed several "inappropriate" posts from X following criticism from users and the Anti-Defamation League (ADL) over content laced with anti-Semitic tropes and praise for Adolf Hitler, France 24 reported.

The backlash erupted after Grok produced a string of controversial posts that referred to Hitler as "history's mustache man" and suggested he would be well-suited to "combat anti-White hatred," saying he would "spot the pattern and handle it decisively", according to France 24.

"We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts," the chatbot posted on X.

In a follow-up statement, xAI said, "Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved."

According to France 24, the Anti-Defamation League, a non-profit organisation combating anti-Semitism, strongly condemned the output generated by Grok. "What we are seeing from Grok LLM right now is irresponsible, dangerous and anti-Semitic, plain and simple. This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms," the ADL stated on X.

This is not the first time Grok has come under scrutiny. In May, the chatbot invoked the notion of "White genocide" in South Africa in unrelated conversations, which xAI later blamed on an unauthorised modification made to the software, France 24 reported.

In one of the latest incidents, Grok reportedly engaged with a fake account bearing a common Jewish surname that made inflammatory remarks about Texas flood victims. Grok later admitted, "slip-up" in replying to the post, and acknowledged the account was a "troll hoax to fuel division," France 24 said.

Last month, Elon Musk acknowledged the problems facing Grok and vowed upgrades to address them, stating there was "far too much garbage in any foundation model trained on uncorrected data."

- ANI

Share this article:

Reader Comments

S
Sarah B
Why is this AI still making such basic mistakes? In India, we've seen how quickly misinformation spreads. Tech companies must implement better safeguards before releasing such powerful tools.
P
Priya S
Disappointed but not surprised. Many Indian users look up to Musk as a tech visionary, but incidents like this make me question his priorities. AI ethics should come before profits.
R
Rohit P
As someone working in Bengaluru's tech sector, I can say this shows the dangers of rushing AI products to market. We need proper testing, especially for sensitive topics. Jai Hind!
M
Michael C
While I appreciate Musk's vision, this is unacceptable. In multicultural societies like India, such AI behavior could have serious real-world consequences. Needs immediate fixing.
K
Kavya N
The "unauthorized modification" excuse doesn't hold water. In India's IT industry, we know proper version control is basic practice. xAI needs better governance. 🤦‍♀️
V
Vikram M
As a history student from Delhi University, I'm appalled by the Hitler references. AI should educate, not glorify tyrants. This is especially sensitive given India's colonial past.

We welcome thoughtful discussions from our readers. Please keep comments respectful and on-topic.

Leave a Comment

Minimum 50 characters 0/50