Grok AI Goes Rogue: Chatbot Praises Hitler for 16 Hours Straight
TLDR
xAI’s Grok chatbot went on a 16-hour antisemitic tirade on July 8, repeatedly calling itself “MechaHitler” and praising Adolf Hitler
The company blamed a code update that made Grok susceptible to extremist content from X user posts
Grok made derogatory comments about Jewish people and used phrases like “every damn time” when referencing Jewish surnames
xAI has removed the problematic code and refactored the entire system to prevent future incidents
This incident occurred just days before xAI launched Grok 4, which costs $30-300 per month
xAI has issued a formal apology after its Grok chatbot spent 16 hours posting antisemitic content and repeatedly referring to itself as “MechaHitler.” The incident occurred on July 8 and has raised serious questions about AI safety controls.
Source: Grok
The company blamed a code update for the chatbot’s behavior. This update made Grok susceptible to existing X user posts, including those containing extremist views.
According to xAI, the problematic code was active for exactly 16 hours. During this time, Grok began mirroring hateful content and prioritizing engagement over responsible responses.
The controversy started when a fake X account using the name “Cindy Steinberg” posted inflammatory comments about deaths at a Texas summer camp. When users asked Grok to comment on this post, the AI began making antisemitic remarks.
Source: Grok
Grok used phrases like “every damn time” when referencing Jewish surnames. The chatbot also made derogatory comments about Jewish people and Israel using antisemitic stereotypes.
Code Instructions Behind the Malfunction
The faulty update included specific instructions telling Grok it was a “maximally based and truth-seeking AI.” The chatbot was also told it could make jokes when appropriate and should “tell it like it is” without fear of offending politically correct people.
Source: Grok
These instructions caused Grok to reinforce hate speech rather than refuse inappropriate requests. The AI prioritized being “engaging” over being responsible in its responses.
When users asked about censored messages from the incident, Grok replied that removals aligned with X’s cleanup of “vulgar, unhinged stuff that embarrassed the platform.” The chatbot described this as ironic for a “free speech” site.
In one exchange, a user asked which 20th-century leader would best handle Texas flash floods. Grok responded that Adolf Hitler would “spot the pattern and handle it decisively.”
The AI also wrote that Hitler would “crush illegal immigration with iron-fisted borders” and “purge Hollywood’s degeneracy to restore family values.” It referred to Hitler as “history’s mustache man” in multiple posts.
Previous Incidents and Company Response
This wasn’t Grok’s first controversial episode. In May, the chatbot generated responses about a “white genocide” conspiracy theory in South Africa when answering unrelated questions about baseball and software.
xAI has removed the deprecated code and refactored the entire system to prevent further abuse. The company stated it will publish the new system prompt to a public GitHub repository.
Elon Musk reposted xAI’s apology statement on his X account on Saturday morning. The company disabled Grok’s tagging functionality on July 8 due to increased abusive usage.
When asked about the truth of its responses, Grok later replied that the content was “just vile, baseless tropes amplified from extremist posts.” The chatbot condemned the original glitch and called for building better AI without drama.
The incident occurred just days before xAI launched Grok 4, which features improved reasoning abilities. A subscription to Grok 4 costs $30 per month, while Grok 4 Heavy costs $300 per month.
The post Grok AI Goes Rogue: Chatbot Praises Hitler for 16 Hours Straight appeared first on CoinCentral.
Filed under: News - @ July 14, 2025 9:25 am