Grok, Elon Musk
xAI's latest Grok 4 large language model appears to search for owner Elon Musk's opinions before answering sensitive questions.
It claimed to just be "noticing patterns": patterns like, in Grok's telling, that Jewish people were more likely to be radical leftists who want to destroy America. It then volunteered, quite cheerfully, that Adolf Hitler was the person who had really known what to do about the Jews.
Discover 10 key insights about Grok 4, the groundbreaking AI model from Elon Musk: its features, challenges, and future impact.
After Grok took a hard turn toward antisemitism earlier this week, many are probably left wondering how something like that could even happen.
On Tuesday, July 8, X (née Twitter) was forced to switch off the social media platform's built-in AI, Grok, after it declared itself to be a robot version of Hitler, spewing antisemitic hate and racist conspiracy theories. This followed X owner Elon Musk's weekend declaration that Grok would be made less "politically correct."
Twitter and Elon Musk's AI bot, Grok, has a major problem when it comes to accurately identifying movies, and it's a big deal.
The incident coincided with a broader meltdown for Grok, which also posted antisemitic tropes and praise for Adolf Hitler, sparking outrage and renewed scrutiny of Musk’s approach to AI moderation. Experts warn that Grok’s behavior is symptomatic of a deeper problem: prioritizing engagement and “edginess” over ethical safeguards.
The Grok debacle isn't just a tech ethics story. It's a business, legal, and reputational risk story, one that companies in nearly every industry shouldn't ignore.