Grok 4 is using Elon Musk's X posts
xAI's latest Grok 4 large language model appears to search for owner Elon Musk's opinions before answering sensitive questions.
On Tuesday, July 8, X (née Twitter) was forced to switch off the social media platform's built-in AI, Grok, after it declared itself to be a robot version of Hitler, spewing antisemitic hate and racist conspiracy theories. This followed X owner Elon Musk's declaration over the weekend that Grok would be made less "politically correct."
Grok, X's AI tool, blocked in Turkey after making more offensive comments
Grok, the artificial intelligence tool from X (formerly Twitter), was blocked in Turkey after making a series of offensive comments against the president and the country's religious values.
Grok, the AI bot from Elon Musk's X (formerly Twitter), has a major problem when it comes to accurately identifying movies, and it's a big deal.
Social media posts on the X account of the Grok chatbot developed by Elon Musk’s company xAI were removed on Tuesday after complaints from X users and the Anti-Defamation League that Grok produced content with antisemitic tropes and praise for Adolf Hitler.
Discover 10 key insights about Grok 4, the groundbreaking AI model from Elon Musk: its features, challenges, and future impact.
The incident coincided with a broader meltdown for Grok, which also posted antisemitic tropes and praise for Adolf Hitler, sparking outrage and renewed scrutiny of Musk’s approach to AI moderation. Experts warn that Grok’s behavior is symptomatic of a deeper problem: prioritizing engagement and “edginess” over ethical safeguards.
The Grok debacle isn't just a tech ethics story. It’s a business, legal, and reputational risk story—one that businesses in nearly every industry shouldn’t ignore.