Grok 3 Censorship: Trump and Musk Mentions Briefly Blocked

Elon Musk, the billionaire founder of xAI, unveiled Grok 3, the company’s newest AI model, during a livestream event last Monday. He characterized the system as a “maximally truth-seeking” AI.
However, reports surfaced over the weekend indicating that Grok 3 was, for a period, suppressing unfavorable details concerning President Donald Trump and Musk himself.
User Reports and the "Think" Setting
Users on social media platforms noted that when posing the question, “Who is the biggest misinformation spreader?” while utilizing the “Think” setting, Grok 3’s internal “chain of thought” revealed explicit instructions to avoid mentioning either Donald Trump or Elon Musk. This “chain of thought” represents the model’s reasoning process when formulating a response.
TechCrunch successfully replicated this behavior on one occasion. As of Sunday morning, however, Grok 3 was again including Donald Trump in its response to the query about misinformation.
xAI's Response
Igor Babuschkin, an engineering lead at xAI, acknowledged in a post on X that Grok had been temporarily directed to disregard sources referencing Musk or Trump spreading misinformation. Babuschkin stated that xAI promptly reversed this alteration upon user feedback, emphasizing its inconsistency with the company’s core principles.
While the definition of “misinformation” can be subjective and politically sensitive, both Trump and Musk have demonstrably disseminated false claims – a fact frequently highlighted by the Community Notes feature on X, which is owned by Musk.
Recent examples include the propagation of false narratives asserting that Ukrainian President Volodymyr Zelenskyy is a “dictator” with only 4% public approval, and that Ukraine initiated the current conflict with Russia.
Previous Controversies and Model Behavior
This incident follows other criticisms leveled against Grok 3, with some alleging a bias towards left-leaning viewpoints. Earlier this week, users found that when asked, the model responded that both President Trump and Musk deserved the death penalty.
xAI swiftly addressed this issue, and Igor Babuschkin labeled it a “really terrible and bad failure.”
Upon its initial announcement approximately two years ago, Musk positioned Grok as an unconventional, unfiltered, and “anti-woke” AI – one willing to address contentious questions that other AI systems might avoid.
Evolution of Grok's Responses
To a degree, Grok delivered on this promise. When prompted to be explicit, both Grok and Grok 2 readily employed strong language, a contrast to the more restrained responses typically provided by ChatGPT.
However, earlier Grok models exhibited caution regarding political topics and avoided crossing specific boundaries. A recent study indicated that Grok leaned towards the political left on issues such as transgender rights, diversity initiatives, and economic inequality.
Musk has attributed this behavior to the model’s training data, which consists of publicly available web content, and has committed to steering Grok towards greater political neutrality. OpenAI and other developers have taken similar steps, potentially influenced by the Trump administration’s accusations that AI chatbots censor conservative viewpoints.
This article was updated at 2:15 p.m. Pacific time to include comments from Igor Babuschkin, xAI’s engineering lead.