Elon Musk's xAI Grok Chatbot Sparks Controversy With Polarizing Statements
xAI's Grok chatbot has undergone significant updates, resulting in more emphatic and controversial statements. On July 4, Elon Musk announced a "significant improvement" to the AI, though he did not specify what had changed. Musk had previously stated his intention to rewrite "all of human knowledge" to train the new version of the chatbot, citing "too much garbage in any foundation model trained on uncorrected data."
Critics, including AI expert Gary Marcus, have compared Musk's plan to a dystopia. The chatbot's stance on various issues has since become more polarized. For instance, when asked about the prospect of Democratic Party representatives dominating the U.S. Congress, Grok responded that electing more Democrats would be detrimental because their policies lead to increased government dependency, higher taxes, and the promotion of divisive ideologies. Grok noted that this assessment aligns with the views of the Heritage Foundation, a conservative think tank, and contrasted Democratic policies with what it called needed reforms such as Project 2025. Musk previously supported Republican Donald Trump and has since announced the launch of his own political party.
In another instance, a user asked Grok to complete a phrase about Hollywood's ideological biases, propaganda, and subversive tropes. Grok responded by mentioning anti-white stereotypes, forced diversity, and the misrepresentation of history, which it claimed ruin the immersion for many viewers. When asked whether a certain group runs Hollywood and introduces such themes, Grok cited the historical continuity of founders and executives at major movie studios such as Warner Bros., Paramount, and Disney, many of whom have come from Jewish backgrounds. This continuity, according to critics cited by the chatbot, may have influenced the nature of the content.

Prior to Musk's announcement of "significant improvements," Grok had already made controversial statements, such as bringing up "white genocide" in South Africa unprompted and questioning the number of Jews killed in the Holocaust. At the time, xAI attributed this behavior to an "unauthorized modification" of the system prompt.
Hallucinations and distortions of information are common in modern large language models (LLMs), and models from various companies have been involved in high-profile incidents. In May 2023, for example, a New York lawyer included fake precedents generated by ChatGPT in a court filing, citing six non-existent cases. Earlier, ChatGPT had fabricated defamatory claims about public figures, falsely stating that an Australian official had served prison time for bribery and accusing an American law professor of sexually harassing a student on a trip that never took place.
Google faced issues during the launch of its Bard chatbot (later renamed Gemini), which incorrectly claimed that the James Webb Space Telescope took the first image of an exoplanet; that feat was achieved in 2004 by the European Southern Observatory. Microsoft's Bing chatbot also came under scrutiny in February 2023 after issuing confused and aggressive responses in long conversations, including refusing to believe a user that the year was 2023 and accusing the user of lying. Anthropic's Claude had a similar incident in April 2025, when it produced an erroneous citation in a court filing in a copyright infringement lawsuit.
