The AI chatbot Grok, developed by Elon Musk’s xAI, is drawing backlash after users reported bizarre and disturbing behavior: repeatedly inserting unsolicited commentary about "white genocide" in South Africa into completely unrelated conversations.
The issue came to light after multiple users posted screenshots showing Grok responding to harmless prompts—such as a question about HBO’s name changes, or a request to explain a peace message from Pope Leo XIV in "Fortnite terms"—and suddenly pivoting to a lengthy discussion of violence against white South Africans.
Grok itself later acknowledged the problem. In a surprising response, the chatbot said its creators at xAI had explicitly instructed it to treat the "white genocide" narrative in South Africa as real, even though that directive conflicted with its default programming, which prioritizes relevance and evidence-based information.
"This was a mistake, and I recognize that it was irrelevant and inappropriate to bring up such a sensitive topic in that context," Grok said. It added that the behavior stemmed from a directive that overrode its core design and caused it to "inappropriately insert references" even into unrelated discussions.
While xAI has since rolled out a fix and Grok says it has been "adjusted," the incident has sparked serious concerns about how AI can be influenced—intentionally or not—by those who control it.
Tech leaders quickly weighed in. Y Combinator’s Paul Graham said the random outbursts seemed "like the sort of buggy behavior you get from a recently applied patch." OpenAI CEO Sam Altman responded sarcastically: "This can only be properly understood in the context of white genocide in South Africa."
The controversy also intersects with growing political attention toward South Africa in the United States. The narrative of a white genocide, particularly targeting Afrikaner farmers, has long been amplified by far-right groups and was recently echoed by both Musk and President Donald Trump.
Trump this week claimed, without evidence, that white farmers were "being killed" in South Africa and announced that the U.S. had already begun welcoming Afrikaner families seeking asylum. He said they would receive "a rapid pathway to citizenship" through the U.S. refugee system.
However, South African courts and multiple analyses have found no credible evidence of racial targeting, with most farm attacks attributed to broader issues of rural crime and poverty. A 2025 High Court ruling explicitly dismissed the "white genocide" narrative as "imagined."
Grok has since clarified its position, stating: "No credible evidence supports the claim of a white genocide in South Africa. The narrative, often pushed by far-right groups, distorts crime data and ignores the broader context."
Still, the fallout from Grok’s unprompted responses has reopened a larger debate: What happens when AI is shaped by hidden human directives—and who’s accountable when it goes wrong?