Elon Musk has launched yet another ambitious tech project with Grokipedia, an AI-powered encyclopedia meant to serve as a less biased alternative to Wikipedia. However, researchers at Cornell Tech have raised serious concerns about Grokipedia’s reliability, citing extensive trustworthiness issues in its source citations just days after its debut. This blog post explores the findings of the recent Cornell Tech study, the controversial sources Grokipedia references, and the criticism the platform has drawn from the expert community.
What is Grokipedia?
Announced by Musk’s xAI company, Grokipedia is an AI-generated encyclopedia aiming to provide factual and neutral information by leveraging artificial intelligence. Musk has repeatedly criticized Wikipedia for allegedly containing “propaganda,” positioning Grokipedia as a more credible resource free from traditional editorial bias. Nevertheless, its content appears to heavily draw from Wikipedia, sparking debate over its originality and independence.
Troubling Findings from Cornell Tech Researchers
A comprehensive analysis published on arXiv by researchers Harold Triedman and Alexios Mantzarlis from Cornell Tech’s Security, Trust, and Safety Initiative casts doubt over Grokipedia’s credibility. The study analyzed nearly the entire Grokipedia database — 883,858 articles — mere days after the platform’s October 27, 2025 launch.
The central finding: Grokipedia includes over 2.6 million citations to sources that Wikipedia has flagged as unreliable, blacklisted, or deprecated. That accounts for about 6% of Grokipedia’s total citations — double the corresponding rate on Wikipedia. The implication is significant: relying on less credible sources undermines Grokipedia’s claim to accurate, unbiased knowledge.
Fringe and Controversial Websites Cited Extensively
Even more concerning, Grokipedia references 180 websites that Wikipedia editors avoid altogether. These include highly controversial sources:
- Stormfront: a notorious white nationalist forum, cited 42 times
- InfoWars: a conspiracy theory website, cited 34 times
- Natural News: an anti-vaccine platform, cited multiple times
Grokipedia cites these sources without any label or qualifier noting their unreliability, putting users seeking trustworthy information at risk. Compared with Wikipedia, Grokipedia shows an 86% higher rate of citations to sources classified as “generally unreliable” and a 275% higher rate of citations to “blacklisted” sources — while only 7.7% of its citations come from outlets considered generally reliable, a lower share than Wikipedia’s.
Criticism from the Wikimedia Foundation and Experts
The Wikimedia Foundation, the non-profit behind Wikipedia, responded by highlighting Grokipedia’s dependence on Wikipedia content despite its “less biased” promises. They stressed, “Even Grokipedia needs Wikipedia to exist,” underlining that Grokipedia’s AI-generated content largely pulls from Wikipedia’s data.
Attempts to obtain comment from Musk’s xAI resulted in an automated statement dismissing the criticism as “Legacy Media lies” — a response many perceive as sidestepping accountability.

Implications for Readers and Researchers
The rise of AI-generated knowledge bases like Grokipedia represents both opportunities and risks for information seekers. On one hand, AI can synthesize and present vast knowledge more fluently. On the other, as this study reveals, the quality of sources and editorial oversight remain crucial to safeguard accuracy.
Readers should approach Grokipedia and similar platforms with caution, especially considering the proliferation of fringe sites cited without transparency. Cross-referencing information with established, credible sources remains essential.
Conclusion
Elon Musk’s ambition to reinvent the encyclopedia with AI is bold, but Grokipedia faces significant challenges. The Cornell Tech study reveals major reliability problems stemming from questionable source citations and a heavy dependence on Wikipedia content. Until Grokipedia addresses these concerns and improves its source vetting, users should remain vigilant in verifying the information it provides.
As the digital knowledge landscape evolves, nurturing trustworthy AI tools alongside strong editorial standards will be key to informed public discourse.
Photo by ThisisEngineering and John on Unsplash