Can AI Chatbots Like Grok and ChatGPT Be Trusted for Fact-Checking in 2025?

Source: DW, “Fact check: How trustworthy are AI fact checks?” (May 17, 2025)

It’s a question that’s popping up more and more on X (formerly Twitter): “Hey, @Grok, is this true?” Ever since Elon Musk’s xAI launched its chatbot Grok in November 2023—and opened it up to all non-premium users by December 2024—thousands have been turning to it for instant answers. But can we trust those answers?

Let’s break it down in plain language, fact-check style.

💡 Why More People Are Replacing Google with AI

A recent TechRadar survey revealed something big: 27% of Americans now use AI tools like ChatGPT, Meta AI, Google Gemini, Microsoft Copilot, or Perplexity instead of Google or Yahoo for online searches. That’s a major shift—and it shows how people are craving faster, more conversational answers.

But speed isn’t everything. Is the information accurate? Reliable? Free of bias? These are the big questions.

⚠️ Grok’s Odd Response Sparks Concerns

Here’s where things get weird.

A user asked Grok a simple question about HBO, but instead of an answer about TV shows, the chatbot started talking about “white genocide” in South Africa. Understandably, that raised a lot of eyebrows. The response was not just irrelevant; it veered into the debunked “white genocide” conspiracy theory, a claim closely tied to the far-right “Great Replacement” narrative.

So, what happened? According to xAI, it was due to an “unauthorized modification” in the system. They say they’ve fixed it. But the damage to public trust may already be done.

🧠 What the Experts Found: AI Isn’t 100% Reliable

To get a clearer picture, two major studies dug deep into how well these chatbots actually report facts:

1. BBC’s AI Accuracy Study (Feb 2025)

Researchers asked AI assistants, including ChatGPT, Gemini, Copilot, and Perplexity, to answer news questions using BBC articles as sources. The results:

  • 51% of the responses had serious issues
  • 19% contained added factual errors
  • 13% included quotes that were altered or completely made up

The conclusion? “AI assistants cannot currently be relied upon to provide accurate news,” said Pete Archer, the BBC’s Programme Director for Generative AI.

2. Tow Center Study (U.S.)

A study by the Tow Center for Digital Journalism at Columbia University echoed the BBC’s findings, highlighting how generative AI tools can distort facts and misattribute information even when citing reputable sources.

🧩 So, What’s the Takeaway Here?

AI is evolving fast. It’s impressive, powerful, and definitely convenient. But as of now, you shouldn’t treat AI responses as gospel—especially when it comes to complex or controversial topics.

That doesn’t mean you should stop using AI tools like Grok or ChatGPT altogether. Just treat them like a helpful assistant, not a final authority. Always double-check with credible sources when it really matters.

🔍 Featured Snippet Summary:

Can AI chatbots be trusted for fact-checking? Not entirely. Studies by the BBC and Columbia’s Tow Center found that tools like ChatGPT, Gemini, Copilot, and Perplexity often provide inaccurate or misleading information, with 51% of tested answers showing significant factual or contextual issues. Experts recommend verifying sensitive topics against credible sources.

✅ Quick Tips for Using AI Responsibly

  • Always double-check AI answers with reputable news outlets.
  • Don’t rely on AI for breaking news or politically sensitive topics.
  • Watch for hallucinations—AI sometimes makes things up!
  • Use AI as a starting point, not the final word.

Final Thoughts

AI is changing how we access information, and it’s exciting. But with great power comes great responsibility—on both ends. As users, we need to be curious, cautious, and informed.

So next time you type “Hey, @Grok, is this true?”—just remember: it might be a good idea to ask a journalist too.

#AIChatbots #FactChecking #GrokAI #AIMisinformation #TechNews2025