BBC study finds AI assistants deliver wrong news 45% of the time

A major international study led by the BBC found that nearly half of all AI assistant responses about news contain significant errors, with Google’s Gemini performing worst at a 76% error rate. The research, which analyzed over 3,000 responses across ChatGPT, Microsoft Copilot, Google Gemini, and Perplexity in 14 languages and 18 countries, reveals systematic problems with how AI tools process and deliver news information to users.

The big picture: Professional journalists from 22 public media outlets evaluated the AI responses and found that 45% contained at least one significant issue, 31% had serious sourcing problems, and 20% contained major accuracy errors.

  • The problems weren’t isolated incidents but represented “deep, structural issues with how these assistants process and deliver news, regardless of language, country, or platform.”
  • Unlike Google search results that allow users to review multiple sources, AI chatbot responses “often feel final” and “read with authority and clarity, giving the impression that it’s been fact-checked and edited.”

Key findings: Google Gemini performed significantly worse than competitors, with three-quarters of its responses containing major problems.

  • Gemini “misfired in a staggering 76% of responses, mostly due to missing or poor sourcing.”
  • Other AI assistants also struggled with attribution, sometimes citing quotes “to outlets that hadn’t published anything even close to what was being cited.”
  • In some languages, the assistants “outright hallucinated details,” while in others they provided “simplistic or misleading overviews instead of crucial nuance.”

Why this matters: AI assistants are rapidly becoming a primary news source, particularly among younger users.

  • The Reuters Institute’s 2025 Digital News Report estimates that 7% of all online news consumers now use AI assistants to get their news, a figure that rises to 15% among those under 25.
  • The fluent delivery of incorrect information makes detection difficult, as “these tools are often wrong with such fluency that it doesn’t feel like a red flag.”

What they’re doing about it: The European Broadcasting Union and its partners released a “News Integrity in AI Assistants Toolkit” to address these issues.

  • The toolkit serves as “an AI literacy starter pack designed to help developers and journalists alike.”
  • It outlines “both what makes a good AI response and what kinds of failures users and media watchdogs should be looking for.”

What experts recommend: The study emphasizes the need for media literacy and ongoing scrutiny of AI-generated news content.

  • Users are urged to “check your sources, and stick to the most reliable ones” rather than accepting AI responses at face value.
  • The research suggests AI news responses “should come with a disclaimer” given the current error rates.
