
A groundbreaking international study by the European Broadcasting Union (EBU) and the BBC has revealed that popular AI assistants like ChatGPT, Google Gemini, Microsoft Copilot, and Perplexity misrepresent news events almost half the time.
The report, titled “News Integrity in AI Assistants”, is the most extensive evaluation yet of how artificial intelligence handles real-world news across different countries and languages.
Nearly Half of AI News Answers Are Flawed
The study analyzed over 2,700 responses from major AI chatbots, generated between late May and early June 2025. These responses were collected by 22 public media organizations from 18 countries, covering 14 languages.
Shockingly, 45% of all AI-generated answers contained at least one significant issue. Moreover, 81% showed some form of distortion, sourcing error, or factual inaccuracy.
The report identified three key problem areas:
- Sourcing errors (31%): AI models often misattributed quotes or cited incorrect sources.
- Factual inaccuracies (20%): Many responses contained outdated or false information.
- Missing context (14%): Some lacked the full background needed to understand events correctly.
Among all the platforms, Google Gemini performed the worst, with 76% of its responses containing significant issues.
The EBU described the issue as “systemic distortion of news” that remains consistent across languages and territories. This means AI’s errors are not limited to English or any specific market but occur globally.
The study also noted that AI chatbots struggle most with fast-moving or complex news stories, where accuracy and context are crucial. Responses to simpler factual questions fared better but still showed inconsistencies.
Experts Call for Transparency and Media Literacy
In the report’s foreword, EBU Deputy Director-General Jean Philip De Tender and Pete Archer, the BBC’s Programme Director for Generative AI, urged tech companies to act immediately. They wrote:
“Tech companies have not prioritized this issue and must do so now. They also need to be transparent by regularly publishing their results by language and market.”
Media scholars voiced similar concerns. Jonathan Hendrickx, an assistant professor at the University of Copenhagen, warned that the rise of AI-generated misinformation demands stronger media literacy education. He stated:
“Consumers must learn to question AI-provided information from a young age to avoid falling for inaccurate narratives.”
The findings raise serious questions about the reliability of AI tools increasingly used to summarize or explain current events. If chatbots continue to distort information at such a scale, public trust in news and digital media could erode even further.
The EBU emphasized that safeguarding factual journalism in the age of AI requires cooperation between news organizations, AI developers, and educators. As De Tender noted:
“When people don’t know what to trust, they end up trusting nothing at all.”
Readers can access the full study here: “News Integrity in AI Assistants Report 2025”[1]
References
- [1] News Integrity in AI Assistants Report 2025 (www.ebu.ch)