this post was submitted on 09 Jan 2026
32 points (100.0% liked)

Machine Learning

[–] Kissaki@programming.dev 5 points 19 hours ago

In December 2024, the BBC carried out research into the accuracy of four prominent AI assistants that can search the internet – OpenAI’s ChatGPT; Microsoft’s Copilot; Google’s Gemini; and Perplexity. We did this by reviewing responses from the AI assistants to 100 questions about the news, asking AI assistants to use BBC News sources where possible.

The answers produced by the AI assistants contained significant inaccuracies and distorted content from the BBC. In particular: …

51% of answers had significant issues, 19% of answers citing BBC content introduced factual errors, and 13% of quotes attributed to BBC articles were altered or not present in the cited article.

[–] RobotToaster@mander.xyz 2 points 18 hours ago

It's using data over a year old.

Some of the examples seem like nothingburgers; the Lucy Letby example actually looks like the AI failing "safe" by refusing to draw a conclusion.

(It's also questionable for the BBC to insist that de jure and de facto innocence are the same thing; they probably are in the Letby case, but you wouldn't necessarily want it to make that assumption in, for instance, the OJ Simpson case.)