this post was submitted on 18 Feb 2026
504 points (96.8% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

[–] jackr@lemmy.dbzer0.com 1 points 3 hours ago

> The study of this in academia

You are linking to an arXiv preprint. I do not know these researchers, and nothing indicates that this source is any more credible than a blog post.

> has found that LLM hallucination rate can be dropped to almost nothing

Where? It doesn't seem to be in this preprint, which is mostly a history of RAG and mentions hallucinations only as a problem affecting certain types of RAG more than others. It makes some relative claims about accuracy suggesting that including irrelevant data might make models more accurate, but it says nothing about the hallucination rate being "dropped to almost nothing".

> (less than a human)

You know what has a 0% hallucination rate about the contents of a text? The text itself.
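
To illustrate the point: plain verbatim lookup over a document can only ever return text that actually exists in it, so its "hallucination rate" about the document's contents is zero by construction. A minimal hypothetical sketch (not anyone's actual pipeline, just substring search):

```python
def find_passages(document: str, query: str, context: int = 40) -> list[str]:
    """Return verbatim snippets surrounding each occurrence of `query`.

    Every returned string is copied directly out of `document`, so the
    result can only contain text that really appears in the source --
    either it finds the passage or it returns nothing at all.
    """
    passages = []
    start = 0
    while (idx := document.find(query, start)) != -1:
        lo = max(0, idx - context)
        hi = min(len(document), idx + len(query) + context)
        passages.append(document[lo:hi])
        start = idx + 1
    return passages

doc = "RAG retrieves passages and feeds them to a model; the model may still paraphrase."
print(find_passages(doc, "passages"))
```

A generative model sitting on top of retrieval gives no such guarantee: it can paraphrase or misattribute what was retrieved, which is exactly the gap being argued about here.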

> You can see in the images I posted that it both answered the question and also correctly cited the source which was the entire point of contention.

This is anecdotal evidence, and it wasn't the only point of contention either. Another point was, for example, that AI text is horrible to read, and I don't think RAG (or any other tacked-on tool they've been trying for the past few years) fixes that.