this post was submitted on 13 May 2025
459 points (100.0% liked)
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Well, grep doesn't hallucinate things that aren't actually in the logs I'm grepping, so I think I'll stick to grep.
(Or ripgrep, rather.)
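The point is mechanical, not rhetorical: a pattern filter can only ever emit lines that exist in its input, so a "hallucinated" match is impossible by construction. A minimal sketch in Python of roughly what `grep -n ERROR service.log` does (the `service.log` name is a made-up placeholder):

```python
import re

def grep(pattern: str, lines: list[str]) -> list[tuple[int, str]]:
    """Return (line_number, line) for every input line matching the regex."""
    rx = re.compile(pattern)
    return [(n, line) for n, line in enumerate(lines, start=1) if rx.search(line)]

# hypothetical log file, for illustration only
with open("service.log") as f:
    for n, line in grep(r"ERROR", f.read().splitlines()):
        print(f"{n}:{line}")  # every hit is verbatim from the input; no invented lines
```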
Hallucinations become almost a non-issue when working with newer models, custom inference, multi-shot prompting, and RAG.
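To make that claim concrete, here's a minimal sketch of what such a mitigation stack looks like: a toy keyword-overlap ranker standing in for RAG retrieval, plus hardcoded few-shot examples for the multi-shot prompting. Everything in it (the example pairs, the retriever, the prompt wording) is assumed for illustration; a real setup would use an embedding index and an actual model call.

```python
# Hypothetical multi-shot demonstrations that model "answer only from context" behaviour.
FEW_SHOT = [
    ("Q: What port is the service on?\nContext: service listening on 8080",
     "A: 8080, as stated in the context."),
    ("Q: Who restarted the daemon?\nContext: connection from 10.0.0.7 accepted",
     "A: The context does not say."),
]

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Stand-in retriever: rank snippets by naive keyword overlap with the query."""
    words = set(query.lower().split())
    return sorted(corpus, key=lambda s: -len(words & set(s.lower().split())))[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a grounded, multi-shot prompt for the model."""
    shots = "\n\n".join(f"{q}\n{a}" for q, a in FEW_SHOT)
    context = "\n".join(retrieve(query, corpus))
    return ("Answer ONLY from the context; if it lacks the answer, say so.\n\n"
            f"{shots}\n\nQ: {query}\nContext: {context}\nA:")

logs = ["2025-05-13 service listening on 8080",
        "2025-05-13 connection from 10.0.0.7 accepted"]
print(build_prompt("What port is the service on?", logs))
```

The catch is that grounding only constrains the prompt; it doesn't change the decoder, which is what the replies below are getting at.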
But the models themselves fundamentally can't write good, new code, even if they're perfectly factual.
@vivendi @V0ldek * hallucinations are a fundamental trait of LLM tech; they're not going anywhere
God, this cannot be overstated. An LLM’s sole function is to hallucinate. Anything stated beyond that is overselling.