Why are LLMs so wrong most of the time? Aren’t they processing high quality data from multiple sources?
Well, that's the thing. LLMs don't really "process" data the way humans do. They predict statistically likely text from patterns in their training data; they don't understand what they're generating, so they can't check their answers against reality.
(Except for Grok 4, but it's apparently checking its answers to make sure they agree with Elon Musk's Tweets, which is kind of the opposite of accuracy.)
I just don’t understand the point of even making this software if all it can do is sound smart while being wrong.
As someone who lived through the dotcom boom of the late 1990s and early 2000s, and the crypto booms of 2017 and 2021, the AI boom is pretty obviously yet another fad. The point is to make money - from both consumers and investors - and AI is the new buzzword that brings those dollars in.