this post was submitted on 11 Nov 2025
168 points (99.4% liked)

Artificial Ignorance


In this community we share the best (worst?) examples of Artificial "Intelligence" being completely moronic. Did an AI give you the totally wrong answer and then in the same sentence contradict itself? Did it misquote a Wikipedia article with the exact wrong answer? Maybe it completely misinterpreted your image prompt and "created" something ridiculous.

Post your screenshots here, ideally showing the prompt and the epic stupidity.

Let's keep it light and fun, and embarrass the hell out of these Artificial Ignoramuses.

All languages welcome, but an English explanation would be appreciated to keep a common method of communication. Maybe use AI to do the translation for you...


Sent to me by a friend, don't judge the misspelling of "strait" lol.

[–] BananaTrifleViolin@lemmy.world 22 points 1 week ago (2 children)

The AI did spot it, but then started spewing nonsense because it's shit. It looks like it was trying to write about "straight" vs "strait" but couldn't resolve that into actual correct text, and instead spewed nonsense about "straight" and "a sound" being homophones.

The problem is that there will be people lapping up this nonsense, or more subtle errors. AI is alpha software at best, and it's crazy how it's being pushed onto users.

[–] samus12345@sh.itjust.works 6 points 1 week ago

straight vs strait

Ah, thank you, I couldn't figure out where it assembled that nonsense from.

[–] zout@fedia.io 4 points 1 week ago (1 children)

So you probably already know this, but the AI wasn't trying to write about anything, since it works without intent. It predicts the most likely combination of words in reply to your prompt. Since this probably isn't a very common question, and the spelling error makes it even less plausible, the AI doesn't have much to go on, so it returns the combination of words the model rates as most likely, but with a low probability of actually being correct.
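
To make that concrete, here's a minimal sketch (assuming Python with the Hugging Face transformers library and the small "gpt2" checkpoint, none of which is mentioned above, and a made-up prompt) of what "predicting the most likely combination of words" means: the model only scores candidate next tokens, and nothing in that process checks whether the answer is true.

```python
# Minimal sketch, assuming the "transformers" library and the "gpt2" checkpoint.
# The model assigns a probability to every possible next token; it never "knows" the answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Is Puget Sound a strait?"  # hypothetical prompt, chosen only for illustration
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)        # turn scores into probabilities

# Show the top candidates; the model just ranks tokens, nothing verifies the content.
top = torch.topk(probs, k=5)
for p, tok_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(tok_id)!r}: {p.item():.3f}")
```

Generation is just repeating that ranking step token after token, so an unusual or misspelled prompt only shifts which confident-sounding words come out on top, not whether they're correct.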

[–] howrar@lemmy.ca 14 points 1 week ago (1 children)

An LLM is without intent in the same way a motor is without intent. But if you block a motor from doing its job, we'd still say that it's "trying to spin". What would you propose as an alternative to "trying"?

[–] samus12345@sh.itjust.works 6 points 1 week ago

"Trying" is fine, "attempting" could also be used. I've never heard that there needed to be intent behind trying something, only an underlying directive, as you said.