this post was submitted on 11 Nov 2025
168 points (99.4% liked)
Artificial Ignorance
256 readers
1 user here now
In this community we share the best (worst?) examples of Artificial "Intelligence" being completely moronic. Did an AI give you the totally wrong answer and then in the same sentence contradict itself? Did it misquote a Wikipedia article with the exact wrong answer? Maybe it completely misinterpreted your image prompt and "created" something ridiculous.
Post your screenshots here, ideally showing the prompt and the epic stupidity.
Let's keep it light and fun, and embarrass the hell out of these Artificial Ignoramuses.
All languages welcome, but an English explanation would be appreciated to keep a common method of communication. Maybe use AI to do the translation for you...
founded 11 months ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
A probable answer. That's not a sensible question so a ridiculous answer is expected.
Strait is misspelled. Both straits and sounds are bodies of water so it's a very sensible question. You might also ask what the difference between a cove and a bight is.
Didn't make the connection. Very difficult for transformers, since they don't listen to the words; they also don't read the letters. So this is a "don't use an AI for something it fundamentally cannot do" example.
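To make the "no letters, no sounds" point concrete, here's a minimal sketch assuming the tiktoken package (the cl100k_base encoding is just an illustrative choice), showing that homophones reach the model as unrelated token IDs:

```python
# Minimal sketch: LLMs see subword token IDs, not letters or sounds.
# Assumes the `tiktoken` package is installed (pip install tiktoken);
# the encoding name is only an illustrative choice.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["strait", "straight", "sound"]:
    ids = enc.encode(word)
    print(f"{word!r} -> {ids}")

# "strait" and "straight" sound identical when spoken, but they map to
# unrelated token ID sequences, so nothing in the input itself tells the
# model they are homophones.
```

Whatever association the model has between "strait" and "straight" comes from patterns in its training text, not from spelling or pronunciation.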
An error in a question should either result in correcting the question or indicating that the question doesn't make sense.
Calling "straight" and "sound" homophones is a pure demonstration of the LLM's ignorance. Maybe it got fooled by "straight" and "strait" being homophones and some how crossed wires, but that's actually the point. It is ignorant, despite how "intelligent" it might sound.
I think you're holding a fundamental misunderstanding of what today's LLMs are.
LLMs don't have the ability to reason about what you may have meant. The most they can do, given the right training, is something like: "people who used words or patterns similar to yours meant X, Y, or Z, and of those, X has the highest probability given the words you chose." That is exactly what it did (see the toy sketch below).
This would require the holy grail of AI which doesn't exist yet: Artificial General Intelligence (AGI)
AGI is the ability to reason that humans (and some animals) have. None of today's LLMs (Grok, Claude, ChatGPT, etc.) are AGI. They are all the much more limited ANI (Artificial Narrow Intelligence). ANI can only work with whatever training data you give it, and even today's giant LLMs are only a tiny fraction of what a system would need for AGI. None of our current technology can take the data we have today and build an AGI model. As the models scale, the limits of LLMs start to fracture and fall apart.
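A toy sketch of the "highest-probability interpretation" point above. The candidate readings and their scores are invented purely for illustration; a real model assigns probabilities to tokens, not to named interpretations:

```python
# Toy illustration: picking whichever reading of the prompt scored highest.
# The candidate interpretations and their scores are made up for this example.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidates = {
    "explain 'strait' vs 'sound' as bodies of water": 2.0,
    "riff on 'straight' vs 'sound' as words": 2.3,
    "ask the user to clarify the question": 0.4,
}

for (reading, _), p in zip(candidates.items(), softmax(list(candidates.values()))):
    print(f"{p:.2f}  {reading}")
```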
I think you have a severe misunderstanding of what this community is.
I... assumed it was a community to point out where AI should work, but doesn't. In the example we have here it's not a flaw of the LLM; rather, what is being asked of it is beyond its limits.
I don't make fun of my screwdriver because it's horrible at hammering in nails. If that's what this community is for, then the mistake is mine for posting here. My apologies.
...and that's what's happening in this case. You're acting like it's completely impossible for an LLM to go down a path where it recognizes that the question contained a misspelling, because it isn't AGI. In fact, to be useful an LLM should handle this better. It certainly shouldn't start making up weird, unrelated connections.
Also, it's not impossible, and I guarantee that some LLMs would give a more appropriate answer. But this particular LLM couldn't handle it, and went completely off the rails. Why are we not allowed to make fun of that? Why are you defending it from ridicule?
Holy strawman. We aren't asking the LLM to be a different tool. The LLM is supposed to handle language, and a simple misspelling of a homophone caused it to misunderstand the question completely and sent it down a path of calling completely different words "homophones". Yeah I wouldn't make fun of my screwdriver for not being able to hammer nails, but I would be pretty annoyed if it constantly slipped due to slight imperfections in how screws were manufactured.
I started typing out a point by point response to your post. You have many things wrong in your post, but you've already communicated to me that this place isn't for discussion about how LLMs work or their underlying limits. I respect this is your Lemmy Community and I have no intention of coming into your club house and crapping all over your hobby in whatever way you define it. This is your space and I will play by your rules, and take my criticisms with me on my way out.
If I've misunderstood and you want me to respond to your post, I'm happy to do so, but I won't without your permission.
Go ahead, I'd love to see what you have to say. I'd much prefer that to an arrogant implication of my stupidity.
Not knowing how the underlying technology works isn't stupidity, but I can tell from your tone that you're spoiling for a fight and not interested in a friendly exchange of ideas. As I said, I'm not here to create drama in your community. I'll step away. I hope you have a great day.
Well, I told you to break it down and explain it, but instead you just continue to be condescending. I thought maybe calling out your arrogance would get you to check yourself, but it did not.
I can prove to you that other LLMs don't make the same error, so please explain how it's the equivalent of using a screwdriver to hammer nails to misspell a word in a question to an LLM. And then explain why it's wrong to point out errors made by LLMs. Or if I've missed something about what you were going to break down point by point, please explain.
And just so we're clear, I do have a degree in computer science, extensive experience with machine learning, and probably know more than you think about how LLMs work. Maybe I don't know as much as you, there's no way for me to know that, but stop talking to me like I'm a child.
Maybe the LLM successfully predicted that this is a homophone issue, but screwed up correcting the word and then explained the uncorrected word. Didn't even occur to me. Fun.
They might not listen to words, but they can rhyme and compose songs just fine, so they must have some sort of statistical correlation for how the sounds of words are related.
Yeah. But there being boneappletea involved is not always expected. Barely any human would have performed better.
I disagree, I immediately knew what they were asking.
I expected some gay joke.
10 years ago all of the search engines would have returned a site explaining the right answer, which I know because they always returned the right results even with misspellings.
Not only did it misunderstand the question, the answer was gibberish. 'Straight' and 'a sound' are NOT homophones. 'Strait' and 'straight' are homophones.
Google still does? What is your problem? Both Google's AI and their results are perfectly correct.
Google now gives you 10 somewhat related, ad-based hits.
Source: Reddit https://share.google/VwIIMHW2Ou8sQHmQF
First Google hit. No ads.
What are your exact search terms? Do you run an ad blocker?
Straight vs sound. No special plugins. Android. I used the Google search bar.
Note that Google did the heavy lifting of automatic search term correction.
That's actually impressive, because normally I get a bunch of garbage ad-based links (paid, sponsored, click-through stuff, etc.). Maybe because the search is a "vs" comparison and not a specific sellable item?
I just put in the term from the question. I am based in Germany, if that matters.