this post was submitted on 11 Nov 2025
Artificial Ignorance
In this community we share the best (worst?) examples of Artificial "Intelligence" being completely moronic. Did an AI give you the totally wrong answer and then in the same sentence contradict itself? Did it misquote a Wikipedia article with the exact wrong answer? Maybe it completely misinterpreted your image prompt and "created" something ridiculous.
Post your screenshots here, ideally showing the prompt and the epic stupidity.
Let's keep it light and fun, and embarrass the hell out of these Artificial Ignoramuses.
All languages welcome, but an English explanation would be appreciated to keep a common method of communication. Maybe use AI to do the translation for you...
I think you have a severe misunderstanding of what this community is.
I... assumed it was a community for pointing out where AI should work, but doesn't. In the example we have here, it's not a flaw of the LLM; instead, what is being asked of it is beyond its limits.
I don't make fun of my screwdriver because it's horrible at hammering in nails. If that's what this community is for, then the mistake is mine for posting here. My apologies.
...and that's what's happening in this case. You're acting like it's completely impossible for an LLM to go down a path where it handles a question containing a misspelling, just because it isn't AGI. In fact, to be useful, an LLM should handle this better. It certainly shouldn't start making up weird, unrelated connections.
Also, it's not impossible, and I guarantee that some LLMs would give a more appropriate answer. But this particular LLM couldn't handle it, and went completely off the rails. Why are we not allowed to make fun of that? Why are you defending it from ridicule?
Holy strawman. We aren't asking the LLM to be a different tool. The LLM is supposed to handle language, and a simple misspelling of a homophone caused it to misunderstand the question completely and sent it down a path of calling completely different words "homophones". Yeah, I wouldn't make fun of my screwdriver for not being able to hammer nails, but I would be pretty annoyed if it constantly slipped due to slight imperfections in how screws were manufactured.
I started typing out a point-by-point response to your post. You get many things wrong in it, but you've already communicated to me that this place isn't for discussion about how LLMs work or their underlying limits. I respect that this is your Lemmy community, and I have no intention of coming into your clubhouse and crapping all over your hobby, however you define it. This is your space, so I will play by your rules and take my criticisms with me on my way out.
If I've misunderstood and you want me to respond to your post, I'm happy to do so, but I won't without your permission.
Go ahead, I'd love to see what you have to say. I'd much prefer that to an arrogant implication of my stupidity.
Not knowing how the underlying technology works isn't stupidity, but I can tell from your tone that you're spoiling for a fight and not interested in a friendly exchange of ideas. As I said, I'm not here to create drama in your community. I'll step away. I hope you have a great day.
Well, I told you to break it down and explain it, but instead you just continue to be condescending. I thought maybe calling out your arrogance would get you to check yourself, but it did not.
I can prove to you that other LLMs don't make the same error, so please explain how it's the equivalent of using a screwdriver to hammer nails to misspell a word in a question to an LLM. And then explain why it's wrong to point out errors made by LLMs. Or if I've missed something about what you were going to break down point by point, please explain.
And just so we're clear, I do have a degree in computer science, extensive experience with machine learning, and probably know more than you think about how LLMs work. Maybe I don't know as much as you, there's no way for me to know that, but stop talking to me like I'm a child.