this post was submitted on 11 Jul 2025
68 points (98.6% liked)

Fuck AI

3436 readers
852 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

Multiple things have gone wrong with AI for me, but these two pushed me over the brink. This is mainly about LLMs, though other AI has not been particularly helpful for me either.

Case 1

I was trying to find the music video from where a screenshot was taken.

I provided o4 mini the image and asked where it was from. It refused, saying that it does not discuss private details. Fair enough. I told it that it was xyz artist. It then listed three of their popular music videos, none of which was the correct answer to my question.

Then I started a new chat and described in detail what the screenshot was. It once again regurgitated similar things.

I gave up. I did a simple reverse image search and found the answer in 30 seconds.

Case 2

I wanted a way to create a spreadsheet for tracking investments which had xyz columns.

It did give me the correct columns and rows, but the formulae for the calculations were off. They were almost correct most of the time, but "almost correct" is useless when working with money.

I gave up. I manually made the spreadsheet with all the required details.

Why are LLMs so wrong most of the time? Aren't they processing high-quality data from multiple sources? I just don't understand the point of even making this software if all it can do is sound smart while being wrong.

[–] FlashMobOfOne@lemmy.world 8 points 22 hours ago (1 children)

The first time I ever used it I got a bugged response. I asked it to give me a short summary of the 2022 Super Bowl, and it told me Patrick Mahomes won the Super Bowl with a field goal kick.

Now, those two things separately are true. Mahomes won. The game was won on a field goal.

The LLM just generates whatever continuation is most probable given its training data, not what is factually correct, so it smashed two true statements together because that looked like the most probable response.
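A toy sketch of that failure mode, assuming a hypothetical hand-built bigram table rather than a real LLM: greedy "most probable next word" generation can splice two separately-true sentences into one false claim, because each step only asks what word usually comes next.

```python
# Toy illustration (NOT a real LLM): a hypothetical bigram table built
# from two true sentences, "Mahomes won the Super Bowl" and
# "the game ended on a field goal". Greedy next-word prediction
# follows whichever word most often came next in the training text.
next_word = {
    "Mahomes": "won",
    "won": "the",
    "the": "Super",   # "Super" happens to edge out "game" in this toy table
    "Super": "Bowl",
    "Bowl": "on",     # spillover from the second sentence starts here
    "on": "a",
    "a": "field",
    "field": "goal",
}

def generate(start, steps):
    """Greedily chain the most probable next word from `start`."""
    words = [start]
    for _ in range(steps):
        w = next_word.get(words[-1])
        if w is None:  # no known continuation: stop
            break
        words.append(w)
    return " ".join(words)

print(generate("Mahomes", 8))
# Each individual hop is plausible on its own, but the chained output,
# "Mahomes won the Super Bowl on a field goal", fuses two facts into
# one statement that was never in the training sentences.
```

Real models work over tokens with learned probabilities and long contexts, but the basic dynamic is the same: locally probable steps, no global fact check.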

It does that a lot. Don't use GenAI without checking its output.

[–] Outwit1294@lemmy.today 7 points 20 hours ago (2 children)

I have noticed that it is terrible when you know at least a little about the topic.

[–] ZDL@lazysoci.al 1 points 55 minutes ago

Ooh! Now do the press!

[–] spankmonkey@lemmy.world 10 points 20 hours ago

Or, to put it more accurately: AI is terrible all the time, but it's easier to notice when you know at least a little about the topic.