[–] merc@sh.itjust.works 7 points 1 month ago (1 child)

"That stops it from making stuff up"

No it doesn't. That's simply not how LLMs work. They're "making stuff up" 100% of the time. If the training data is good, the stuff they make up more or less matches the training data. If the training data isn't good, they make up stuff that merely sounds plausible.
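A minimal sketch of that point (all numbers made up): at every step the model just turns scores over candidate next tokens into probabilities and samples one. There is no separate "truth mode"; grounded answers and hallucinations come out of the same process.

```python
import numpy as np

# Hypothetical scores ("logits") the model might assign to candidate
# next tokens. The model always does the same thing with them:
# softmax into probabilities, then sample.
vocab = ["Paris", "London", "Berlin", "Narnia"]
logits = np.array([3.2, 1.1, 0.9, 0.4])  # made-up numbers

probs = np.exp(logits) / np.exp(logits).sum()  # softmax
token = np.random.choice(vocab, p=probs)       # sampling: always a guess

print(dict(zip(vocab, probs.round(3))), "->", token)
```

Whether the sampled token happens to be true depends entirely on what the training data pushed those scores toward; the mechanism itself never checks.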

[–] WanderingThoughts@europe.pub 2 points 1 month ago* (last edited 1 month ago) (1 children)

These days, if you ask it for sources/links, it'll search the web and pull information from the pages it finds instead of relying only on its training data. That doesn't work for everything, of course. And the biggest risk is that sites get polluted with slop, so the sources themselves become worthless over time.
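A rough sketch of that flow (every function name below is hypothetical, standing in for whatever search API and model a given product actually uses): search, pull text from the result pages, and hand that text to the model as context. The answer can only be as good as the pages retrieved, which is exactly the slop problem.

```python
def web_search(query: str) -> list[str]:
    """Stand-in for a real search API; returns result page URLs."""
    return ["https://example.com/article"]

def fetch_page_text(url: str) -> str:
    """Stand-in for fetching a page and extracting its text."""
    return "Example page content relevant to the query."

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to the model."""
    return "Answer grounded in the retrieved text (in theory)."

def answer_with_sources(question: str) -> str:
    # Retrieve pages, then feed their text to the model as context
    # instead of letting it answer from training data alone.
    urls = web_search(question)
    context = "\n\n".join(fetch_page_text(u) for u in urls)
    prompt = (f"Using only the sources below, answer and cite them.\n\n"
              f"Sources:\n{context}\n\nQ: {question}")
    return ask_llm(prompt) + "\n\nSources: " + ", ".join(urls)

print(answer_with_sources("example question"))
```

Note the failure mode baked into the pattern: if `web_search` returns slop, the model dutifully "cites" slop.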

[–] merc@sh.itjust.works 1 point 1 month ago

Sounds infallible; you should use it to submit cases to courts. I hear judges love it when people cite cases that an AI assured them were real.