[–] quick_snail@feddit.nl 2 points 7 hours ago (3 children)
[–] selfAwareCoder@programming.dev 2 points 6 hours ago* (last edited 6 hours ago)

I don't think it will make enough of a difference, but RAG stands for Retrieval-Augmented Generation.

There are a few ways to do it, but basically it's a way to add extra information to the conversation. By default the model only knows what it learned during training, plus what is in the conversation. RAG adds extra, retrieved information to the mix.

The simplest approach is to scan the conversation for keywords and add information based on them.

So you ask "what is the capital of France?" and instead of the model answering (or hallucinating) on its own, your app could send the full Wikipedia page for France along with your question, and the model will almost always return the correct answer from the Wikipedia page and hallucinate much less. In practice it gets a lot more complicated, and I'm not up to date on recent RAG work, but the idea is the same.
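Here's a minimal Python sketch of that keyword approach. The tiny knowledge base, the `send`-style prompt format, and the function names are just illustrative placeholders, not any real RAG library's API:

```python
# Minimal keyword-based RAG sketch: scan the user's question for known
# keywords, prepend any matching reference text to the prompt, and send
# the combined text to the model. The knowledge base below is an
# illustrative stand-in for "the full Wikipedia page" per topic.

KNOWLEDGE_BASE = {
    "france": "France is a country in Western Europe. Its capital is Paris.",
    "japan": "Japan is an island country in East Asia. Its capital is Tokyo.",
}

def retrieve(question: str) -> list[str]:
    """Return every reference text whose keyword appears in the question."""
    q = question.lower()
    return [text for keyword, text in KNOWLEDGE_BASE.items() if keyword in q]

def build_prompt(question: str) -> str:
    """Prepend retrieved context so the model can answer from it."""
    context = "\n".join(retrieve(question))
    if context:
        return (
            "Use the following context to answer.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
    return question  # nothing matched; the model is on its own

if __name__ == "__main__":
    print(build_prompt("What is the capital of France?"))
```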

[–] Crozekiel@lemmy.zip 2 points 6 hours ago (1 children)

It's a chatbot that googles your question before answering, in the hope of cutting down on hallucinations. It doesn't solve this problem at all.

[–] Bazell@lemmy.zip 1 points 5 hours ago* (last edited 5 hours ago)

Your explanation is not completely correct. A more accurate description would be: an AI chatbot that can gather information related to the user's input from internal or external sources, allowing the model to answer questions more precisely even if it wasn't trained on that data at all. This lowers the frequency and severity of hallucinations to some degree, but doesn't eliminate them.

[–] Bazell@lemmy.zip 1 points 5 hours ago* (last edited 5 hours ago)

A separate subsystem for an AI chatbot that lets it retrieve information related to the user's input from text files (a database) without scanning all of them each time or stuffing everything into the prompt. This reduces hallucinations, because instead of telling you something "off the top of its head", the model receives an input of roughly this form: user_input + info_content + memory.

Despite RAG being really helpful in many ways, it doesn't eliminate hallucinations completely; it only reduces them to some degree.
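A rough Python sketch of that idea, with a simple inverted word index standing in for a real retrieval index; the document store, index scheme, and memory string are illustrative assumptions, only the user_input + info_content + memory layout comes from the description above:

```python
# Indexed-retrieval sketch: the document store is indexed once up front,
# so each query only touches the handful of documents the index points
# to instead of re-scanning everything.
from collections import defaultdict

DOCUMENTS = [
    "France is a country in Western Europe. Its capital is Paris.",
    "Japan is an island country in East Asia. Its capital is Tokyo.",
    "The Rhine is a major European river flowing to the North Sea.",
]

# Built once: word -> ids of the documents containing it.
INDEX: dict[str, set[int]] = defaultdict(set)
for doc_id, doc in enumerate(DOCUMENTS):
    for word in doc.lower().split():
        INDEX[word.strip(".,?")].add(doc_id)

def retrieve(user_input: str, top_k: int = 1) -> list[str]:
    """Look up candidate docs via the index and rank them by word overlap."""
    words = {w.strip(".,?") for w in user_input.lower().split()}
    hits: dict[int, int] = defaultdict(int)
    for word in words:
        for doc_id in INDEX.get(word, ()):  # only indexed docs are touched
            hits[doc_id] += 1
    ranked = sorted(hits, key=hits.get, reverse=True)
    return [DOCUMENTS[i] for i in ranked[:top_k]]

def build_model_input(user_input: str, memory: str) -> str:
    """Assemble the user_input + info_content + memory input described above."""
    info_content = "\n".join(retrieve(user_input))
    return f"{user_input}\n\n{info_content}\n\n{memory}"

if __name__ == "__main__":
    print(build_model_input("What is the capital of France?",
                            memory="Earlier the user asked about Europe."))
```

Real systems typically replace the word index with vector embeddings and nearest-neighbour search, but the shape is the same: index once, look up a few relevant chunks per query, and splice them into the model's input.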