this post was submitted on 08 Jan 2026
449 points (99.6% liked)

Microblog Memes

[–] HappyFrog@lemmy.blahaj.zone 16 points 5 days ago (5 children)

Why do AI peeps have to make these strange names for what is essentially just giving more text to an LLM? It's not MCP, it's just searching an online database for more text. RAG is just searching a local database for more text, but fancier. There's functionally no difference between an "AI agent" and the AI you talk to.

[–] Venator@lemmy.nz 14 points 5 days ago* (last edited 5 days ago) (1 children)

Almost everyone comes up with shorthand names or acronyms for things they type or say frequently.

But yeah, MCP is just an API where the API docs are tailored to help an LLM provide useful inputs, and it seems like they're making up new terms for existing things to try to obfuscate that it already exists under another name 😅

Not sure about RAG, but it sounds like it's just an API that accesses help docs or similar... 😅

[–] 8uurg@lemmy.world 8 points 5 days ago

RAG is Retrieval Augmented Generation. It is a fancy way of saying "we've tacked a search engine onto the LLM so that it can query for and use the text of actual documents when generating text, so that the output is more likely to be correct and grounded in reality."
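The "search engine tacked onto the LLM" idea can be sketched in a few lines. Everything below is a toy illustration: `retrieve` and `build_prompt` are made-up helper names, the "retriever" is just word overlap, and no real LLM or vector database is involved.

```python
# Toy RAG sketch: retrieve relevant documents, then stuff them into the
# prompt so the model's answer is grounded in actual text.

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend the retrieved text so the LLM can cite real sources."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only these sources:\n{ctx}\n\nQuestion: {query}"

docs = [
    "RAG stands for Retrieval Augmented Generation.",
    "MCP stands for Model Context Protocol.",
    "LLMs are bad at arithmetic.",
]
query = "What does RAG stand for?"
prompt = build_prompt(query, retrieve(query, docs))
```

A real system would use an embedding model and a vector index instead of word overlap, but the flow (search first, then generate from the results) is the same.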

And yeah, MCP stands for Model Context Protocol, and is essentially an API format optimized for LLMs, as you've said, to defer to something else to do the work. This can be a (RAG like) search engine lookup, using a calculator, or something else entirely.
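Roughly, the shape looks something like this. This is an assumed, simplified illustration, not the actual MCP wire format: the point is just that the "API docs" are machine-readable, so the model knows which tools exist and what arguments they take, and the host runs the real code.

```python
# Simplified sketch of MCP-style tool descriptions (not the real protocol).
# The model emits a tool name plus arguments; deterministic code does the work.

TOOLS = {
    "calculator": {
        "description": "Evaluate basic arithmetic, e.g. '2 + 2'.",
        "params": {"expression": "string"},
    },
    "search_docs": {
        "description": "Full-text search over local documents (RAG-like).",
        "params": {"query": "string"},
    },
}

def dispatch(tool: str, **kwargs):
    """Host-side dispatcher: run the tool the model asked for."""
    if tool == "calculator":
        # eval() with empty builtins stands in for a proper expression parser.
        return eval(kwargs["expression"], {"__builtins__": {}})
    raise ValueError(f"unknown tool: {tool}")

result = dispatch("calculator", expression="6 * 7")
```

The real protocol layers JSON-RPC messages and capability negotiation on top, but conceptually it is this: a directory of tools plus a dispatcher.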

LLMs suck at doing a lot of stuff reliably (like calculations, making statements relating to recent events, ...), but they turn out to be quite a useful tool for translating between human and machine, and reasonably capable of stringing things together to get an answer.
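That "translator between human and machine" role can be sketched as a toy loop, where `fake_model` is a canned stand-in for the LLM (it always returns the same structured call) and reliable code handles the part LLMs are bad at:

```python
# Toy agent loop: the model only translates a question into a structured
# tool call; the arithmetic itself is done by ordinary deterministic code.

def fake_model(question: str) -> dict:
    """Canned stand-in for an LLM emitting a structured tool call."""
    if any(ch.isdigit() for ch in question):
        return {"tool": "calculator", "args": {"expression": "12 * 12"}}
    return {"tool": "none", "args": {}}

def run(question: str):
    call = fake_model(question)
    if call["tool"] == "calculator":
        # Offload the unreliable part (math) to code that can't hallucinate.
        return eval(call["args"]["expression"], {"__builtins__": {}})
    return None

answer = run("What is 12 * 12?")
```

The model never does the calculation itself; it only decides which tool to hand the problem to.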

[–] Wirlocke@lemmy.blahaj.zone 5 points 5 days ago

I don't know if VS Code Copilot defines MCP differently, but it's more about giving the LLM API access to do things, like letting the LLM make git commits to GitHub, for example.

[–] lIlIlIlIlIlIl@lemmy.world 4 points 5 days ago

MCP servers add tooling and abilities that wouldn’t otherwise exist. Not the same as just a larger context window

[–] Fiery@lemmy.dbzer0.com 4 points 5 days ago* (last edited 5 days ago)

Why does Nvidia even boast about their new GPUs? They're doing the same calculations as the old generation, I fail to see the difference between Blackwell and the new gen they just announced. /s

There very much is a difference between a generic chatbot and one that can use the tools you listed. And with how LLMs work, it's not just 'faster' like my answer above implies, but actually qualitatively better results.

[–] scytale@piefed.zip 0 points 5 days ago

> no difference between an "ai agent" and the ai you talk to

Isn't an agent locally installed on your system, though? There's some functional difference there, I think. But on the other hand, I guess you could also say browsing Lemmy is the same thing whether you're on a mobile browser or a native mobile app.