[–] brucethemoose@lemmy.world 22 points 1 year ago* (last edited 1 year ago) (4 children)

I know it's a meme, but the idea that transformer models 'remember' anything is a common misconception.

They have zero memory. When you submit a prompt, your entire chat history gets fed in as one big prompt, and the model forgets it immediately, with no impact on the model itself. It's like it's frozen in time, then copied, unfrozen, and thrown away every time it answers.
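
To make that concrete, here's a rough sketch of how a chat client works, assuming an OpenAI-style API (the client library and model name are just illustrative placeholders). The "memory" lives entirely in the client's history list, which gets resent in full on every call:

```python
# Minimal sketch: the client, not the model, keeps the history
# and resends all of it on every turn. OpenAI-style API assumed;
# model name is only an example.
from openai import OpenAI

client = OpenAI()
history = []  # lives entirely on *our* side

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # Every call ships the full transcript; the model's weights are
    # frozen, and nothing about this exchange persists on its side.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("My name is Sam."))
print(chat("What's my name?"))  # works only because *we* resent turn one
```

The second question only "works" because the client resent the first exchange. Delete `history` and the model has no trace of the conversation.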

[–] Zorque@lemmy.world 4 points 1 year ago

This has been a joke since before anything resembling the modern "AI" boom. Basically since murderous future AI was a thing in popular media, at least since Terminator if not earlier. People would joke about treating their appliances kindly so that "Skynet" won't kill them in the future.

[–] kn33@lemmy.world 2 points 1 year ago (1 children)

Am I misunderstanding your comment or does it completely ignore context windows? Not that context windows are long-term, but it's not zero.

[–] brucethemoose@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

The context window is indeed the LLM's memory.

...But it's also muddy.

Many LLMs get 'dumber' and less attentive as their context windows fill up, and OpenAI's models happen to be among them: performance gets awful close to the full 128K, even with the full GPT-4. Mistral models are also really bad at long-context understanding, while I find that Google Gemini and Qwen 2.5 stay really good close to their limits.

There are attempts to measure this performance objectively, like RULER: https://github.com/NVIDIA/RULER
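
For a flavor of how these benchmarks probe long context, here's a toy needle-in-a-haystack sketch in the same spirit. This is a simplified illustration, not RULER's actual methodology; `ask_model` is a placeholder for whatever LLM call you have:

```python
# Toy long-context probe: bury one fact ("needle") at varying depths
# inside filler text and check whether the model can still retrieve it
# as the context grows. Illustrative only, not RULER's methodology.
FILLER = "The sky was grey and nothing of note happened. "
NEEDLE = "The secret passcode is 7341. "
QUESTION = "What is the secret passcode?"

def build_prompt(total_sentences: int, needle_depth: float) -> str:
    sentences = [FILLER] * total_sentences
    sentences.insert(int(needle_depth * total_sentences), NEEDLE)
    return "".join(sentences) + "\n\n" + QUESTION

def score(ask_model, sizes=(100, 1000, 10000)) -> dict:
    results = {}
    for n in sizes:
        # Probe near the start, middle, and end of the context.
        hits = sum(
            "7341" in ask_model(build_prompt(n, depth))
            for depth in (0.1, 0.5, 0.9)
        )
        results[n] = hits / 3  # retrieval rate at this context size
    return results
```

A model that's "good close to its limits" keeps a high retrieval rate as the sizes grow; a muddy one starts dropping the needle.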

[–] samunder@lemmynsfw.com -3 points 1 year ago (1 children)

Yeah, yeah, let's see how Google achieves more memory with their new Titans architecture

[–] brucethemoose@lemmy.world 5 points 1 year ago

It's still ephemeral, since chats don't change the underlying language model, but yes, it's interesting.

[–] Lemminary@lemmy.world 4 points 1 year ago

My reasons are two-fold: Some research indicates that you get better performance out of it if you're nice because it imitates people, and I also like being nice. wearenotthesame.jpg
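
If you want to check the first claim yourself, a crude A/B sketch could look like this, again assuming an OpenAI-style API with placeholder names; a real test would need many samples and proper grading:

```python
# Toy A/B of the "politeness" effect: same task, two framings.
# Model name and framings are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
TASK = "Explain what a context window is in two sentences."

def ask(framing: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{framing} {TASK}"}],
    )
    return resp.choices[0].message.content

curt = ask("Do this now:")
polite = ask("Hi! Could you please help me with something?")
# Compare the two by hand (or with a judge model) over many runs.
```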

[–] usrtrv@sh.itjust.works 4 points 1 year ago (1 children)

You might need to go one step further and help it become sentient if Roko's basilisk is to be believed. https://en.m.wikipedia.org/wiki/Roko%27s_basilisk

[–] Coldgoron@lemm.ee 2 points 1 year ago* (last edited 1 year ago)

I’ve been getting into "fuck you" loops with Siri and ChatGPT, for the opposite reasons.

[–] stinerman@midwest.social 1 points 1 year ago

We've been explicitly told at work to be courteous when asking Copilot for help because it gives better answers that way.

[–] Beetschnapps@lemmy.world 0 points 1 year ago* (last edited 1 year ago)

Cute. Sucking the model’s dick doesn’t mean you got this… they won’t kill you. You’re doing that yourself…

But smile away…