this post was submitted on 23 Apr 2026
252 points (99.2% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

[–] UnspecificGravity@piefed.social 1 points 23 hours ago (1 children)

The second someone suggests that the agent learned from an interaction, you know they're full of shit, because that isn't even how LLMs work.

[–] jj4211@lemmy.world 1 points 3 hours ago

Yeah, someone at my work said that. He gave it a 'college homework assignment' type problem to see if it worked; it mostly did, but made a mistake. The next day, in an entirely separate chat session, he repeated the experiment, and this time it happened not to make the mistake, so he assumed it had learned from the previous day's conversation. As if he were the first person ever to post a very obvious intro-to-programming problem to the engine, and had personally taught it the answer.

But in this particular scenario there was no learning to attribute anyway. The original human-written code looked close to, but deliberately different from, a more usual pattern, so the model wanted to 'correct' it into the usual pattern. Once it had the usual pattern, the code then resembled a common mistake in that context, so it wanted to change it back. It just oscillated between 'they probably meant the usual pattern' and 'they applied the usual pattern incorrectly'.
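The thread doesn't say what the actual code was, so here's a hypothetical Python sketch of the kind of code that triggers this oscillation: a loop that deliberately skips the first element, which to a pattern-matcher looks like an off-by-one bug in the usual iterate-over-everything idiom.

```python
def average_of_rest(values):
    """Average of every element except the first.

    The first element is intentionally skipped (say it's a header or
    sentinel value). An assistant pattern-matching on common idioms may
    suggest range(len(values)) -- "fixing" the apparent off-by-one --
    after which dividing by len(values) - 1 looks like the mistake,
    prompting a suggestion to change it back.
    """
    total = 0
    for i in range(1, len(values)):  # deliberate: start at 1, not 0
        total += values[i]
    return total / (len(values) - 1)
```

Neither suggestion reflects anything learned from a previous session; both are just the model pulling the code toward whichever common pattern it resembles most at that moment.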

Of course, the telling part is that the human-written initial code explicitly avoided the pitfall, and the human still shrugged and hit 'accept' when the GenAI said to modify it.