this post was submitted on 27 Apr 2026
434 points (98.2% liked)

Programmer Humor

[–] FellowEnt@sh.itjust.works 14 points 1 day ago (1 child)

Seems like user error. I'm no programmer, but even I know you don't give an agent access to critical things, and Claude is very insistent about asking for permission at every step.

[–] pinball_wizard@lemmy.zip 8 points 1 day ago* (last edited 1 day ago)

Seems like user error. I'm no programmer, but even I know you don't give an agent access to critical things

Yes.

But these models have (largely correctly) learned from Stack Overflow that nearly every problem comes down to not having enough permissions.

Someone fully relying on an agentic AI model is essentially destined to give it full control (or close enough), eventually.
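The safer pattern the commenters are gesturing at is deny-by-default: the agent only ever gets an explicit allowlist of low-risk tools, and everything else is refused regardless of how insistently it asks. A minimal sketch, assuming a hypothetical wrapper around an agent's shell-tool calls (`SAFE_COMMANDS` and `run_agent_command` are illustrative names, not any real agent framework's API):

```python
# Hypothetical deny-by-default gate for an agent's shell-tool calls.
# Anything not on the allowlist is refused instead of granted more permissions.
import shlex

SAFE_COMMANDS = {"ls", "cat", "grep", "git"}  # read-mostly tools only

def run_agent_command(command_line: str) -> str:
    """Refuse any command whose executable is not on the allowlist."""
    argv = shlex.split(command_line)
    if not argv or argv[0] not in SAFE_COMMANDS:
        name = argv[0] if argv else ""
        return f"denied: '{name}' is not on the allowlist"
    # In a real wrapper you would execute argv here (e.g. via subprocess).
    return f"allowed: {command_line}"
```

The point of the design is that "the model asked nicely" is never a reason to widen the set; a human edits the allowlist, not the agent.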

At some point, a tool like these LLMs either needs to not be marketed to that user, or needs stupid levels of safety warnings.

My money is on neither solution happening, and this kind of result continuing for the foreseeable future - until the rest of us doing cleanup instigate Dune's Butlerian Jihad to stop the damage and save our own sanity.