this post was submitted on 27 Apr 2026
1165 points (98.9% liked)

Programmer Humor

31173 readers
2446 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.


founded 2 years ago
[–] mudkip@lemdro.id 1 points 1 minute ago

If you asked Claude to build you a house it would build you the most beautiful house, and then you’d go inside and you’d be like, “Claude there’s no bathrooms.” And Claude would say, “There were no bathrooms before either, so it’s actually a pre-existing issue”

[–] Agent641@lemmy.world 14 points 4 hours ago

"Babe did you fix that hole in the drywall yet?"

"I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience."

[–] TheEighthDoctor@lemmy.zip 21 points 4 hours ago (1 children)

Gemini: Ah you ran into the classic partner issue...

[–] Academic_Bumblebee@ani.social 2 points 4 hours ago

Thinking out loud for a second...

[–] TankovayaDiviziya@lemmy.world 2 points 4 hours ago* (last edited 3 hours ago) (1 children)

I find Claude actually pushes back more than ChatGPT does. That's why I prefer to use Claude. But of course, I still do my due diligence.

[–] MBech@feddit.dk 2 points 1 hour ago

Yea, I got a snarky reply earlier about actually reading the directions it gave me. I can respect some pushback.

[–] UnspecificGravity@piefed.social 242 points 15 hours ago (2 children)

"why didn't you do them?"

"That is a good question and it merits further explanation. When you made your original inquiry, I determined that the answer you wanted to hear was "yes," so that is the answer that I provided. Upon further reflection, it is clear that your question required a more thoughtful answer. If you would like me to provide more truthful answers in the future, please append "no cap" to your queries and I will do my best to remember that preference."

[–] WhiskyTangoFoxtrot@lemmy.world 47 points 12 hours ago (1 children)

It's like that Asimov story with the mind-reading robot that would tell everyone what they wanted to hear because to do otherwise would hurt their feelings and violate the First Law.

[–] lauha@lemmy.world 20 points 8 hours ago

Liar! by Isaac Asimov

It's a short story, so here's the full story :)

[–] BeMoreCareful@lemmy.world 27 points 14 hours ago (1 children)

These are a lot more intelligent than I thought.

[–] WhyIHateTheInternet@lemmy.world 19 points 14 hours ago

You're not usernaming hard enough I fear.

[–] rozodru@piefed.world 71 points 13 hours ago

"Why didn't you do them?"

"This is a known issue..."

[–] irelephant@lemmy.dbzer0.com 47 points 13 hours ago (2 children)

You're absolutely right! I shouldn't have put the dirty dishes into the bin instead of cleaning them. This was a clear violation of my instructions, and I'll be sure not to make this mistake again!

[–] irelephant@lemmy.dbzer0.com 28 points 13 hours ago* (last edited 13 hours ago) (1 children)

adds TODO: clean sticker to pile of dishes

[–] testaccount789@sh.itjust.works 20 points 12 hours ago

closed, wontfix

[–] InvalidName2@lemmy.zip 30 points 13 hours ago (1 children)

Oh, so it's like having a teenager in the house.

[–] ICastFist@programming.dev 2 points 8 minutes ago

One without attitude but lots of dumb excuses

[–] rapide@piefed.zip 59 points 15 hours ago (1 children)
[–] Vegan_Joe@piefed.world 14 points 15 hours ago (20 children)

Dumb question, but...is Claude worse than GPT or Gemini?

I was under the impression that it was the lesser of evils

[–] ptu@sopuli.xyz 4 points 6 hours ago

I just started with Claude and I can’t yet distinguish when it has actually done something it says it has done. With ChatGPT I can see through the bullshit quite well by now. At first I was happy when I thought Claude was rid of that bullshit, but turns out it’s just a different type of bullshit.

The UI and file handling are better in Claude, though, and supposedly you can make it create skills, which are like instruction booklets on how to do certain tasks, and then export and share them. But the ones I created were lost over the weekend, so I'm not sure how robust they actually are.

[–] ZoteTheMighty@lemmy.zip 5 points 9 hours ago

Claude is almost always the better model compared to GPT. I find that this is a good leaderboard. However, both Claude and GPT have similar business models: make sure everything they do is completely proprietary, and keep everything behind a monthly paywall. They both run massive data centers to train their models, and neither really deserves the term "Artificial Intelligence".

[–] Epp@lemmus.org 45 points 15 hours ago (1 children)

They are the lesser of the available evils. Anthropic, the proprietors of Claude, were blacklisted by the US administration for refusing to greenlight their technology being used for fascism.

[–] subnormal@lemmy.dbzer0.com 31 points 14 hours ago* (last edited 14 hours ago) (2 children)

Anthropic's AI system was used to target the school in Minab, killing 120 students. https://www.washingtonpost.com/national-security/2026/03/11/us-strike-iran-elementary-school-ai-target-list/

The company is suing to be able to supply the US military again.

[–] ivn@tarte.nuage-libre.fr 4 points 5 hours ago (1 children)
[–] subnormal@lemmy.dbzer0.com 2 points 1 hour ago (1 children)
[–] ivn@tarte.nuage-libre.fr 1 points 1 hour ago (1 children)

Yes, but not for targeting, as explained in the article I linked.

The Maven Smart System is the platform that came out of those exercises, and it, not Claude, is what is being used to produce “target packages” in Iran.

[–] subnormal@lemmy.dbzer0.com 1 points 1 hour ago (1 children)

Anthropic's AI did data analysis for Project Maven, which is the system that used data analyzed from various sources to target a school. So the AI is part of the "kill chain", no?

[–] ivn@tarte.nuage-libre.fr 1 points 1 hour ago (1 children)

I suggest you read the article.

The AI underneath the interface is not a language model, or at least the AI that counts is not. The core technologies are the same basic systems that recognise your cat in a photo library or let a self-driving car combine its camera, radar and lidar into a single picture of the road, applied here to drone footage, radar and satellite imagery of military targets. They predate large language models by years. Neither Claude nor any other LLM detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system.

[–] subnormal@lemmy.dbzer0.com 1 points 1 hour ago (1 children)

Yes. I never said it was an LLM. It was probably some custom AI system made by Anthropic.

Are we agreed that some Anthropic AI system (not necessarily the Claude LLM) was in the kill chain? That was what I was trying to say from the beginning.

[–] ivn@tarte.nuage-libre.fr 1 points 51 minutes ago (1 children)

Well, you'll need to source your claim. The wiki article you linked only mentions Claude.

The Anthropic contract is also quite recent compared to Maven's creation.

[–] subnormal@lemmy.dbzer0.com 1 points 32 minutes ago* (last edited 29 minutes ago) (1 children)

My sources are already linked in my two earlier comments. What about them are you disputing?

I don't see how the recency matters. That Anthropic was not involved in bombings conducted by the US military in previous years does not absolve them of their involvement in the bombing of the school in Minab.

[–] ivn@tarte.nuage-libre.fr 1 points 24 minutes ago (1 children)

They only mention Claude, so where is the source that "some custom AI system made by Anthropic", not an LLM, "was in the kill chain"?

I mean, I get that you want to tie Anthropic to this, and I don't like them either, but we should stay factual and avoid filling the gaps with some "probably". It's also counterproductive, as Maven and Palantir are huge menaces and this shifts the blame away from them.

[–] subnormal@lemmy.dbzer0.com 1 points 11 minutes ago* (last edited 7 minutes ago) (1 children)

You're the one saying it's not the Claude LLM doing the targeting. Your source is that Guardian article you linked.

I don't care if it's an LLM or some other thing made by Anthropic. Anthropic is involved in this. All the sources in this conversation so far indicate so. Or are you trying to argue that they are just supplying Palantir and Project Maven for wholly innocent purposes?

Pointing out Anthropic's involvement in the killing of 120 students does not in any way shift blame away from Palantir and Maven. Of course there are information gaps regarding how exactly the AI was involved; no remotely competent military would make all this information public.

[–] ivn@tarte.nuage-libre.fr 1 points 1 minute ago

I'm just saying that, as far as we know, the Anthropic contract is about Claude, and the targeting is not done by an LLM.

[–] Epp@lemmus.org 7 points 10 hours ago (1 children)

That's one way to spin it.

My take on it is that it was used inappropriately, and when the fascists wanted it tailored for that abhorrent use, Anthropic refused; in retaliation the fascists banned it for ANY use, so now Anthropic is suing to allow the sane to continue using it for its appropriate uses.

[–] subnormal@lemmy.dbzer0.com 2 points 8 hours ago (1 children)

What sane use? And how does this company plan to prevent the fascists from using it to kill another 120 children?

The only not-evil move is to not sell dual-use goods to fascists in the first place.

[–] Epp@lemmus.org 6 points 8 hours ago (1 children)

You seriously can't think of any sane use? How about categorizing large amounts of data. Brainstorming strategies for problem solving. Converting pseudo code to actual code. Troubleshooting error messages. I mean, there are dozens upon dozens of valid uses that harm no one.

How does Bic plan to prevent murderers from stabbing people with their pens? How does Toyota plan to stop drivers from committing vehicular manslaughter? How does Hewlett-Packard plan on preventing fascists from saving manifestos? How does Apple plan on preventing sexual criminals from taking pictures of their victims?

What's that? Companies don't need to accomplish impossible tasks to have a viable product? I guess it's only AI that has insurmountable demands placed on it by reactionaries.

The only not-evil move is to sit in a cave using sticks, once the trees figure out how to keep cavemen from beating their children with them.

[–] subnormal@lemmy.dbzer0.com 1 points 1 hour ago

I wasn't clear. What I meant was: what sane things could a fascist military use AI for?

"Reactionary" lmao. My friend, I use LLMs all the time. Just not the proprietary ones from companies that are in bed with fascists.

[–] subnormal@lemmy.dbzer0.com 9 points 14 hours ago (3 children)

There are many lesser evils. Use open-source/open-weight AI like Kimi, GLM, DeepSeek, Mistral, Olmo, Arcee, MiniMax, Qwen, EXAONE, NVIDIA, Sarvam...

If you don't have the hardware to run them locally, you can pay for API access. If you find the company problematic for whatever reason, you can switch to the same model served by a third party (possible because the model weights are publicly released).

[–] kibiz0r@midwest.social 19 points 15 hours ago (1 children)

The stuck-on residue is real.

But here’s the brutal reality: it’s not just residue; it’s residon’t.

Options:

  • A: (recommended) do the dishes
  • B: don’t do the dishes
  • C: mix of both
[–] Opisek@piefed.blahaj.zone 3 points 5 hours ago

C, ah, the ADHD special

[–] aeronmelon@lemmy.world 4 points 12 hours ago

This used to be called being a suck-up.
