[–] Vegan_Joe@piefed.world 16 points 20 hours ago (9 children)

Dumb question, but...is Claude worse than GPT or Gemini?

I was under the impression that it was the lesser of evils

[–] ptu@sopuli.xyz 5 points 11 hours ago

I just started with Claude and I can't yet distinguish when it has actually done something it says it has done. With ChatGPT I can see through the bullshit quite well by now. At first I was happy because I thought Claude was rid of that bullshit, but it turns out it's just a different type of bullshit.

The UI and file handling are better in Claude though, and supposedly you can make it create skills, which are like instruction booklets on how to do some tasks, and then export and share them. But the ones I created were lost over the weekend, so I'm not sure how robust they actually are.
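For reference, the skills I made were basically just a folder with a SKILL.md file in it, roughly like this (the name and steps below are a made-up example, not one of mine):

```markdown
---
name: release-notes
description: Drafts release notes from a list of merged changes
---

# Release notes

1. Ask for the list of merged changes if none was provided.
2. Group the changes into Features, Fixes, and Breaking changes.
3. Write one plain-English line per change, linking the PR where given.
```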

[–] Epp@lemmus.org 50 points 19 hours ago (1 children)

They are the lesser of the available evils. Anthropic, the proprietor of Claude, was blacklisted by the US administration for refusing to greenlight its technology being used for fascism.

[–] subnormal@lemmy.dbzer0.com 34 points 18 hours ago* (last edited 18 hours ago) (2 children)

Anthropic's AI system was used to target the school in Minab, killing 120 students. https://www.washingtonpost.com/national-security/2026/03/11/us-strike-iran-elementary-school-ai-target-list/

The company is suing to be able to supply the US military again.

[–] ivn@tarte.nuage-libre.fr 6 points 9 hours ago (1 children)
[–] subnormal@lemmy.dbzer0.com 2 points 6 hours ago (1 children)
[–] ivn@tarte.nuage-libre.fr 2 points 6 hours ago (1 children)

Yes, but not for targeting, as explained in the article I linked.

The Maven Smart System is the platform that came out of those exercises, and it, not Claude, is what is being used to produce “target packages” in Iran.

[–] subnormal@lemmy.dbzer0.com 0 points 6 hours ago (1 children)

Anthropic's AI did data analysis for Project Maven, which was the system that used data analyzed from various sources to target the school. So the AI is part of the "kill chain", no?

[–] ivn@tarte.nuage-libre.fr 2 points 5 hours ago (1 children)

I suggest you read the article.

The AI underneath the interface is not a language model, or at least the AI that counts is not. The core technologies are the same basic systems that recognise your cat in a photo library or let a self-driving car combine its camera, radar and lidar into a single picture of the road, applied here to drone footage, radar and satellite imagery of military targets. They predate large language models by years. Neither Claude nor any other LLM detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir's ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system.

[–] subnormal@lemmy.dbzer0.com 0 points 5 hours ago (1 children)

Yes. I never said it was an LLM. It was probably some custom AI system made by Anthropic.

Are we agreed that some Anthropic AI system (not necessarily the Claude LLM) was in the kill chain? That was what I was trying to say from the beginning.

[–] ivn@tarte.nuage-libre.fr 2 points 5 hours ago (1 children)

Well, you'll need to source your claim. The wiki article you linked only mentions Claude.

The Anthropic contract is also quite recent compared to Maven's creation.

[–] subnormal@lemmy.dbzer0.com 0 points 5 hours ago* (last edited 5 hours ago) (1 children)

My sources are already linked in my two earlier comments. What about them are you disputing?

I don't see how the recency matters. That Anthropic was not involved in bombings conducted by the US military in previous years does not absolve them of their involvement in the bombing of the school in Minab.

[–] ivn@tarte.nuage-libre.fr 2 points 4 hours ago (1 children)

They only mention Claude. Where is the source that "some custom AI system made by Anthropic", not an LLM, "was in the kill chain"?

I mean, I get that you want to tie Anthropic to this, I don't like them either, but we should stay factual and avoid filling the gaps with some "probably". It's also counterproductive, as Maven and Palantir are huge menaces and this shifts the blame away from them.

[–] subnormal@lemmy.dbzer0.com 0 points 4 hours ago* (last edited 4 hours ago) (1 children)

You're the one saying it's not the Claude LLM doing the targeting. Your source is that Guardian article you linked.

I don't care if it's an LLM or some other thing made by Anthropic. Anthropic is involved in this. All the sources in this conversation so far indicate so. Or are you trying to argue that they are just supplying Palantir and Project Maven for wholly innocent purposes?

Pointing out Anthropic's involvement in the killing of 120 students does not in any way shift blame away from Palantir and Maven. Of course there are information gaps regarding how exactly the AI was involved. No remotely competent military would make all this information public.

[–] ivn@tarte.nuage-libre.fr 2 points 4 hours ago (1 children)

I'm just saying that, as far as we know, the Anthropic contract is about Claude, and the targeting is not done by an LLM.

[–] subnormal@lemmy.dbzer0.com 1 points 2 hours ago (1 children)

Okay fair enough.

Since Maven's entire business is data analysis and targeting, can we agree that if the AI is not being used for targeting, it is being used to analyze data? And that analyzed data gets fed into the targeting system, so the AI is part of the kill chain?

What kind of data is being analyzed by the AI? How much of it feeds into the targeting system? I concede that I don't know and have no source. The US military would have to be really stupid to make that info public.

[–] ivn@tarte.nuage-libre.fr 1 points 2 hours ago (1 children)

There is nothing that indicates that Anthropic's AI is used to analyze data. I'm not saying it's not, just that we don't know. I'm going to quote a smaller section of the quote I made earlier from the same Guardian article:

In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English.

But the term "AI" is an issue here: there are multiple AIs, of different kinds, made by different companies. There is AI used for targeting, no doubt, but it's not Claude; it's Maven and some other subcomponents. The fact that Anthropic joined the project late, after it was already operational, is a good hint that they do not bring a core feature, but that's only speculation.

[–] subnormal@lemmy.dbzer0.com 1 points 17 minutes ago

Okay. I guess we at least agree on the facts.

You are giving the company a huge benefit of the doubt and I don't understand why. May I ask: if it was Elon Musk's xAI/Grok rather than Anthropic, would your thoughts on this change? How about if it was Yandex making the AI and the school was in Ukraine?

[–] Epp@lemmus.org 8 points 14 hours ago (1 children)

That's one way to spin it.

My take on it is that it was used inappropriately, and when the fascists wanted it tailored for that abhorrent use, Anthropic refused; in retaliation the fascists banned it for ANY use, so now Anthropic is suing to allow the sane to continue using it for its appropriate uses.

[–] subnormal@lemmy.dbzer0.com 2 points 13 hours ago (1 children)

What sane use? And how does this company plan to prevent the fascists from using it to kill another 120 children?

The only not-evil move is to not sell dual-use goods to fascists in the first place.

[–] Epp@lemmus.org 6 points 12 hours ago (1 children)

You seriously can't think of any sane use? How about categorizing large amounts of data? Brainstorming strategies for problem solving? Converting pseudocode to actual code? Troubleshooting error messages? I mean, there are dozens upon dozens of valid uses that harm no one.

How does Bic plan to prevent murderers from stabbing people with their pens? How does Toyota plan to stop drivers from committing vehicular manslaughter? How does Hewlett-Packard plan on preventing fascists from saving manifestos? How does Apple plan on preventing sexual criminals from taking pictures of their victims?

What's that? Companies don't need to accomplish impossible tasks to have a viable product? I guess it's only AI that has insurmountable demands placed on it by reactionaries.

The only not-evil move is to sit in a cave using sticks, once the trees figure out how to keep cavemen from beating their children with them.

[–] subnormal@lemmy.dbzer0.com 0 points 6 hours ago

I wasn't clear. What I meant was: what sane things could a fascist military use AI for?

"Reactionary" lmao. My friend, I use LLMs all the time. Just not the proprietary ones from companies that are in bed with fascists.

[–] ZoteTheMighty@lemmy.zip 5 points 13 hours ago

Claude is almost always the better model compared to GPT. I find that this is a good leaderboard. However, both Claude and GPT have similar business models: make sure everything they do is completely proprietary, and keep everything behind a monthly paywall. They both run massive data centers to train their models, and neither really deserves the term "Artificial Intelligence".

[–] subnormal@lemmy.dbzer0.com 11 points 18 hours ago (2 children)

There are many lesser evils. Use open-source/open-weight AI like Kimi, GLM, Deepseek, Mistral, Olmo, Arcee, Minimax, Qwen, Exaone, NVidia, Sarvam...

If you don't have the hardware to run them locally, you can pay for API access. If you find the company problematic for whatever reason, you can switch to the same model served by a third party (possible because the model weights are publicly released).
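As a sketch of what switching looks like: most hosts expose an OpenAI-compatible endpoint, so the client code stays the same and only the base URL, key, and model name change (the URL and model name below are placeholders, check your provider's docs):

```python
# The same client code works against any OpenAI-compatible host;
# switching providers means changing base_url, api_key, and model.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-host.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

reply = client.chat.completions.create(
    model="open-weight-model",  # placeholder: whatever open-weight model the host serves
    messages=[{"role": "user", "content": "Explain this error: segfault in strlen()"}],
)
print(reply.choices[0].message.content)
```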

[–] TachyonTele@piefed.social 5 points 18 hours ago (1 children)

Other than wanting a verbose answer to a question, what is it for?

[–] subnormal@lemmy.dbzer0.com 7 points 14 hours ago

For me, I just use it to get verbose answers to questions.

I use open-weight LLMs over search engines when I can, because Google/Bing/Yandex are complete proprietary black boxes run by corporations of questionable morality.

[–] Grail@multiverse.soulism.net 3 points 16 hours ago

Or you could just not use LLMs. Fuck AI.

[–] Dojan@pawb.social 10 points 19 hours ago

In what manner? Capabilities, or belonging to an evil corporation that happily steals data and works to undermine democracy?

[–] IndustryStandard@lemmy.world 1 points 17 hours ago* (last edited 17 hours ago)

It is better than GPT and Gemini, but not great. Claude still has some US military contracts, at least to public knowledge.

https://www.cnbc.com/2026/03/04/pentagon-blacklist-anthropic-defense-tech-claude.html

Defense Secretary Pete Hegseth declared on X that any contractor or supplier doing business with the U.S. military is barred from commercial activity with Anthropic.

The announcement came after Anthropic executives refused to comply with the government’s demands over its model use. They wanted assurances that their AI would not be tapped for fully autonomous weapons or mass domestic surveillance of America.

Anthropic’s models are still being used to support the U.S. military operations in Iran, even after the announcement from the Trump administration, as CNBC previously reported.

[–] rozodru@piefed.world -1 points 17 hours ago (1 children)

Less of the evils. That being said, as far as quality goes, Claude has taken a very noticeable decline within the past several months. It used to be half decent, but now 8 or 9 times out of 10 you're going to get a hallucination for a solution. Anthropic has REALLY dropped the ball with Claude and Claude Code. Absolute garbage LLM now.

[–] some_designer_dude@lemmy.world 5 points 14 hours ago

This could be user error, to some degree.

[–] hoch@lemmy.world -2 points 19 hours ago (2 children)

No. Many people here just hate LLMs in general and will use every opportunity to complain about them.

[–] Appoxo@lemmy.dbzer0.com 17 points 18 hours ago (1 children)

Personally, I dislike how helpless and useless it makes my colleagues in research. No thought given; they just use the first web result (and in most cases accept the AI output as search gospel).

In my case it's only used for very obscure issue descriptions my google-fu isn't sufficient for, or for correlating weird bugs with each other.

[–] Epp@lemmus.org -2 points 14 hours ago (2 children)

That's the same reason I hate bicycles! They make travel too easy for everyone. Need to go somewhere? All my associates immediately reach for their bikes, and think of it as the default mode of travel. Heaven forbid they put actual effort into traveling by walking the whole way, or better yet, crawling so that they can include their arms in the endeavor like nature intended.

I only use a bicycle when I'm going to a very obscure location and would have to do my crawling on dirt trails otherwise.

[–] dreamkeeper@literature.cafe 4 points 5 hours ago* (last edited 5 hours ago)

This is disingenuous. Does your bike regularly take you to the wrong destination?

[–] Opisek@piefed.blahaj.zone 2 points 10 hours ago* (last edited 10 hours ago) (1 children)

Except you should be comparing it to motorized wheelchairs. Suddenly all your associates forget how to walk, WALL-E style.

[–] Epp@lemmus.org 2 points 8 hours ago

The events of WALL-E happened over generations. If your associates have forgotten how to walk already, then they never knew how to begin with and were just faking it until something came along to save them. So at least now, with a wheelchair as a crutch, they can actually contribute rather than just pretending to be productive while getting nothing done in reality.

[–] Epp@lemmus.org 13 points 19 hours ago (1 children)

I'd say 99.9% of people. You're actually the first other person I've seen who doesn't!