this post was submitted on 27 Apr 2026
1260 points (99.0% liked)
Programmer Humor
31173 readers
2342 users here now
you are viewing a single comment's thread
view the rest of the comments
Anthropic's AI system was used to target the school in Minab, killing 120 students. https://www.washingtonpost.com/national-security/2026/03/11/us-strike-iran-elementary-school-ai-target-list/
The company is suing to be able to supply the US military again.
Maven is doing the targeting, not Claude.
https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying
Also the WaPo is now Bezos' news.
And Maven used Anthropic's AI https://en.wikipedia.org/wiki/Project_Maven
Yes, but not for targeting, as explained in the article I linked.
Anthropic's AI did data analysis for Project Maven, a system that combined analysis from various sources to target a school. So the AI is part of the "kill chain", no?
I suggest you read the article.
Yes. I never said it was an LLM. It was probably some custom AI system made by Anthropic.
Are we agreed that some Anthropic AI system (not necessarily the Claude LLM) was in the kill chain? That was what I was trying to say from the beginning.
Well, you'll need to source your claim. The wiki article you linked only mentions Claude.
The Anthropic contract is also quite recent compared to Maven's creation.
My sources are already linked in my two earlier comments. What about them are you disputing?
I don't see how the recency matters. That Anthropic was not involved in bombings conducted by the US military in previous years does not absolve them of their involvement in the bombing of the school in Minab.
They only mention Claude. Where is the source that "some custom AI system made by Anthropic", not an LLM, "was in the kill chain"?
I mean, I get that you want to tie Anthropic to this, and I don't like them either, but we should stay factual and avoid filling the gaps with a "probably". It's also counterproductive: Maven and Palantir are huge menaces, and this shifts the blame away from them.
You're the one saying it's not the Claude LLM doing the targeting. Your source is that Guardian article you linked.
I don't care if it's an LLM or some other thing made by Anthropic. Anthropic is involved in this. All the sources in this conversation so far indicate so. Or are you trying to argue that they are just supplying Palantir and Project Maven for wholly innocent purposes?
Pointing out Anthropic's involvement in the killing of 120 students does not in any way shift blame away from Palantir and Maven. Of course there are information gaps regarding how exactly the AI was involved; no remotely competent military would make all this information public.
I'm just saying that, as far as we know, the Anthropic contract is about Claude, and the targeting is not done by an LLM.
That's one way to spin it.
My take on it is that it was used inappropriately, and when the fascists wanted it tailored for that abhorrent use, Anthropic refused. In retaliation the fascists banned it for ANY use, so now Anthropic is suing to allow the sane to continue using it for its appropriate uses.
What sane use? And how does this company plan to prevent the fascists from using it to kill another 120 children?
The only not-evil move is to not sell dual-use goods to fascists in the first place.
You seriously can't think of any sane use? How about categorizing large amounts of data. Brainstorming strategies for problem solving. Converting pseudocode to actual code. Troubleshooting error messages. I mean, there are dozens upon dozens of valid uses that harm no one.
How does Bic plan to prevent murderers from stabbing people with their pens? How does Toyota plan to stop drivers from committing vehicular manslaughter? How does Hewlett-Packard plan on preventing fascists from saving manifestos? How does Apple plan on preventing sexual criminals from taking pictures of their victims?
What's that? Companies don't need to accomplish impossible tasks to have a viable product? I guess it's only AI that has insurmountable demands placed on them by reactionaries.
The only not-evil move is to sit in a cave using sticks, once the trees figure out how to keep cavemen from beating their children with them.
I wasn't clear. What I meant was: what sane things could a fascist military use AI for?
"Reactionary" lmao. My friend, I use LLMs all the time. Just not the proprietary ones from companies that are in bed with fascists.