this post was submitted on 01 Mar 2026
132 points (100.0% liked)

TechTakes

2467 readers
201 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

h/t to Ed Zitron: https://bsky.app/profile/edzitron.com/post/3mfxqjqoias2q

Alt text: WSJ screenshot, photo credit PATRICK SISON/ASSOCIATED PRESS. "Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools. Commands around the world, including U.S. Central Command in the Middle East, use Anthropic's Claude AI tool, people familiar with the matter confirmed. Centcom declined to comment about specific systems being used in its ongoing operation against Iran. The command uses the tool for intelligence assessments, target identification and simulating battle scenarios even as tension between the company and Pentagon ratcheted up, the people said, highlighting how embedded the AI tools are in military operations. The administration and Anthropic have been feuding for months over how its AI models can be used by the Pentagon. Trump on Friday ordered agencies to stop working with the company and the Defense Department designated it a security threat and risk to its supply chain."

top 16 comments
[–] etherphon@piefed.world 6 points 8 hours ago

I don't even trust these things with basic medical questions, and they're using them for military operations? People are just so god damn stupid. I suppose the upside (for them) is that if anything at all goes wrong, they can just blame the computers, and we all know how much the US military loves unaccountability.

[–] fodor@lemmy.zip 23 points 19 hours ago

So after bombing the school they'll blame it on AI instead of charging the generals with fucking war crimes. Yeah, I think we predicted this one.

[–] scruiser@awful.systems 65 points 23 hours ago* (last edited 23 hours ago) (3 children)

This really is the dumbest timeline.

simulating battle scenarios

Regurgitating reddit armchair generals from /r/noncredibledefense

[–] wizardbeard@lemmy.dbzer0.com 11 points 13 hours ago (1 children)

If they were talking about some complex simulation engine utilizing ML and research carefully collated and constructed, this would be at least interesting.

But they aren't. We shoved the entirety of text produced by the human race into a big pot, mixed it up, and now extrude it along the most likely connections. It is predictive of words and phrases, not of human behavior, the physics of munitions, or anything actually useful for modeling warfare.

They're fucking generating war fanfic and using it to make strategic decisions. Just hire Clancy, Crichton, Card, and/or whoever's ghostwriting for them now. It'd be cheaper.

[–] wonderingwanderer@sopuli.xyz 4 points 6 hours ago

Yeah, the fact that the nation's highest military command no longer understands the difference between machine learning and an LLM is gravely concerning...

They fired all the professionals in 2025. All that's left are the sycophants.

[–] Soyweiser@awful.systems 7 points 14 hours ago (1 children)

Wonder how hard it was to filter out the HOI4 results.

[–] scruiser@awful.systems 5 points 13 hours ago

Bold of you to assume they would bother filtering them out.

[–] frank@sopuli.xyz 35 points 23 hours ago (1 children)

Maybe it has a bunch of leaked files from the WarThunder forums as well!

Simulating battle scenarios is absolutely hilarious, adults standing around the magic 8 ball

[–] wonderingwanderer@sopuli.xyz 4 points 6 hours ago* (last edited 4 hours ago) (2 children)

Simulated battle scenarios are a common component of wargaming. That doesn't mean an LLM is the right tool for it, but it's been a thing for a long time.

The bigger concern here is using it for intelligence assessments and target acquisition, because LLMs hallucinate a lot.

[–] frank@sopuli.xyz 2 points 4 hours ago (1 children)

That's fair; I only meant to poke fun at an LLM simulating battle scenarios. I know simulation and wargaming are useful in general.

[–] wonderingwanderer@sopuli.xyz 3 points 3 hours ago

Yeah, an LLM is not designed for those kinds of simulations. It can write you a choose-your-own adventure story, but it can't realistically model dynamic kinetic operations with any degree of applicability.

[–] fullsquare@awful.systems 4 points 6 hours ago

as a side effect, it's a phenomenal accountability sink. people almost forget that usaf can make entirely human-made fuckups https://en.wikipedia.org/wiki/Amiriyah_shelter_bombing

[–] lurker@awful.systems 24 points 23 hours ago* (last edited 23 hours ago) (1 children)

Incredibly ballsy move to keep using their tech after you literally branded them a supply chain threat and implied you would take legal action against them, but that’s this administration for ya

(they did say there would be a six-month phase out period after which if Anthropic still didn’t comply, they’d force them to, but still)

[–] Tar_alcaran@sh.itjust.works 23 points 22 hours ago (2 children)

They want Anthropic to drop the guardrails because they're literally using Claude for their strategic planning...

The US went to war with Iran because Pete Kegsbreath asked an LLM. Jesus fucking Christ.

[–] Soyweiser@awful.systems 5 points 14 hours ago

Military planners in Taiwan, SK and Japan are prob shitting bricks right now.

[–] lurker@awful.systems 6 points 20 hours ago

this truly is the dumbest timeline huh