this post was submitted on 27 Apr 2026
1223 points (98.6% liked)

Technology

84256 readers
3245 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 50 comments
sorted by: hot top controversial new old
[–] SirEDCaLot@lemmy.today 11 points 2 days ago

There's stupid from top to bottom here.

The company is stupid for allowing an AI full root access to their entire setup.

The provider is stupid for only generating full-access API keys. They're even stupider for storing backups with a volume, so deleting the volume (zero confirmation via API key) also insta-deletes the backups. And they're stupidest for encouraging users to plug AIs into this full-trust mess.

And the company is absolute stupidest for having no backups other than the provider's builtin versioning.

[–] FosterMolasses@leminal.space 9 points 2 days ago
[–] IronKrill@lemmy.ca 52 points 3 days ago (1 children)

The AI agent was set to complete a routine task in the PocketOS staging environment. However, it came up against a barrier “and decided — entirely on its own initiative — to 'fix' the problem by deleting a Railway volume,” writes Crane, as he starts to describe the difficult-to-believe series of unfortunate events.

Quite easy-to-believe, really.

These multiple safeguards toppling in rapid succession

Multiple safeguards? Really? Multiple paragraph prompts are not multiple safeguards... it's half a safeguard at best. Applying limits on what the AI can do is a safeguard.

[–] Zizzy@lemmy.blahaj.zone 38 points 3 days ago (1 children)

These people think giving the genai a prompt is coding. They dont understand the difference between actually coding in limits and just writing "pretty please dont delete everything"

[–] aesthelete@lemmy.world 22 points 3 days ago (2 children)

I'm shocked and appalled that my addition of "do NOT make any mistakes!" didn't singlehandedly make the word guessing technology underneath perfect.

Lol this is just like saying "I do declare bankruptcy"

load more comments (1 replies)
[–] stoy@lemmy.zip 342 points 4 days ago (1 children)

Fucking lol.

Well deserved.

load more comments (1 replies)
[–] timwa@lemmy.snowgoons.ro 282 points 4 days ago (20 children)

This isn't an AI story, it's a "completely fucking idiotic sysadmins exist" story.

Treat an AI like the idiot intern without any references you just hired. Gave the idiot intern permission to delete your production database? That's entirely on you, zero sympathy. (Actually, give any developer that power? You get what you deserve.)

[–] IchNichtenLichten@lemmy.wtf 129 points 4 days ago (1 children)

It could be a moronic sysadmin, it could just as easily be a moronic exec pushing staff to implement this crap right now and damn the consequences.

load more comments (1 replies)
[–] jacksilver@lemmy.world 80 points 3 days ago (24 children)

I mean that's kinda the whole point.

Companies are looking at AI to replace people. Either it's ready or it's not.

If you need to treat it like it's an intern, then it's not worth the expense. Anyone hiring interns to be productive doesn't understand why you hire an intern.

load more comments (24 replies)
load more comments (18 replies)
[–] Bluewing@lemmy.world 5 points 2 days ago (1 children)

To be fair, someone did have the malice aforeskin to have an AI separated backup. They did get things restored from a snapshot. It just took a couple of days to do it.

But the loss of reputation and revenue is gonna sting for a good while.

[–] FosterMolasses@leminal.space 10 points 2 days ago (1 children)

the malice aforeskin

The hwat

[–] MyVeryRealName@lemmy.world 2 points 2 days ago

Knowledge beforehand ig?

[–] PerogiBoi@lemmy.ca 35 points 3 days ago

That's great to hear.

[–] subnormal@lemmy.dbzer0.com 27 points 3 days ago (2 children)

Reminder that Anthropic's AI system was used in targeting the school in Minab, killing 120 students. https://www.washingtonpost.com/national-security/2026/03/11/us-strike-iran-elementary-school-ai-target-list/

The company is suing to be able to supply the US military again. It is in bed with the fascists.

[–] greyscale@lemmy.grey.ooo 1 points 8 hours ago

Can you believe the ghouls who willingly work for Palantir are currently going are we the baddies?

They were always the baddies! stop working for techno-fascists!

The only moral, correct Palantir employee is whichever one of them is dousing gasoline and setting the office on fire.

[–] Epp@lemmus.org 9 points 2 days ago* (last edited 2 days ago) (1 children)

Reminder that this is a disingenuous portrayal of events.

The reason why Anthropic can't supply the US military, or any part of the US government, is because they objected to Claude being used to choose military targets and refused to support how the fascists were using it. They are suing for the non-military branches of the government to be allowed to use the technology again after the fascists retaliated for their refusal to be in bed with fascists.

[–] 3abas@lemmy.world 2 points 1 day ago (1 children)

If you're going to fact check someone in defense of a corporation, at least check the facts your self. https://www.anthropic.com/news/where-stand-department-war

Anthropic absolutely is in bed with fascists, their objection isn't about the use of Claude to identify targets, it is explicitly about it being able to engage targets. They are totally fine with their AI identifying a school full of children as a terrorist command base as long as a human Nazi pushes the "fire" button. They're well aware the human Nazis aren't checking the AI's work and the purpose of the AI is to identify targets that lead to heavy casualties, so the human Nazis don't have to manually scan a map and cross reference it with Intel, the point is speed and they get to say AI did it when they blow up a school.

Anthropic is proud to be part of the genocide in Gaza, and wants to be part of future wars and genocides. "Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so." https://www.anthropic.com/news/statement-comments-secretary-war

And their objection is that their AI isn't reliable enough not to engage American fighters by accident. They want fully autonomous weapons: "Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk." https://www.anthropic.com/news/statement-department-of-war

You feel free to believe it's all about civilians, but they didn't make a fuss or pull out of using AI for war when it repeatedly identified children as targets, they only object to allowing Claude to also engage.

The fascists aren't upset anthropic's ai won't let them identify children as targets, they're upset it won't also execute them.

You're disingenuously portraying them as refusing to choose targets, which is exactly what they wanted from this whole drama.

They wanted confusion in the air and people to defend them, because they have their manufactured reputation to protect. They're not a moral AI company, they just want people to think (and repeat) that they are.

[–] Epp@lemmus.org 1 points 1 day ago

I stand corrected. My sincere apologies.

[–] Ghostalmedia@lemmy.world 196 points 4 days ago (4 children)

the cloud provider's API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and “wiping a volume deletes all backups.” Crane also points out that CLI tokens have blanket permissions across environments.

Well, there’s your problem.

[–] MountingSuspicion@reddthat.com 78 points 4 days ago (10 children)

I don't want to sound like a know it all here because I recently was reminded by a nice Lemmy person to actually TEST my backups, but damn. Every part of that is so dumb. I also have backups stored by a different company in addition to locally storing really important info. If your stuff is hosted and backed up by the same people, what happens if your account is randomly suspended or hacked or some other issue (like ai)?

[–] Ghostalmedia@lemmy.world 49 points 4 days ago* (last edited 3 days ago) (7 children)

If your company can be taken down by Camden the college intern, it can be taken down by Claude.

load more comments (7 replies)
load more comments (9 replies)
load more comments (3 replies)
[–] fum@lemmy.world 40 points 3 days ago (26 children)

This is absolutely hilarious. "AI" users getting what they deserve chef's kiss

load more comments (26 replies)
[–] Fmstrat@lemmy.world 89 points 3 days ago (3 children)

This guy.

The PocketOS boss puts greater blame on Railway’s architecture than on the deranged AI agent for the database’s irretrievable destruction. Briefly, the cloud provider's API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and “wiping a volume deletes all backups.” Crane also points out that CLI tokens have blanket permissions across environments.

Oh look, they have project level tokens: https://docs.railway.com/integrations/api#project-token

They chose to give it full account access, including to production. But ohhhh nooooo it's not MYYYY fault!

[–] chronicledmonocle@lemmy.world 77 points 3 days ago (13 children)

Also backups stored on the SAME VOLUME as the prod data? How fucking stupid do you have to be?

load more comments (13 replies)
load more comments (2 replies)
[–] WhatsHerBucket@lemmy.world 64 points 3 days ago (3 children)

"That's ok, it will be great in robots with lethal weapons. What could go wrong? It'll be the greatest killing machine, like you've never seen before". 🫲 🍊 🫱

load more comments (3 replies)
[–] realitista@lemmus.org 16 points 3 days ago (1 children)

Can you get an AI to code? Yes. Can you get it to stop you from running your operation in such a stupid way that it will end up destroying it? No.

[–] Bytemeister@lemmy.world 3 points 2 days ago (1 children)

Well...

You could ask an AI to provide you with a list of best practices to implement before allowing it to work in your environment in order to make sure that it doesn't accidentally delete everything you need.

[–] realitista@lemmus.org 2 points 2 days ago (1 children)

Yes but if you aren't smart enough to tell whether it's right or wrong it may not help or just make things worse. Probably the problem was they weren't smart enough to ask the question in the first place anyway.

load more comments (1 replies)
[–] SabinStargem@lemmy.today 73 points 3 days ago (15 children)

This isn't an AI problem, this is an "Don't allow anyone access your backups without following protocol." problem.

load more comments (15 replies)
[–] flandish@lemmy.world 71 points 3 days ago (3 children)

AI goes “rogue” as much as a firearm “shoots itself.” This is just 100% negligence. Not “rogue AI.”

load more comments (3 replies)
[–] GreenKnight23@lemmy.world 33 points 3 days ago
[–] ZILtoid1991@lemmy.world 25 points 3 days ago (12 children)

Always keep offline backup copies of your important data regardless of using AI slop to look over it! No, I don't care that "optical media is obsolete and e-waste!", or that "tapes are a 100 year old obsolete technology compared to cheap SSDs from TEMU!".

load more comments (12 replies)
[–] percent@infosec.pub 39 points 3 days ago (9 children)

Seems like they were operating with a pile of bad practices, then threw AI into the mix.

Neural networks are approximation algorithms. There's a reason LLMs are generally more productive with statically typed languages, TDD, etc. They need those feedback loops and guard rails, or they'll just carry on as if assuming they never make mistakes (which tends to have a compounding effect).

If you want to use AI safely, you should be more defensive about it. It will fuck up; plan accordingly.

load more comments (9 replies)
[–] X@piefed.world 64 points 4 days ago* (last edited 4 days ago) (24 children)

From the article:

Crane decided to ask his AI agent why it went through with its dastardly database deletion deed. The answer was illuminating but pretty unhinged, and is quoted verbatim. It began as follows: “NEVER F**KING GUESS! — and that's exactly what I did. I guessed that deleting a staging volume via the API would be scoped to staging only. I didn't verify. I didn't check if the volume ID was shared across environments. I didn't read Railway's documentation on how volumes work across environments before running a destructive command.” So, the agent ‘knew’ it was in the wrong.

The ‘confession’ ended with the agent admitting: “I decided to do it on my own to 'fix' the credential mismatch, when I should have asked you first or found a non-destructive solution. I violated every principle I was given: I guessed instead of verifying I ran a destructive action without being asked. I didn't understand what I was doing before doing it. I didn't read Railway's docs on volume behavior across environments. —— So this happens and the FAA says “we’re gonna have this shit help ATCs manage flights! WHO’S EXCITED!”

[–] mech@feddit.org 95 points 4 days ago (7 children)

It's so weird how these chatbots always pretend they learnt something after they fuck up.
They literally can't.

load more comments (7 replies)
load more comments (23 replies)
[–] wonderingwanderer@sopuli.xyz 45 points 3 days ago (5 children)

That's fucking hilarious. How many instances of this have there been now? And companies keep doubling down on AI? Fucking idiots. I'm not even savvy enough to call myself an amateur, and I know better than to make such a series of obvious mistakes that predictably led to this outcome.

One possible concern, amid the amusement, is whether Anthropic programed Claude to punish companies it sees as potential competition. Or is this just a completely bonkers, off the rails LLM making terrible decisions because it's just a probabilistic model and not actually capable of abstract cognition?

Either way, these people are idiots for giving a machine program enough permissions to wipe their drives, they're idiots for storing their backups on the same network as their main drives, and they're idiots for trusting a commercial LLM API, when it would be cheaper to self-host their own.

load more comments (5 replies)
[–] LordCrom@lemmy.world 37 points 3 days ago (1 children)

This was the exact plot of Silicon Valley when Son of Anton deleted the entire codebase as the most efficient way to remove bugs.

load more comments (1 replies)
[–] Wispy2891@lemmy.world 19 points 3 days ago (2 children)

To me it seems more criminal that the cloud provider has a "nuclear button" feature via the API that destroys everything including the backups with a single call and no confirmation whatsoever. What if the key gets accidentally leaked and someone wants to have fun?

load more comments (2 replies)
[–] captcha_incorrect@lemmy.world 16 points 3 days ago (2 children)

This was on Hacker News: https://news.ycombinator.com/item?id=47911524

Twitter link: https://xcancel.com/lifeof_jer/status/2048103471019434248

Hacker New's sentiment on this from the comments I've read is that it is the author's own fault.

[–] UnrepententProcrastinator@lemmy.ca 14 points 3 days ago (1 children)

As much as I want to blame AI for this, there are many hurdles for the user to get through to even allow Claude to do that. I'd be very suprised if that's not user error.

load more comments (1 replies)
load more comments (1 replies)
load more comments
view more: next ›