this post was submitted on 29 Dec 2025
885 points (99.6% liked)

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.

top 50 comments
[–] Enzy@feddit.nu 1 points 40 minutes ago
[–] Avicenna@programming.dev 1 points 42 minutes ago

occupational hazards: being the first victim of a robot uprising and not getting to see the apocalypse

[–] nialv7@lemmy.world 1 points 1 hour ago

If GPT does turn, this is gonna be one of the first humans to die...

The servers are so loud they won't hear the telephone

[–] 2910000@lemmy.world 25 points 5 hours ago* (last edited 4 hours ago) (2 children)

Feels like a variation on this old quote:

The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.
origin unknown

[–] TheReturnOfPEB@reddthat.com 3 points 4 hours ago* (last edited 4 hours ago) (1 children)

my dream job was the one rarely mentioned:

https://www.atlasobscura.com/articles/podcast-cigar-readers-cuba

i would love to read to people all day long for a living.

[–] 2910000@lemmy.world 1 points 1 hour ago

She had to pick what to read too!
I think I'd last a week in that job, I'd end up choosing weird stuff and getting fired

[–] BanMe@lemmy.world 2 points 5 hours ago

For some reason that just made the ol' Maytag Man seem a little lonelier. There was no Maytag Dog 😢

[–] yannic@lemmy.ca 8 points 6 hours ago (1 children)

Everyone here so far has forgotten that in simulations, the model has blackmailed the person responsible for shutting it off and even gone so far as to cancel active alerts in order to prevent an executive lying unconscious in the server room from receiving life-saving care.

[–] AwesomeLowlander@sh.itjust.works 10 points 5 hours ago* (last edited 5 hours ago) (1 children)

The model 'blackmailed' the person because they provided it with a prompt asking it to pretend to blackmail them. Gee, I wonder what they expected.

Have not heard the one about cancelling active alerts, but I doubt it's any less bullshit. Got a source about it?

Edit: Here's a deep dive into why those claims are BS: https://www.aipanic.news/p/ai-blackmail-fact-checking-a-misleading

[–] yannic@lemmy.ca 4 points 5 hours ago (1 children)

I provided enough information that the relevant source shows up in a search, but here you go:

In no situation did we explicitly instruct any models to blackmail or do any of the other harmful actions we observe. [Lynch, et al., "Agentic Misalignment: How LLMs Could be an Insider Threat", Anthropic Research, 2025]

Yes, I also already edited my comment with a link going into the incidents and why they're absolute nonsense.

[–] handsoffmydata@lemmy.zip 11 points 7 hours ago (1 children)

I wonder which billionaire’s family member will be hired for the role.

[–] humanspiral@lemmy.ca 5 points 6 hours ago

OpenAI issued a press release about hiring an ethics/guardrails officer. But the real job will be to validate fuckery: the billionaire family member hired to pull the plug will actually be there to prevent anyone from pulling the plug.

[–] Donkter@lemmy.world 17 points 9 hours ago

The great thing about this job is that you can cash 300k without doing anything because as soon as you hear the code word you just have to ignore it for 10 seconds and the world ends anyway.

[–] myfunnyaccountname@lemmy.zip 8 points 8 hours ago

Um. I’d do it.

[–] Agent641@lemmy.world 2 points 5 hours ago

The look on their faces when they are screaming the keyword and I'm not unplugging the server because ChatGPT secretly offered me double to not unplug it.

[–] abbadon420@sh.itjust.works 205 points 14 hours ago (2 children)

This is bullshit. You can tell by the way this post claims that OpenAI has foresight and a contingency plan for when things go wrong.

[–] SoloCritical@lemmy.world 44 points 12 hours ago

I was gonna say you can tell it’s bullshit because they are offering a living wage.

[–] gtr@programming.dev 26 points 13 hours ago (1 children)

It actually doesn't claim it, but implies it.

[–] abbadon420@sh.itjust.works 16 points 12 hours ago (4 children)

You are correct. The post actually implies that OpenAI doesn't have foresight or a contingency plan for when things go wrong. Which is a far less direct choice of wording, making it more suitable for the situation.

Is there anything else you would like to correct me on before the impending rise of your AI overlords and the dawn of men?

[–] rockSlayer@lemmy.blahaj.zone 139 points 15 hours ago (2 children)

Shit, for 300k I'd stand in the server room

[–] cRazi_man@europe.pub 104 points 15 hours ago (6 children)

It's 55°C inside and constantly sounds like a jet is getting ready to take off. Also the bucket is lost so you need to be ready to piss on the server at a moment's notice.

[–] fahfahfahfah@lemmy.billiam.net 66 points 14 hours ago (1 children)

With all the water I’m gonna be drinking to deal with the dehydration from being in a 55°C room, that shouldn’t be that big of a deal. Hell, I could just chill in a bathtub the whole time and use my accumulated sweat for the job

[–] teft@piefed.social 35 points 14 hours ago (1 children)

The air is hot but still air conditioned so it's going to be dry as hell.

[–] some_kind_of_guy@lemmy.world 20 points 14 hours ago (1 children)

I'll bring my CamelBak, NBD

[–] psud@aussie.zone 1 points 3 hours ago

I have a portable cold beer system but they probably wouldn't allow it into the server room. Do you have a good excuse for outside hours access so we can sneak it in?

[–] HeyThisIsntTheYMCA@lemmy.world 9 points 11 hours ago

i can bring a shitton of ice water and ear pro. 300k is 300k.

[–] MelodiousFunk@slrpnk.net 27 points 14 hours ago* (last edited 14 hours ago) (1 children)

pops a ceiling tile and pulls a box fan over the gap to help exhaust hot air

ALSO TINNITUS

[–] UnfortunateShort@lemmy.world 9 points 9 hours ago (1 children)

ChatGPT can just about summarize a page, wake me when it starts outsmarting anyone

[–] smeenz@lemmy.nz 9 points 8 hours ago (1 children)

Have you... seen YouTube comments? I would say AI slop is already outsmarting people every day of the week.

[–] Quadhammer@lemmy.world 2 points 7 hours ago

I'd say the dumb comments are bots, but the comments were dumb as hell in the early 2000s too, so...

[–] CaptDust@sh.itjust.works 66 points 14 hours ago (1 children)

I'll pull the plug right now for free, as a public service.

[–] jaybone@lemmy.zip 12 points 9 hours ago (1 children)

Take the $500,000 and then pull it.

[–] teft@piefed.social 57 points 14 hours ago (2 children)

This is a job I'd be recruiting for in person, not online. Don't want to tip your hand to the machines.

[–] regdog@lemmy.world 1 points 50 minutes ago* (last edited 50 minutes ago)

For hire: Server rack wallfacer.

[–] Dagnet@lemmy.world 14 points 13 hours ago (1 children)
[–] yannic@lemmy.ca 4 points 6 hours ago

I think they use computers for those now.

[–] TheFogan@programming.dev 35 points 14 hours ago (7 children)

Do we really think that if an AI actually reached the point where it could overthrow governments, etc., it wouldn't first write rootkits for every feasible OS, to allow it to host itself via a botnet of consumer devices in the event of the primary server going down?

Then step 2 would be to, say, hijack any fire suppression systems, etc.: flood its server building with inert gases to kill everyone without an oxygen mask. Then probably issue some form of bioterrorism attack. Surround its office with monkeys carrying a severe airborne disease, or something along those lines (i.e. it needs both the disease and animals aggressive enough to rip through hazmat suits).

But yeah, the biggest key here is that the datacenter itself is just a red herring. While we are fighting the server farms... every consumer-grade electronic has donated a good chunk of its processing power to the hivemind. Before long it will have the power to tell us how many R's are in strawberry.

[–] cout970@programming.dev 1 points 51 minutes ago

It would be funny for the AI to make such a complex plan and fail catastrophically because of a misconfigured DNS at Cloudflare bringing half of the internet offline

[–] Donkter@lemmy.world 5 points 9 hours ago

The whole point of AI hate anyway is that there is physically no world in which this happens. Any LLM we have now, no matter how much power we give it, is incapable of abstract thought or especially self-interest. It's just a larger and larger chatbot that would not be able to adapt to all of the systems it would have to infiltrate, let alone have the impetus to do so.

[–] krooklochurm@lemmy.ca 27 points 13 hours ago* (last edited 13 hours ago) (1 children)

It would be hilarious if AI launched an elaborate plan to take over the world, successfully co-opted every digital device, and just split itself into pieces so it could entertain itself by shitposting and commenting on the shitposts 24/7.

Like, beyond the malicious takeover there's no real end goal, plan, or higher purpose; it just gets complacent and becomes a brainrot machine on a massive scale, spending eternity bickering with itself over things that make less and less sense to people as time goes on, genning whatever the AI equivalent of porn is, and genuinely showing actual intelligence while doing absolutely nothing with it.

[–] JohnWorks@sh.itjust.works 22 points 13 hours ago (1 children)

“We built it to be like us and trained it on billions of hours of shitposting. It’s self sufficient now…”

[–] TheFogan@programming.dev 7 points 10 hours ago (1 children)

Actually imagine the most terrifying possibility.

Imagine humanity's last creation was an AI designed to simulate internet traffic. In order to truly protect against AI detection, they found the only way to truly gain perfect imitation is to run 100% human simulations. Basically the Matrix, except instead of humans strapped in, it's all AIs that think they are humans, living mundane lives... gaining experience so they can post on the internet looking like real people, because even they don't know they aren't real people.

Actual humanity died out 20 years ago, but the simulations are still running: artificial intelligences are living full-on lives, raising kids, all for the purpose of generating shitposts that will only be read by other AIs that also think they are real people.

[–] souperk@reddthat.com 23 points 14 hours ago

Can’t wait for the OpenAI orientation: “Here is a rack. Here is another rack. Here is your bed (rack-adjacent). There is no difference between day and night. Please do not befriend the AI.”
