this post was submitted on 08 Apr 2026
471 points (99.2% liked)

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.

[–] SkunkWorkz@lemmy.world 44 points 1 day ago (1 children)

The ffmpeg team was mad at Google for reporting a bug that had been found automatically by an AI. Google reported the bug without providing a fix and gave an ultimatum: the report would be made public after 60 days. That's what pissed off the ffmpeg devs. Not to mention it was a very obscure bug, something like ffmpeg not decoding a video file from a 90s videogame correctly.

Anthropic, on the other hand, found a bug and provided a fix. So why would they be mad if the fix is properly written and actually fixes the bug?

[–] Noam_Calhoun@lemmy.today 15 points 1 day ago (1 children)

Because people only want to back their tribe, not the truth.

[–] General_Effort@lemmy.world 3 points 1 day ago

It's really only a minority, or else the world would not work. Think of how the theory of evolution gained mainstream acceptance, despite resistance from fanatics who had society's support.

[–] BuboScandiacus@mander.xyz 25 points 1 day ago

So they read them, and the patches were good (according to this message).

Why the hate, then?

[–] zieg989@programming.dev 148 points 2 days ago (2 children)

I would not be surprised if Anthropic actually hired a real developer to make these PRs as a marketing stunt.

[–] BestBouclettes@jlai.lu 176 points 2 days ago (1 children)

Well, if the model detected an issue, and a human tested it to make sure it was real and then fixed it, I think that's an acceptable use of AI tools.

[–] towerful@programming.dev 61 points 2 days ago

Yeah, AI as an assistant/tool, not as a replacement.

[–] testaccount789@sh.itjust.works 77 points 2 days ago (2 children)

In 2021, when Amazon launched its first “just walk out” grocery store in the UK in Ealing, west London, this newspaper reported on the cutting-edge technologies that Amazon said made it all possible: facial-recognition cameras, sensors on the shelves and, of course, “artificial intelligence”.
An employee who worked on the technology said that actual humans – albeit distant and invisible ones, based in India – reviewed about 70% of sales made in the "cashier-less" shops as of mid-2022.

Source: The Guardian

UK AI company builder.ai has been tricking customers and investors for eight years – selling an advanced code-writing AI that, it turns out, is actually an Indian software farm employing 700 human developers.

Source: ACS Information Age

[–] Meron35@lemmy.world 12 points 2 days ago

AI: Actually Indians

[–] baguettefish@discuss.tchncs.de 11 points 2 days ago* (last edited 2 days ago)

Builder AI was genuine AI; it's just that the company simultaneously also did contracted development with real humans. Journalists got confused.

There's a really good YouTube documentary I watched which actually got into the tools and software used, but I can't find it anymore. Either way, you can't dress up humans coding as AI; it's not fast enough.

[–] railcar@midwest.social 57 points 2 days ago (1 children)

It's OK to hate AI slop and recognize the immediate threat to cybersecurity it brings. At least they are trying to mitigate it; there have been no similar actions from the other frontier labs. They are deliberately helping open source projects with little funding to keep pace.

https://www.anthropic.com/glasswing

[–] sunbeam60@feddit.uk 26 points 2 days ago (21 children)

Anthropic right now are the good people.

That probably won’t last. But out of a bad bunch they’re the least bad.

[–] 0xDREADBEEF@programming.dev 23 points 2 days ago* (last edited 2 days ago) (1 children)

the good people.

You are limiting your own intelligence by thinking companies can be described in those words.

They are not good. They are profit-seeking. Profit-seeking doesn't necessarily mean evil, but it can never mean good. A non-profit whose goal is to improve the community around it, or a co-op whose goal is to treat its workers with respect, can be described as 'good' to varying degrees, but no for-profit entity, especially a publicly traded one, can ever be described as 'good'.

[–] hitmyspot@aussie.zone 9 points 1 day ago

Hence their point about being the best of a bad bunch. Remember that the people making decisions are people. A corporation has no soul and only seeks profit, but people work for corporations and can make good decisions and be good people, whoever they work for.

There were good people who worked for the Nazis, unless you think the cleaner of the Nazi headquarters, for instance, cleaned as a way to do evil.

However, I take your point. I just think that's not the point of this discussion, and it's no different from "both sides are bad" politics. It lacks nuance.

[–] mojofrododojo@lemmy.world 1 points 1 day ago (1 children)
[–] onlinepersona@programming.dev 2 points 15 hours ago* (last edited 15 hours ago) (1 children)

Yes yes, not marketing at all. "It's so powerful, only those worthy enough can wield it." Make it so exclusive it seems illicit to acquire, so that people will pay anything to join the club.

[–] fruitycoder@sh.itjust.works 2 points 4 hours ago

Or "we poorly implemented security controls for a system, it must have been so smart to have data leakage"

[–] spectrums_coherence@piefed.social 68 points 2 days ago* (last edited 2 days ago) (2 children)

LLMs are very good at programming when there are a huge number of guardrails around them. For example, exploit testing is a great use case, because getting a shell is getting a shell.

They kind of act as a smarter version of the infinite monkeys, one that can try and iterate much more efficiently than a human does.

On the other hand, in tasks that require creativity or architecture, and in projects without guardrails, they tend to do a terrible job, often yielding solutions that are more convoluted than they need to be, or just plain old incorrect.

I find it is yet another replacement for "pure labor", where the least intelligent part of programming, i.e. writing the code, is automated away. While I will still write code from scratch when I am trying to learn, I likely will be able to automate some code writing, if I know exactly how to implement it in my head and I have access to plenty of testing to guarantee correctness.
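
A minimal sketch of that last workflow, with hypothetical names (Python assumed): the hand-written tests are the guardrail, and a generated implementation only counts once it passes them.

```python
# Hypothetical example: the tests are the guardrail. They are written
# by hand first; a generated slugify() is only accepted if they pass.
import re


def slugify(title: str) -> str:
    """Candidate implementation (could be hand-written or generated)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")


def test_slugify():
    # The human-authored spec: exact expected outputs, edge cases included.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  --spaces--  ") == "spaces"
    assert slugify("") == ""


if __name__ == "__main__":
    test_slugify()
    print("all guardrail tests passed")
```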

[–] Serinus@lemmy.world 40 points 2 days ago (2 children)

People have trouble with the middle ground. AI is useful in coding, but it's not a full replacement. That should be fine, except you've got the AI techbros and CEOs on one end thinking it will replace all labor, and then you've got the backlash to that on the other end, people who constantly want to talk about how useless it is.

[–] sunbeam60@feddit.uk 9 points 2 days ago (1 children)

I’d buy you a beer for that summary. That is exactly SPOT ON.

[–] HeyThisIsntTheYMCA@lemmy.world 5 points 2 days ago* (last edited 2 days ago)

The times I trust LLMs: when I'm using them to look up stuff I have already learned but can't remember and just need to refresh my memory. There's no point memorizing shit I can look up and am not going to use regularly, and I'm the effective guardrail against the LLM being wrong when I'm using it that way.

The times I don't trust LLMs: all the other times. If I can't effectively verify the information myself, why am I going to an unreliable source?

Having to explain that nuance over and over, it's just shorter and easier to say the LLM is an unreliable source. Which it is. When I'm not being lazy, my own output doesn't need testing (it still gets at least two reviews, but the last time those reviews caught anything was years ago). The LLM's output always needs testing.

[–] brianpeiris@lemmy.ca 5 points 2 days ago* (last edited 2 days ago) (1 children)

I suspect the problem is that there are many developers nowadays who don't care about code quality, actual engineering, or maintenance. So the people complaining are right to be concerned that a ton of slop code will be produced by AI-bro developers, and that the developers who actually care will be left to deal with the aftermath. I'd be very happy if lead developers were prepared to try things with AI and, importantly, to throw the output away when it doesn't meet coding standards. Instead, I think even lead developers and CTOs are chasing "productivity" metrics, which just translates to a ton of sloppy code.

[–] RamenJunkie@midwest.social 6 points 2 days ago* (last edited 2 days ago)

They are also great for programming one-off personal projects that, frankly, don't have the usage scale that needs rigorous security oversight. Especially since, if you did it yourself, you probably were not sanitizing the inputs (etc.) anyway. You were slapping down some Python code and moving on.

Like, I don't care if my script to convert WordPress exports to Markdown files crashes if you feed it a JPEG. I am the only one using it, for this one data manipulation task.
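
For the record, that kind of throwaway script really is just a couple dozen lines. A hypothetical sketch (assuming the WXR XML format WordPress exports), which, true to form, will crash if you hand it a JPEG:

```python
# Hypothetical throwaway script: WordPress WXR export -> one Markdown
# file per post. No input validation on purpose; feed it a JPEG and
# ElementTree raises ParseError, which is fine for a one-off tool.
import sys
import xml.etree.ElementTree as ET

NS = {"content": "http://purl.org/rss/1.0/modules/content/"}

def main(path: str) -> None:
    tree = ET.parse(path)  # crashes right here on non-XML input
    for item in tree.getroot().iter("item"):
        title = item.findtext("title", default="untitled")
        body = item.findtext("content:encoded", default="", namespaces=NS)
        # Naive filename; fine when you're the only user.
        # (Body is still HTML; a real pass would convert it to Markdown.)
        fname = title.lower().replace(" ", "-") + ".md"
        with open(fname, "w", encoding="utf-8") as f:
            f.write(f"# {title}\n\n{body}\n")

if __name__ == "__main__":
    main(sys.argv[1])
```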

[–] General_Effort@lemmy.world 86 points 2 days ago

(In case someone has been living under a rock for the last 48 hours: Anthropic's new model "Mythos" has been finding a lot of new vulnerabilities. This is about patching one.)

[–] CannonFodder@lemmy.world 74 points 2 days ago (1 children)

AI tools can detect potential vulnerabilities and suggest fixes. You can still go in by hand, verify the problem, and carefully apply a fix.

[–] shirasho@feddit.online 30 points 2 days ago (1 children)

AI is actually SUPER good at this, and it's one of the few places I think AI should be used (as one of many tools, ignoring the awful environmental impacts of AI and assuming an on-prem model). AI is also good at detecting code performance issues.

With that said, all of the fix recommendations should be applied by hand.
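
As a concrete illustration of the kind of find-and-fix being described (a made-up example, not anything from the actual PRs): a scanner flags string-built SQL, a human confirms the injection is real, then applies the parameterized fix by hand.

```python
# Hypothetical before/after for a scanner-flagged SQL injection.
# The tool points at the problem; a human confirms it and writes the fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_vulnerable(name: str):
    # Flagged: user input concatenated straight into the SQL string.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_fixed(name: str):
    # The hand-applied fix: bound parameter, input never parsed as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_fixed("alice"))        # [('alice',)]
print(find_user_fixed("' OR '1'='1"))  # [] -- the payload is inert now
```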

[–] _hovi_@lemmy.world 9 points 2 days ago (1 children)

Yeah, I would add: also ignoring how the training data is usually sourced. I agree AI can be useful, but it just feels so unethical that I find it hard to justify.

I'm a big LLM hater atm, but once we're using models that are efficient, local, and trained on ethically sourced data, I think I could finally feel more comfortable with it all. It can't be writing code for me though; why would I want the bot to do the fun part?

[–] vk6flab@lemmy.radio 14 points 2 days ago (1 children)

Hold on, wasn't one of the "features" of the "leaked" Assumed Intelligence source code the "human"-like version?

[–] lIlIlIlIlIlIl@lemmy.world 15 points 2 days ago (1 children)

The leak was harness code, not agent weights. This is a new frontier model, not some CLI upgrade

[–] vk6flab@lemmy.radio 18 points 2 days ago* (last edited 2 days ago) (1 children)

I'm not sure we're talking about the same thing. One of the recent leaks had code that pretended to be a developer, so you could pick whether it submitted a PR as Assumed Intelligence or as a person.

I'll see if I can find a reference.

Edit: Undercover Mode in Claude Code:

https://alex000kim.com/posts/2026-03-31-claude-code-source-leak/#undercover-mode-ai-that-hides-its-ai

[–] lIlIlIlIlIlIl@lemmy.world 6 points 2 days ago (1 children)

Ohh yes, sorry. Would love to read about that one too if you happen to find it.
