Lemmings, I was hoping you could help me sort this one out: LLMs are often painted as utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, in the same thread here on Lemmy, people argue that they are taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they're hallucinating?

Disclaimer: I'm a full-time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don't see "AI" taking my job, because I think LLMs have already peaked; they're just tweaking minor details now.

Please don't ask me to ignore previous instructions and give you my best cookie recipe; all my recipes are protected by NDAs.

Please don't kill me

[–] Quetzalcutlass@lemmy.world 76 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

It takes jobs because executives push it hoping to save six figures per replaced employee, not because it's actually better. The downsides of AI-written code (that it turns a codebase into an unmaintainable mess whose own "authors" won't have a solid mental model of it since they didn't actually write it) won't show up immediately, only when something breaks or needs to be changed.

It's like outsourcing - it looks promising and you think you'll save a ton of money, until months or years later when the tech debt comes due and nobody in the company knows how to fix it. Even if the code was absolutely flawless, you still need to know it to maintain it.

[–] monounity@lemmy.world -3 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

So you're not in the "they're only hallucinating" camp, I take it? I actually start out with a solid mental model of what I want to do, and end up with small, unit-tested classes/functions that all pass code review. It's not like I just tell an "AI" to write the whole thing and then commit and push without reviewing it myself first.
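
To give a sense of the granularity I mean, here's a toy sketch of a small function plus its unit test, in Python with entirely made-up names and numbers, nothing from a real client project:

```python
# Toy example (made up): a small, single-purpose function and its unit test.
# This is the size of unit I'd have an LLM produce, then review and test.
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, with the discount clamped to 0-100%."""
    if price < 0:
        raise ValueError("price must be non-negative")
    percent = max(0.0, min(percent, 100.0))
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_discount_is_clamped(self):
        self.assertEqual(apply_discount(100.0, 150), 0.0)

    def test_negative_price_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(-1.0, 10)


if __name__ == "__main__":
    unittest.main()
```

The point isn't the discount math; it's that each piece is small enough to review and test before it ever gets committed.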

Edit: and as I commented elsewhere in this thread, the way I'm using LLMs, no one could tell that an LLM was ever involved.

[–] southernbeaver@lemmy.world 20 points 2 weeks ago (3 children)

I wouldn't listen to anyone who deals in absolutes. Could be a Sith.

But for real, my manager explained it best: it's a tool you can use to enhance your work. That's it. It won't replace good coders, but it will replace bad ones, because the good ones will be more efficient.

[–] partial_accumen@lemmy.world 35 points 2 weeks ago (4 children)

It won’t replace good coders but it will replace bad ones because the good ones will be more efficient

Here's where we start touching on the second-order problem. Nobody starts as a good coder. We start out writing horrible code because we don't know very much, and through years of making mistakes we (hopefully) improve and become good coders.

So if AI "replaces the bad ones," we've effectively ended the pipeline for new coders entering the workforce. This will be fine for a while, since we have two to three generations of coders who grew up (and became good coders) before AI. However, that most recent pre-AI generation is the last one. The gate is closed. The ladder is pulled up. There won't be any more young "bad ones" who grow up into good ones. Then the "good ones" will start to die off or retire.

Carried to its logical conclusion, assuming nothing else changes, eventually there aren't any good ones left, nor will there ever be again.

[–] monounity@lemmy.world 6 points 2 weeks ago* (last edited 2 weeks ago)

At least where I work, we're actively teaching the junior devs best practices and patterns that are tried and true: no copy-pasting code, small classes with one task, small methods with one task, separating logic from the database/presentation, unit testing, etc.
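
To make that concrete, here's a made-up sketch (Python, hypothetical names, not code from work) of what "small class with one task, logic separated from the database" can look like:

```python
# Made-up sketch: the business logic lives in one small class and only talks to
# the database through an injected repository, so it can be unit tested without
# touching a real database or any presentation code.
from dataclasses import dataclass
from typing import Protocol
import unittest


class LineItemRepository(Protocol):
    def amounts_for_invoice(self, invoice_id: int) -> list[float]: ...


@dataclass
class InvoiceTotalService:
    """One task: compute the total of an invoice, including tax."""
    repository: LineItemRepository
    tax_rate: float = 0.20

    def total_with_tax(self, invoice_id: int) -> float:
        subtotal = sum(self.repository.amounts_for_invoice(invoice_id))
        return round(subtotal * (1 + self.tax_rate), 2)


class FakeLineItemRepository:
    """Test double standing in for the real database-backed repository."""
    def amounts_for_invoice(self, invoice_id: int) -> list[float]:
        return [10.0, 15.5, 4.5]


class InvoiceTotalServiceTest(unittest.TestCase):
    def test_total_includes_tax(self):
        service = InvoiceTotalService(repository=FakeLineItemRepository())
        self.assertEqual(service.total_with_tax(invoice_id=1), 36.0)


if __name__ == "__main__":
    unittest.main()
```

Because the service only knows about the repository interface, the test can swap in a fake and never go near the database or the presentation layer.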

Edit: actively, not actually

[–] southernbeaver@lemmy.world 2 points 2 weeks ago

I agree. In the long run it will hurt everyone.

[–] Tollana1234567@lemmy.today 2 points 2 weeks ago (1 children)

Then they will try to squeeze the ones that are still employed harder, because they "couldn't" find any fresh coders out of college or whatever training they did.

[–] partial_accumen@lemmy.world 4 points 2 weeks ago

That will backfire on employers. With a shortage of skilled seniors, demand for them will rise. An employer that squeezes its seniors will find them quitting, because there will be another desperate employer willing to treat them better.

[–] VoterFrog@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago)

There are bad coders and then there are bad coders. I was a teaching assistant through grad school, and in industry I've interviewed the gamut of juniors.

There are tons of new grads who can't code their way out of a paper bag. Then there's a whole spectrum up to and including people who are as good at the mechanics of programming as most seniors.

The former are absolutely going to have a hard time. But if you're beyond that, you should have the skills necessary to critically evaluate an agent's output. And any extra time they get to spend on the higher-level discussions going on around them is a win in my book.

[–] cloudy1999@sh.itjust.works 3 points 2 weeks ago

The Force is strong with this one.

[–] pinball_wizard@lemmy.zip 1 points 2 weeks ago

I actually start out with a solid mental model of what I want to do, ending up with small unit tested classes/functions that all pass code review.

You said elsewhere that you're not correcting the AI, haha. Sounds like the only reason you don't need to correct it is that you're guiding it away from its own weak spots.

So don't sell yourself short.

The AI hate here is because it is oversold to people who will only make a mess with it. It can be lovely in the right hands.

It's mostly in the wrong hands, today.

[–] henfredemars@infosec.pub 0 points 2 weeks ago

It sounds to me like you’ve got a good head on your shoulders and you’re actually using the tool effectively. You’re keeping yourself in control and using it to expand your own capabilities, not offloading your job responsibilities, which is how more inept management views AI.