Lemmings, I was hoping you could help me sort this one out: LLMs are often painted as utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, sometimes in the same thread here on Lemmy, people argue that they are taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they're hallucinating?

Disclaimer: I'm a full-time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don't see "AI" taking my job, because I think LLMs have already peaked; they're just tweaking minor details now.

Please don't ask me to ignore previous instructions and give you my best cookie recipe; all my recipes are protected by NDAs.

Please don't kill me

[–] codeinabox@programming.dev 20 points 2 weeks ago (1 children)

Based on my own experience of using Claude for AI coding and the Whisper model on my phone for dictation, AI tools can, for the most part, be very useful. Yet there are nearly always mistakes, even if they are quite minor at times, which is why I am sceptical of AI taking my job.

Perhaps the biggest reason AI won't take my job is that it has no accountability. For example, if an AI coding tool introduces a major bug into the codebase, I doubt you'd be able to hold OpenAI or Anthropic accountable. However, if a human developer is supervising it, that person is very much accountable. This is something Cory Doctorow talks about in his reverse-centaur article.

"And if the AI misses a tumor, this will be the human radiologist's fault, because they are the 'human in the loop.' It's their signature on the diagnosis."

This is a reverse centaur, and it's a specific kind of reverse-centaur: it's what Dan Davies calls an "accountability sink." The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes.

[–] melfie@lemy.lol 2 points 2 weeks ago* (last edited 2 weeks ago)

This article/talk is quite illuminating. I've seen studies indicating that AI coding agents improve productivity by 15-20% in the aggregate, which tracks with my own experience: a solid productivity boost when used correctly, clearly falling in the "centaur" category.

However, all the hate around it, my own included, stems from the "reverse-centaur" aspirations around it. The companies developing these tools aren't in it to make a reasonable profit while delivering modest productivity gains. They are in it to spin a false narrative that these tools can replace 9 out of 10 engineers in order to drive their own overly inflated valuations, knowing damn well this is not the case, but not caring because they don't plan to be the ones holding the bag in the end (taxpayers will be the bag-holders when they get bailed out).
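To make the gap between those two narratives concrete: a 15-20% boost means one engineer produces roughly 1.15-1.2 engineers' worth of output, while "replace 9 out of 10 engineers" requires the remaining engineer to produce 10 engineers' worth, roughly a 900% gain. A quick back-of-the-envelope sketch in Python (illustrative numbers only, not taken from any study):

```python
# Back-of-the-envelope comparison of the two narratives (illustrative numbers only).

measured_boosts = [0.15, 0.20]  # aggregate gains from the kinds of studies mentioned above
replacement_ratio = 9 / 10      # "replace 9 out of 10 engineers"

for boost in measured_boosts:
    print(f"measured: 1 engineer ≈ {1 + boost:.2f} engineers' worth of output")

# Replacing 9 of 10 engineers means the 1 remaining engineer must match 10 people's output.
implied_multiplier = 1 / (1 - replacement_ratio)
print(f"claimed:  1 engineer ≈ {implied_multiplier:.0f} engineers' worth of output "
      f"(a {implied_multiplier - 1:.0%} gain vs. the measured 15-20%)")
```

That's a gap of well over an order of magnitude between what the measurements suggest and what the valuations assume.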