this post was submitted on 23 Mar 2026
516 points (97.8% liked)

Technology

You're not productive if you don't use a lot of AI, says guy who makes all of his money selling AI hardware

[–] 8oow3291d@feddit.dk 1 points 22 hours ago* (last edited 22 hours ago) (2 children)

Well, yes, that is a central point.

I am a senior programmer. LLMs are amazing - I know exactly what I want, and I can ask for it and review it. My productivity has gone up at least 3-fold, with no decrease in quality, by using LLMs responsibly.

But it seems to me that some people on social media just can't imagine using LLMs in this way. They just imagine that all LLM usage is vibe coding, using the output without understanding or review. Obviously you are very unlikely to create any fundamentally new solutions if you only use LLMs that way.

> only to find out you didn’t provide adequate requirements for your config.

Senior programmer. I know exactly what I want. The requirements I communicate to the LLM are precise and adequate.

[–] baahb@lemmy.dbzer0.com 1 points 19 hours ago

Yes, this is why I point it out. I agree with you, but no part of this is actually common sense. It just feels like it.

[–] MangoCats@feddit.it 2 points 22 hours ago (1 children)

What I find LLMs doing for my software development is filling in the gaps: thoroughly documented requirements coverage, unit test coverage, traceability. Oh, you want a step-by-step test procedure covering every requirement? No problem. Installer scripts and instructions. LLMs are really good at all of that, especially the stuff we NEVER did back in the late 1980s/early 1990s.

Nothing they produce seems 100% good to go on the first pass. It always benefits from, and usually requires, multiple refinements: filling in missing specifications, clarifying specifications that were misunderstood, and occasionally instructing it in precisely how something is expected to be done.

A year ago, I was frustrated by having to repeat these specific refinement instructions at every new phase of a project. The LLM coding systems have improved significantly since then, with much better "MEMORY.md" and similar files capturing the important things so they don't need to be repeated ALL THE TIME.
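A file like that is just plain markdown the agent re-reads at the start of each session. The contents below are invented for illustration, but they show the kind of standing instructions that otherwise get repeated every phase:

```markdown
# MEMORY.md (hypothetical project conventions file)

## Conventions the agent must always follow
- All configuration is data-driven: read limits/rates from the `settings` table,
  never hardcode them.
- Every new requirement gets a matching unit test and a traceability ID (REQ-nnn).
- Commit messages use the `area: summary` format.

## Known pitfalls
- Do not regenerate `installer.sh` from scratch; patch it incrementally.
```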

On the other hand, they still have their limits, and in a larger recent project I have had to constantly redirect the agents to stop hardcoding every solution and make it data-driven from a database.
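The hardcoding-vs-data-driven distinction is easy to show concretely. This is a hypothetical sketch (the table, columns, and rates are all invented): the first function bakes the rule into code, the second reads the same rule from a database table, so adding a case is a row insert rather than a code change.

```python
import sqlite3

# The hardcoded shape agents tend to produce: every case is a branch.
def shipping_cost_hardcoded(region: str) -> float:
    if region == "EU":
        return 4.99
    elif region == "US":
        return 6.99
    return 9.99  # default rate

# The data-driven version: the same rule lives in a table.
def shipping_cost(conn: sqlite3.Connection, region: str) -> float:
    row = conn.execute(
        "SELECT cost FROM shipping_rates WHERE region = ?", (region,)
    ).fetchone()
    return row[0] if row else 9.99  # fall back to the default rate

# Demo setup with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipping_rates (region TEXT PRIMARY KEY, cost REAL)")
conn.executemany("INSERT INTO shipping_rates VALUES (?, ?)",
                 [("EU", 4.99), ("US", 6.99)])
print(shipping_cost(conn, "EU"))  # prints 4.99
```

Both return the same numbers today; the difference is who can change them tomorrow, and whether a redeploy is needed.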

[–] 8oow3291d@feddit.dk 3 points 22 hours ago (1 children)

I was simply unable to convince Codex to split a patch into separate git commits in a meaningful way. There are things that just don't work.
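Doing that split by hand is quick once the changes exist. A minimal sketch in a throwaway repo (file names and messages are invented): undo the oversized commit while keeping the changes, then stage and commit each concern separately. For mixed changes within one file, `git add -p` stages individual hunks interactively.

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "initial"

# Simulate the single oversized commit touching two unrelated files:
echo 'code change' > parser.c
echo 'doc change'  > usage.md
git add . && git commit -q -m "big mixed commit"

# Split it by hand:
git reset --soft HEAD~1   # undo the commit, keep the changes staged
git restore --staged .    # unstage everything
git add parser.c && git commit -q -m "parser: handle empty input"
git add usage.md && git commit -q -m "docs: describe empty-input behavior"
git log --oneline         # two focused commits plus the initial one
```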

Still useful for lots of stuff. Just don't use it blind.

[–] MangoCats@feddit.it 3 points 21 hours ago

Never use it blind, and like I more or less said above: if you're taking the first response, you're using it wrong. I go at it expecting everything it says to be 80% right, find the 20% that's wrong, tell it what's wrong, and get to 96% right. If the remaining 4% off target is a problem, refine again...

Where it excels for me is generating long, detailed (mind-numbing) point-by-point descriptions of things: the kind of documents you can skim to see where they are right and wrong, but would fall asleep or have a case of terminal ADHD before finishing if you wrote them on your own.