this post was submitted on 28 Oct 2025
189 points (96.1% liked)

Programming

45 comments
[–] melfie@lemy.lol 64 points 4 weeks ago (2 children)

One major problem I have with Copilot is it can’t seem to RTFM when building against an API, SDK, etc. Instead, it just makes shit up. If I have to go through line by line and fix everything, I might as well do it myself in the first place.

[–] pennomi@lemmy.world 8 points 4 weeks ago (1 children)

Or even distinguish between two versions of the same library. Absolutely stupid that LLMs default to writing deprecated code just because it was more common in the training data.

[–] StrikeForceZero@programming.dev 2 points 3 weeks ago

So much this. It's even more annoying when you fix the calls yourself and paste the corrected code back, just for it to ignore the fix lol.

[–] MinFapper@startrek.website 2 points 3 weeks ago

It will if you explicitly ask it to. Otherwise it will either make stuff up or use some really outdated patterns.

I usually start by asking Claude Code to search the Internet for current best practices for whatever framework I'm using. Then if I ask it to build something with that framework while that summary is in the context window, it'll actually follow it.

[–] floofloof@lemmy.ca 43 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

Yeah, the places to use it are (1) boilerplate code that is so predictable a machine can do it, and (2) with a big pinch of salt for advice when a web search didn't give you what you need. In the second case, expect at best a half-right answer that's enough to get you thinking. You can't use it for anything sophisticated or critical. But you now have a bit more time to think that stuff through because the LLM cranked out some of the more tedious code.

[–] amju_wolf@pawb.social 3 points 3 weeks ago

They do make excellent rubber duckies.

[–] irelephant@programming.dev 21 points 4 weeks ago (1 children)

I've tried vibe coding two scripts before, and it's honestly brain-fog-inducing.

LLM coding won't be a thing after 2027.

[–] yes_this_time@lemmy.world 1 points 3 weeks ago (2 children)

What do you expect to replace LLM coding?

[–] irelephant@programming.dev 15 points 3 weeks ago (2 children)

I think that the interest in it will go away, and after the AI bubble pops most of the tools for LLM coding won't be financially viable.

[–] curiousaur@reddthat.com 3 points 3 weeks ago (1 children)

There are viable local models.

[–] irelephant@programming.dev 1 points 3 weeks ago

Sure, but I don't think those will be as popular. It's good that they exist though.

[–] yes_this_time@lemmy.world 1 points 3 weeks ago

I would agree that the interest will wane in some domains where they aren't aiding productivity.

But LLMs for coding are productive right now in other domains and people aren't going to want to give that up.

Inference is already financially viable.

Now, I think what could crush the SOTA models is if they get sued into bankruptcy for copyright violations. Which is a related but separate thread.

[–] expr@programming.dev 3 points 3 weeks ago (1 children)

...regular coding, again. We've been doing this for decades now and this LLM bullshit is wholly unnecessary and extremely detrimental.

The AI bubble will pop. Shit will get even more expensive or nonexistent (as these companies go bust, because they are ludicrously unprofitable), because the endless supply of speculative and circular investments will dry up, much like the dotcom crash.

It's such an incredibly stupid thing to not only bet on, but to become dependent on to function. Absolute lunacy.

[–] yes_this_time@lemmy.world 1 points 3 weeks ago (1 children)

I would bet on LLMs being around and continuing to be useful for some subset of coding in 10 years.

I would not bet my retirement funds on current AI related companies.

[–] expr@programming.dev 2 points 3 weeks ago (1 children)

They aren't useful now, but even assuming they were, the fundamental issue is that it's extremely expensive to train and run them, and there is no current inkling of a business model where they actually make sense, financially. You would need to charge far more than what people could actually afford to pay to make them anywhere near profitable. Every AI company is burning through cash at an insane rate. When the bubble pops and the money runs out, no one will want to train and host them anymore for commercial purposes.

[–] yes_this_time@lemmy.world 1 points 3 weeks ago

They may not be useful to you... but you can't speak for everyone.

You are incorrect on inference costs. But yes, training models is expensive and the economics are concerning.

[–] abbadon420@sh.itjust.works 20 points 4 weeks ago
[–] forrcaho@lemmy.world 8 points 3 weeks ago

I recently asked ChatGPT to generate some boilerplate code in C to use libsndfile to write out a WAV file with samples from a function I would fill in. The code it generated cast the double samples from the placeholder function it wrote to floats so it could use sf_writef_float to write to the file. Having coded with libsndfile over a decade ago, I knew that sf_writef_double existed and would write my calculated sample values with no loss of precision. It probably wouldn't have made any audible difference to my finished result, but it was still obviously, stupidly inferior code for no reason.
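
A minimal sketch of the double-precision version (the sine-wave generate_sample() placeholder and the output filename are illustrative assumptions, not the original code):

    /* Write double samples straight to a WAV file with libsndfile,
     * using sf_writef_double so nothing is narrowed to float first. */
    #include <math.h>
    #include <sndfile.h>

    #define SAMPLE_RATE 44100
    #define NUM_FRAMES  (SAMPLE_RATE * 2)      /* two seconds, mono */
    #define TWO_PI      6.283185307179586

    /* placeholder: put the real sample calculation here */
    static double generate_sample(long frame)
    {
        return 0.5 * sin(TWO_PI * 440.0 * (double)frame / SAMPLE_RATE);
    }

    int main(void)
    {
        SF_INFO info = {0};
        info.samplerate = SAMPLE_RATE;
        info.channels   = 1;
        info.format     = SF_FORMAT_WAV | SF_FORMAT_PCM_24;

        SNDFILE *out = sf_open("out.wav", SFM_WRITE, &info);
        if (!out)
            return 1;

        static double buf[NUM_FRAMES];
        for (long i = 0; i < NUM_FRAMES; i++)
            buf[i] = generate_sample(i);

        /* the doubles go in as-is; no lossy cast to float */
        sf_writef_double(out, buf, NUM_FRAMES);
        sf_close(out);
        return 0;
    }

(Build with something like cc wav.c -lsndfile -lm.)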

This is the kind of stupid shit LLMs do all the time. I know I've also realized months later that some LLM-generated code I used was doing something in a stupid way, but I can't remember the details now.

LLMs can get you started and generate boilerplate, but if you're asking it to write code in a domain you're not familiar with, you have to understand that — if the code even works — it's highly likely that it's doing something in a boneheaded way.

[–] filister@lemmy.world 6 points 3 weeks ago

It's not only coding.

Idiocracy incoming in 3, 2, 1

[–] chicken@lemmy.dbzer0.com 5 points 3 weeks ago (2 children)

We’re replacing that journey and all the learning, with a dialogue with an inconsistent idiot.

I like this about it, because it gets me to write down and organize my thoughts on what I'm trying to do and how. Otherwise I would just be writing code and trying to maintain the higher-level outline of it in my head, which will usually have big gaps I don't notice until I've spent way too long spinning my wheels, or will otherwise fail to hold together. Sometimes an LLM will do things better than you would have, in which case you can just use that code. When it gives you code that is wrong, you don't have to use it; you can write it yourself at that point, after having thought about what's wrong with the AI's approach and how what you requested should be done instead.

[–] aev_software@programming.dev 2 points 3 weeks ago (1 children)

Try a rubber duck next time. Also, diagrams. Save a forest.

[–] chicken@lemmy.dbzer0.com 2 points 3 weeks ago

I use local models, and it barely doubles the electricity use of my computer while it's actively generating, which is a very small proportion of the time I'm doing work; the environmental impact is negligible.

[–] Sxan@piefed.zip -1 points 3 weeks ago* (last edited 3 weeks ago)

I oppose AI in its current incarnation for almost everyþing, but you have a great point. Most of us are familiar wiþ Rubber Duck Programming, which originated wiþ R. Feynman, who'd recount how he learned þe value of reframing problems in terms of how you'd describe þe problem to oþer people. IIRC, þe story he'd tell is þat at one place, he was separated from a colleague by several floors and had to take an elevator. He'd be thinking about how he was going to explain þe problem to the colleague while waiting for and riding þe elevator, and in þe process would come to þe answer himself. I've never seen Rubber Duck Programming give credit to Feynman, but þat's þe first place I heard about þe practice.

Digression aside, AI is probably as good as, or better þan, a rubber duck for þis. Maybe it won't give you any great insights, but being an active listener is probably beneficial. Þat said, you could probably get as much value out of Eliza while burning far less rainforest.

[–] Evotech@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

I use AI for my docker compose services. I basically just point it at a repo and ask it to start the service for me. It creates the docker compose files, tries to run them, reads the logs, and troubleshoots without intervention.

When I need to update an image I just ask it to do so.

AI also controls my git workflow. I tell it to create a branch and push, or revert, or do whatever. Super nice.

AI isn't perfect, but it's hella nice for those of us who used to work closely with tech a decade ago but have since moved to more architect/resale roles with kids and just don't have the time and resources.

I know I'll get hate for this on Lemmy though.

But yeah, I think it's pretty great. As long as you have a basic understanding of whatever it's doing, you can get pretty far and do a lot of fun stuff.

[–] AllHailTheSheep@sh.itjust.works 2 points 3 weeks ago (2 children)

I'm glad you found something that works for you, but giving AI control over a git workflow sounds like a catastrophe waiting to happen. How do you ensure it doesn't do something stupid?

[–] pyr0ball@lemmy.dbzer0.com 1 points 3 weeks ago

You read the commits before pushing, and test before committing. I also find it helpful to have a reference for any dev tickets you have in your git tracker

[–] Evotech@lemmy.world 1 points 3 weeks ago (1 children)

You just whitelist commands. It can't do anything destructive.

[–] AllHailTheSheep@sh.itjust.works 1 points 3 weeks ago (1 children)

Interesting. What do you use as the model, and how is that config set up? I'm not disinterested in trying it, I just don't know much about using it for workflows. Is there an article you'd recommend?

[–] Evotech@lemmy.world 2 points 3 weeks ago (1 children)

I just use Cursor. Nice VS Code-based IDE.

But you can also use n8n etc. to interface with git in a more automated manner.

[–] AllHailTheSheep@sh.itjust.works 1 points 3 weeks ago

thanks, I'll check it out!

[–] aev_software@programming.dev 1 points 3 weeks ago (1 children)

Wait... you asked your AI to create a git branch instead of creating the git branch?

Why?

[–] Evotech@lemmy.world 1 points 3 weeks ago

Just easier?

[–] stinky@redlemmy.com -1 points 3 weeks ago

I don't think so.

[–] riskable@programming.dev -5 points 4 weeks ago (3 children)

I'm having the opposite experience: it's been super fun! It can be frustrating when the AI can't figure things out, but overall I've found it quite pleasant when using Claude Code (and ollama gpt-oss:120b for when I run out of credits haha). The codex extension and the entire range of OpenAI gpt5 models don't provide the same level of "wow, that just worked!" or "wow, this code is actually well-documented and readable."

Seriously: if you haven't tried Claude Code (in VS Code via the extension of the same name), you're missing out. It's really a full generation or two ahead of the other coding assistant models. It's that good.

Spend $20 and give it a try. Then join the rest of us bitching that $20 doesn't give you enough credits and the gap between $20/month and $100/month is too large 😁

[–] mesamunefire@piefed.social 11 points 4 weeks ago* (last edited 4 weeks ago) (2 children)

I just hate that they stole all that licensed code.

It feels so wrong that people are paying to get access to code... that others put out there as open source. You can see the GPL violations sometimes when it outputs code from Doom or other such projects. Some function written expressly for that library, only to be used to make Microsoft shareholders richer. And to eventually remove the developer from the development. It's really sad and makes me not want to code on GitHub. And I've been on the platform for 15+ years.

And there's been an uptick in malware libraries propagating via Claude. One such example: https://www.greenbot.com/ai-malware-hunt-github-accounts/

At least with the open source models, you are helping propagate actual free (as in freedom) LLMs and info.

[–] locuester@lemmy.zip 3 points 4 weeks ago (1 children)

It feels so wrong that people are paying to get access to code

We pay for access to a high performance magic pattern machine. Not for direct access to code, which we could search ourselves if we wanted.

[–] mesamunefire@piefed.social 3 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

I disagree.

There's nothing magical about copying code, throwing it into a database, and creating an LLM based on mass data. Moreover, it's not ethical given the amount of data they had to pull and the licenses Microsoft had to ignore in order to make this work. Heck, my little server got hit by the AI web crawlers a while back and they DDoSed my tiny little site. You can look up their IP addresses, and some of them look at robots.txt, but a VAST majority did not.

There is a metric ton of lawsuits hitting the AI companies and they are not winning in all countries: https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/

[–] locuester@lemmy.zip 0 points 4 weeks ago

I’m simply saying that I’m not paying for access to the code. I’m paying for access to the high performance magic pattern machine.

I can and have browsed code all day for 35 years. Magic pattern machine is worth paying for to save time.

To be clear, Stack Overflow and similar sites have also been worth paying for. Now this is the latest thing worth paying for.

I understand you have ethical concerns. But that doesn’t negate the usefulness of magic pattern machine.

[–] riskable@programming.dev -3 points 3 weeks ago (1 children)

stole all that licensed code.

Stealing is when the owner of a thing doesn't have it anymore, because it was stolen.

LLMs aren't "stealing" anything... yet! Soon we'll have them hooked up to robots then they'll be stealing¹ 👍

  1. Because a user instructed it to do so.
[–] mesamunefire@piefed.social 1 points 3 weeks ago (1 children)

I think I get what you're saying. LOL, LLM bots stealing all the things.

You may note, I'm not arguing the ethical concerns of LLMs, just the way the data was pulled. It's why open source models that pull data and let others have full access to said data could be argued as more ethical. For practical purposes, it means we can just pull them off Hugging Face and use them on our home setups, and reproduce them with the "correct" datasets. As always, garbage in / garbage out. I wish my work would allow me to put all the SQL from a 30(?) year period into a custom LLM just for our proprietary BS. That's something I would have NO ethical concerns about at all.

[–] riskable@programming.dev 1 points 3 weeks ago

For reference, every AI image model uses ImageNet (as far as I know), which is just a big database of publicly accessible URLs and metadata (classification info like "bird").

The "big AI" companies like Meta, Google, and OpenAI/Microsoft have access to additional image data sets that are 100% proprietary. But what's interesting is that the image models that are constructed from just ImageNET (and other open sources) are better! They're superior in just about every way!

Compare what you get from say, ChatGPT (DALL-E 3) with a FLUX model you can download from civit.ai... you'll get such superior results it's like night and day! Not only that, but you have an enormous plethora of LoRAs to choose from to get exactly the type of image you want.

What we're missing is the same sort of open data sets for LLMs. Universities have access to some stuff but even that is licensed.

[–] Kissaki@programming.dev 4 points 4 weeks ago (1 children)

What kind of tech and project did or do you use it on?

[–] riskable@programming.dev 4 points 4 weeks ago

A pet project... a web novel publishing platform. It's very fancy: it uses yjs (CRDTs) for collaborative editing, GSAP for special effects (that authors can use in their novels), and it's built on Vue 3 (with VueUse and PrimeVue) and Python 3.13 on the backend using FastAPI.

The editor is TipTap with a handful of custom extensions that the AI helped me write. I used AI for two reasons: I don't know TipTap all that well, and I really wanted to see what AI code assist tools are capable of.

I've evaluated Claude Code (Sonnet 4.5), gpt5, gpt5-codex, gpt5-mini, Gemini 2.5 (it's such shit; don't even bother), qwen3-coder:480b, glm-4.6, gpt-oss:120b, and gpt-oss:20b (running locally on my 4060 Ti 16GB). My findings thus far:

  • Claude Code: Fantastic and fast. It makes mistakes but it can correct its own mistakes really fast if you tell it that it made a mistake. When it cleans up after itself like that it does a pretty good job too.
  • gpt5-codex (medium) is OK. Marginally better than gpt5 when it comes to frontend stuff (vite + TypeScript + oh-god-what-else-now haha). All the gpt5 models (including mini) are fantastic with Python, but they just love to hallucinate and randomly delete huge swaths of code for no f'ing reason. They'll randomly change your variables around too, so you really have to keep an eye on them. It's hard to describe the types of abominations they'll create if you let them, but here's an example: in a bash script I had something like SOMEVAR="$BASE_PATH/etc/somepath/somefile" and it changed it to SOMEVAR="/etc/somepath/somefile" for no fucking reason. That change had nothing at all to do with the prompt! So when I say, "You have to be careful" I mean it!
  • gpt-oss:120b (running via Ollama cloud): Absolutely fantastic. So fast! Also, I haven't found it to make random hallucinations/total bullshit changes the way gpt5 does.
  • gpt-oss:20b: Surprisingly good! Also, faster than you'd think it'd be—even when giving it a huge refactor. This model has led me to believe that the future of AI-assisted coding is local. It's like 90% of the way there. A few generations of PC hardware/GPUs and we won't need the cloud anymore.
  • glm-4.6 and qwen3-coder:480b-cloud: About the same as gpt5-mini. Not as fast as gpt-oss:120b so why bother? They're all about the same (for my use cases).

For reference, ALL the models are great with Python. For whatever reason, that language is king when it comes to AI code assist.

[–] TehPers@beehaw.org 2 points 3 weeks ago (1 children)

Used Claude 4 for something at work (not much of a choice here and that team said they generate all their code). It's sycophantic af. Between "you're absolutely right" and it confidently making stuff up, I've wasted 20 minutes and an unknown number of tokens on it generating a non-functional unit test and then failing to solve the type errors and eslint errors.

There were times it was faster to use, sure, but only because I don't have the time to learn the APIs myself, due to having to deliver an entire feature in a week by myself (the rest of the team doesn't know frontend) and other shitty high-level management decisions.

At the end of the day, I learned nothing by using it, the tests pass but I have no clue if they test the right edge cases, and I guess I get to merge my code and never work on this project again.

[–] riskable@programming.dev 1 points 3 weeks ago

I guess I get to merge my code and never work on this project again.

This is the way.