It took me a few months to really notice, but it still shocked me. Using AI extensively makes you dependent on it - and that's exactly what the big players want: a customer paying a recurring subscription just to be able to do their job.
Since I'm not forced to use it, I deleted my OpenAI account and went back to coding without LLM assistance. It's much more fun to solve problems myself (and get a dopamine kick out of it) anyway - and when the bubble inevitably pops, I can carry on as I did before.
Local models will win. They're half-assed, but the big boys only provide fractionally more ass. LLMs will become just another tool you can call on when you'd rather read code than write it.
I really hope so, but for that to happen, hardware prices have to go down again and that might take a while.
The fuck are people downvoting for? 8 GB of memory and no CUDA is enough to run a variety of LLMs. That comparison is from a year and a half ago, which is forever in this industry, but it's not like small models have gotten worse since.
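To put a number on it, here's a minimal sketch of CPU-only inference with llama-cpp-python, assuming you've already downloaded a small quantized GGUF model (the file name below is a placeholder, not a real release). A 3B model at 4-bit quantization fits in roughly 2-3 GB of RAM, no CUDA in sight:

```python
# CPU-only local inference with llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder: any small quantized GGUF model works,
# e.g. a 3B model at Q4 quantization.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-3b-model.Q4_K_M.gguf",  # placeholder file name
    n_ctx=2048,    # modest context window to keep memory use low
    n_threads=4,   # plain CPU threads; no GPU or CUDA involved
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what this regex does: ^\\d{4}-\\d{2}-\\d{2}$"}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```

Generation won't be fast on a laptop CPU, but it's perfectly usable for the read-code-rather-than-write-it workflow mentioned above.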
This mildly terrible website shows Ministral 3B benchmarking above DeepSeek R1 32B, the state of the art from ten months prior - and also above the 72B version of Qwen 2.5, whose 3B version topped the list for the ItsFoss guy.
A Raspberry Pi can run local models. You don't need 64 gigs and a 5090.