this post was submitted on 27 Apr 2026
333 points (98.5% liked)
Technology
Just as open-weight models are getting good. Qwen 3.6 27B just dropped with claimed performance approaching Opus 4.6, yet it can run on an M-series Mac. I tested it out today on an M4 Pro with Ollama and Cline and was impressed with its reasoning, but it was slow. Going to try llama.cpp tomorrow and mess around with tweaking it for speed.
https://ai.rs/ai-developer/qwen-3-6-27b-local-coding-model
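For anyone wanting to try the same setup, a rough sketch of the commands involved. The model tag and GGUF filename below are placeholders (check the Ollama registry / Hugging Face for the actual names); the llama.cpp flags shown (`-ngl`, `-c`, `-t`) are real options for `llama-cli` and are the usual knobs for squeezing speed out of Apple Silicon:

```shell
# Ollama route (model tag is a guess -- verify with `ollama list` or the registry)
ollama pull qwen3.6:27b
ollama run qwen3.6:27b "write a binary search in rust"

# llama.cpp route -- filename/quant are placeholders
# -ngl 99: offload all layers to the Metal GPU
# -c 8192: context window; -t 8: CPU threads for the non-offloaded work
./llama-cli -m qwen3.6-27b-Q4_K_M.gguf -ngl 99 -c 8192 -t 8 -p "hello"
```

On M-series Macs the quant choice (Q4_K_M vs Q8_0) and full GPU offload usually make the biggest difference to tokens/sec.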
AI coding agents are useful, but it’s time for the cloud-based models to chill out so we can get cheap RAM again to run our shit locally.
It's almost like buying all the RAM so most people can only afford subscription services is the point.
Think of it like a happy little coincidence