You can do hybrid inference of Qwen 30B Omni for sure. Or fully offload inference of VibeVoice Large (9B). Or really a huge array of models.
...The limiting factor is free time, TBH. Just sifting through the sea of models, seeing if they work at all, testing whether quantization works and such is a huge timesink, especially if you are trying to load stuff with ROCm.
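For the curious, "hybrid inference" here just means partial GPU offload. A rough llama-cpp-python sketch (the GGUF filename and layer split are placeholders; tune n_gpu_layers to whatever fits in VRAM):

```python
# Rough sketch of hybrid (partial-offload) inference with llama-cpp-python.
# The GGUF path and n_gpu_layers value are placeholders - whatever doesn't
# fit on the GPU stays in system RAM and runs on the CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-30b-a3b-q4_k_m.gguf",  # hypothetical quantized model file
    n_gpu_layers=24,   # offload only part of the layers to the GPU/iGPU
    n_ctx=8192,
    n_threads=8,
)

out = llm("Explain hybrid offload in one sentence:", max_tokens=128)
print(out["choices"][0]["text"])
```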
And I am on ROCm - specifically on an 8945HS, which is advertised as a Ryzen AI APU yet is completely unsupported as a target, with major issues around queuing and more complex models (the new 7.0 betas have been promising, but TheRock's flip-flopping with their Docker images has been driving me crazy...).
Ah. On an 8000 APU, to be blunt, you're likely better off with Vulkan + whatever omni models GGML supports these days. Last I checked, text generation (TG) is faster and prompt processing is close to ROCm.
...And yeah, that was total misadvertisement on AMD's part. They've completely diluted the term, kinda like TV makers did with 'HDR'.
The thing is, if AMD actually added proper support for it... it has a somewhat powerful NPU as well. For the total TDP of the package it's still one of the best perf-per-watt APUs; the damn software support just isn't there.
Feckin AMD.
The iGPU is more powerful than the NPU on these things anyway. The NPU is more for 'background' tasks, like Teams audio processing or whatever it's used for on Windows.
Yeah, in hindsight, AMD should have assigned (and still should assign) a few engineers to popular projects (and pushed NPU support harder), but GGML support is good these days. It's gonna be pretty close to RAM-speed-bound for text generation.
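Napkin math for the RAM-bound claim, with made-up but plausible numbers (every token has to stream the active weights from memory at least once):

```python
# Back-of-the-envelope: generation speed is roughly capped by how fast the
# active weights can be streamed from RAM once per token.
# All numbers are illustrative guesses, not benchmarks.
bandwidth_gb_s = 85        # ~dual-channel DDR5-5600 on an APU
active_params = 3e9        # active parameters per token (e.g. a 30B MoE with ~3B active)
bytes_per_param = 0.55     # ~4.4 bits/param for a Q4-ish quant

bytes_per_token = active_params * bytes_per_param
tok_s_ceiling = bandwidth_gb_s * 1e9 / bytes_per_token
print(f"theoretical ceiling: ~{tok_s_ceiling:.0f} tok/s")  # compute/overhead will eat into this
```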
Aye, I was actually hoping to use the NPU for TTS/STT while keeping the LLM systems GPU bound.
It still uses memory bandwidth, unfortunately. There's no way around that, though NPU TTS would still be neat.
...Also, generally, STT responses can't be streamed, so you might as well use the iGPU anyway. TTS can be chunked, I guess, but do the major implementations do that?
Piper does chunking for TTS, and could utilise the NPU with the right drivers.
And the idea of running them on the NPU is not about memory usage but hardware capacity/parallelism. Although I guess it would also mean I don't have to constantly load/unload GPU models.
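The chunking part is nothing fancy, basically split on sentences and synthesize as you go. A hand-wavy sketch, where synthesize() stands in for whatever Piper binding or CLI call you end up using:

```python
# Hand-wavy sketch of chunked TTS: split text into sentences and synthesize
# them one at a time so playback can start before the whole passage is done.
# synthesize() is a stand-in for an actual Piper (or other engine) call.
import re

def synthesize(sentence: str) -> bytes:
    """Placeholder for the real TTS call; returns raw audio for one sentence."""
    raise NotImplementedError

def speak_chunked(text: str):
    """Yield audio sentence by sentence instead of waiting for the full text."""
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        if sentence:
            yield synthesize(sentence)
```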
Oh, I forgot!
You should check out Lemonade:
https://github.com/lemonade-sdk/lemonade
It supports Ryzen NPUs via 2 different runtimes... though apparently not the 8000 series yet?
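If it behaves like the other local runtimes, it should end up exposing an OpenAI-compatible endpoint you can point any client at. Something like the sketch below, though the port, path, and model id are guesses on my part, so check their docs:

```python
# Minimal sketch of hitting a local OpenAI-compatible server, which is how most
# of these runtimes expose models. The base_url and model id below are guesses
# for illustration, not Lemonade's actual defaults - check its docs.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="unused")

resp = client.chat.completions.create(
    model="some-npu-model",  # placeholder model id
    messages=[{"role": "user", "content": "Are you running on the NPU?"}],
)
print(resp.choices[0].message.content)
```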
I've actually been eyeing Lemonade, but the lack of Dockerisation is still an issue... guess I'll just DIY it at some point.
It's all C++ now, so it doesn't really need Docker! I don't use Docker for any ML stuff, just pip/uv venvs.
You might consider Arch (dockerless) ROCm soon; it looks like 7.1 is in the staging repo right now.
Since I'm running UnRaid on the node in question, I kinda do need Docker. I want to avoid messing with the core OS as much as possible, plus a Dockerised app is always easier to restore.
Yeah... Even if the LLM is RAM-speed-constrained, simply using another device so it doesn't get interrupted would be good.
Honestly, AMD's software dev efforts are baffling. They've focused on a few libraries precisely no one uses, like this: https://github.com/amd/Quark
While ignoring issues holding back entire sectors (like broken flash-attention) with devs screaming about it at the top of their lungs.
Intel suffers from corporate Game of Thrones, but at least they have meaningful contributions in the open source space here, like the SYCL/AMX llama.cpp code or the OpenVINO efforts.