this post was submitted on 03 Apr 2026
22 points (80.6% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


I realize I need to upgrade my little NUC to something beefier for faster inference on bigger Llama models. I want something you can still keep on your living room's TV bench, so no monster rack please, but that also has the necessary muscle for Llama when needed. Budget doesn't matter right now; I want to understand what's good and what's out there. Thanks

EDIT: Wow, thanks for the inspiration, guess I need to read up a bit on "how to stuff a huge graphics card into a mini box". To clarify what I want to do with it: I want to build a responsive personal assistant. I'm dreaming of models bigger than 8B, with good tool calling for things like memory, web search etc.; no coding, no image generation, no video generation required. Image recognition would be good but not a must. Regarding footprint: no monster ;) Something you can have in your living room that could be wife approved - so no big gaming rig with exhaust pipes and stuff, it needs to look good ;)
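For the tool-calling part of the assistant, a minimal sketch of what that could look like: local servers such as llama.cpp's llama-server and Ollama expose an OpenAI-compatible chat API where you pass a `tools` list and the model responds with `tool_calls` that you dispatch yourself. The `web_search` tool name and the stub registry below are my own hypothetical examples, not anything from this thread.

```python
import json

# Tool schema in the OpenAI function-calling format, which llama.cpp's
# llama-server and Ollama also accept. "web_search" is a hypothetical
# tool name -- swap in whatever your assistant actually provides.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Search the web and return result snippets.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }
]

def dispatch_tool_call(tool_call, registry):
    """Run the local function a model's tool_call asks for.

    tool_call has the shape found in a chat-completion response:
    {"function": {"name": ..., "arguments": "<json string>"}}.
    """
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return registry[name](**args)

# Stub implementation so the sketch runs without a network or a server.
registry = {"web_search": lambda query: f"results for {query!r}"}

# Example of a tool call as the model would emit it.
example_call = {
    "function": {
        "name": "web_search",
        "arguments": json.dumps({"query": "NUC upgrade"}),
    }
}
print(dispatch_tool_call(example_call, registry))
```

In a real loop you'd send `TOOLS` with each chat request, feed the tool result back as a `tool` role message, and let the model continue.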

[–] bazinga@discuss.tchncs.de 2 points 1 day ago (3 children)

Thank you for the detailed writeup. Are you aware of anything with a small footprint? I'm thinking DGX Spark size, maybe a bit bigger?

[–] ikidd@lemmy.world 1 points 5 hours ago

The Spark's memory bandwidth is poor, and that's a huge detriment for inference.

[–] anamethatisnt@sopuli.xyz 2 points 1 day ago* (last edited 1 day ago)

The problem with a smaller footprint is cooling and how audible the fans become.
One idea is to use fiber-optic HDMI cables and a USB extender so you can hide the PC away in another room.

If you want a smaller footprint then the keyword to search for is "unified memory"; it can be reasonably fast for 30B models and a slow thinker for 70B ones.

edit: an example of a unified-memory Apple Mac Studio can be found here at $5499 for 96GB RAM
https://www.apple.com/shop/buy-mac/mac-studio/m3-ultra-chip-32-core-cpu-80-core-gpu-96gb-memory-2tb-storage

[–] TheHolm@aussie.zone 2 points 1 day ago (1 children)

If you're happy with 16GB, nothing beats the speed/cost of the AMD RX 9070 XT.

[–] zergtoshi@lemmy.world 1 points 21 hours ago (1 children)

Wouldn't an AMD RX 9060 XT with 16 GB VRAM be nice as well if you're hunting for good speed/cost options?

[–] TheHolm@aussie.zone 1 points 12 hours ago

Probably. It's just not as fast as the 9070 XT. I'm using a 9070 XT myself, and the limitation for running LLMs is memory, not speed. If the model fits in memory it runs fast enough to be practical.
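The "fits in memory" point can be sketched as a back-of-the-envelope check (my own approximation, not from this thread): quantized weights take roughly params × bits-per-weight / 8 bytes, and the 2 GB overhead allowance for KV cache and runtime buffers is an assumption you'd want to tune.

```python
def fits_in_vram(params_billion: float, bits_per_weight: float,
                 vram_gb: float, overhead_gb: float = 2.0) -> bool:
    """Rough check whether a quantized model fits in a card's VRAM.

    Weights need about params * bits_per_weight / 8 gigabytes;
    overhead_gb is a crude allowance for KV cache and buffers
    (an assumption -- real overhead grows with context length).
    """
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb <= vram_gb

# A 14B model at 4-bit quantization needs ~7 GB of weights,
# so it fits on a 16 GB card; a 30B model at 4 bits (~15 GB) does not.
print(fits_in_vram(14, 4, 16))  # True
print(fits_in_vram(30, 4, 16))  # False
```

By this estimate a 16 GB card like the 9070 XT comfortably runs ~14B models at 4-bit, which matches the "memory, not speed" experience above.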