[–] MagicShel@lemmy.zip 9 points 18 hours ago* (last edited 15 hours ago) (3 children)

It's a MacBook Pro with 36 GB of RAM. I'm sure Macs have some kind of GPU, and I understand it somehow combines GPU memory with system RAM, but I don't really know Mac hardware very well.

It's beefy for a laptop, but the desktop I built for myself several years ago had 32 GB of RAM and a GTX 1660, so I'm guessing they're similar in capability. I gave that machine to my daughter, so I can't run a comparison right now.

EDIT: After doing just a bit of research, I've learned that the unified memory architecture Macs use, while not ideal for every purpose, is actually a big advantage for running larger inference models. So it's possible this particular model wouldn't run at all on my Linux box, or would run much slower, because the full model wouldn't fit in the 6 GB of VRAM and would cause a lot of memory thrashing.
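As a rough sanity check (purely illustrative; the parameter count, bits per weight, and cache allowance below are assumptions, not numbers from this thread), you can estimate whether a quantized model fits in VRAM with some quick arithmetic:

```python
# Back-of-envelope check: will a quantized model fit in 6 GB of VRAM?
# All values below are assumptions for illustration only.
params = 14e9           # e.g. a 14B-parameter model
bits_per_weight = 4.5   # roughly a 4-bit quantization, with overhead
kv_cache_gb = 1.0       # rough allowance for the context / KV cache

model_gb = params * bits_per_weight / 8 / 1e9
total_gb = model_gb + kv_cache_gb

print(f"weights ~{model_gb:.1f} GB + cache ~{kv_cache_gb:.1f} GB = ~{total_gb:.1f} GB")
print("fits in 6 GB of VRAM" if total_gb <= 6 else "spills into system RAM -> expect slowdown")
```

Anything past the card's VRAM either has to run on the CPU or get swapped back and forth, which is where the slowdown comes from; a unified-memory Mac avoids that split because the GPU can address the same pool as the CPU.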

[–] boonhet@sopuli.xyz 2 points 5 hours ago

Yup, you want memory accessible to the GPU for local AI. AMD Strix Point and Mac devices are popular options. A CPU can run LLMs, but very slowly. I've got 32 GB of RAM and 8 GB of VRAM, and it's borderline useless for models that don't fit in the VRAM.

[–] SabinStargem@lemmy.today 3 points 13 hours ago (1 children)

You can use something like KoboldCPP on Linux, which lets you combine RAM and VRAM to run a model. O'course, it's not as fast as pure VRAM or the Mac approach, but it is an option. I use my 128 GB of RAM along with some GPUs for running models.

[–] boonhet@sopuli.xyz 1 points 5 hours ago

Ollama and llama.cpp allow it too, but it's super slow in my experience.
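For reference, this is the knob involved; a minimal sketch using the llama-cpp-python bindings (the model path, layer count, and prompt are placeholders, not anything from this thread):

```python
# Minimal sketch with the llama-cpp-python bindings (pip install llama-cpp-python).
# n_gpu_layers controls how many transformer layers are offloaded to VRAM;
# the remaining layers stay in system RAM and run on the CPU, which is the
# same RAM+VRAM split KoboldCPP does.
from llama_cpp import Llama

llm = Llama(
    model_path="model.Q4_K_M.gguf",  # placeholder path to a GGUF model
    n_gpu_layers=20,                 # offload as many layers as fit in VRAM
    n_ctx=4096,                      # context window size
)

out = llm("Q: Why does partial GPU offload slow things down?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```

The layers left in RAM run at CPU speed, so the more of the model that doesn't fit on the card, the closer the whole thing gets to CPU-only performance, which matches the "super slow" experience above.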

[–] humanspiral@lemmy.ca 1 points 12 hours ago

decent performance on a 6 GB GPU without quantization: https://www.youtube.com/watch?v=8F_5pdcD3HY&t=9s