[email protected] 16 points 1 month ago

Perhaps most impressively, the Mac Studio accomplishes this while consuming under 200 watts of power. Comparable performance on traditional PC hardware would require multiple GPUs drawing approximately ten times more electricity.

[…]

However, this performance doesn't come cheap – a Mac Studio configured with M3 Ultra and 512GB of RAM starts at around $10,000. Fully maxed out, an M3 Ultra Mac Studio with 16TB of SSD storage and an Apple M3 Ultra chip with 32-core CPU, 80-core GPU, and 32-core Neural Engine costs a cool $14,099. Of course, for organizations requiring local AI processing of sensitive data, the Mac Studio offers a relatively power-efficient solution compared to alternative hardware configurations.

I wonder what a multi-GPU x86-64 system with adequate RAM and everything would cost? If it’s less, how many kWh of electricity would it take for the Mac to save money?
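
A rough sketch of the break-even math (every price, wattage, and the $0.15/kWh rate below is an assumed placeholder, not a figure from the article):

```python
# Back-of-the-envelope break-even estimate: Mac Studio vs. a multi-GPU x86-64 box.
# Every number below is an illustrative assumption, not a measured figure.

mac_price = 10_000       # USD: base 512GB M3 Ultra config (from the quoted article)
pc_price = 6_000         # USD: hypothetical multi-GPU build
mac_watts = 200          # article claims "under 200 watts"
pc_watts = 2_000         # article claims "approximately ten times more"
rate_usd_per_kwh = 0.15  # assumed electricity price

price_gap = mac_price - pc_price           # extra up-front cost of the Mac
kwh_needed = price_gap / rate_usd_per_kwh  # kWh of savings required to recoup it
kw_saved = (pc_watts - mac_watts) / 1000   # kW saved while under load
hours = kwh_needed / kw_saved              # hours of load to break even

print(f"~{kwh_needed:,.0f} kWh of avoided usage = "
      f"~{hours:,.0f} hours (~{hours / 24 / 365:.1f} years) of continuous load")
```

Under those made-up numbers the Mac would need on the order of 15,000 hours of continuous heavy load before the power savings cover the price gap; a bigger price gap or cheaper electricity pushes that out even further.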

[email protected] 7 points 1 month ago

It would cost less if NVIDIA and AMD weren't operating as a duopoly. You can get two Radeon RX 7900 XTXs for less than $2,000, with 48GB of VRAM total.

[email protected] 3 points 1 month ago

Unfortunately, getting an AI workload to run on those XTXs, and run correctly, is another story entirely.

[email protected] 3 points 1 month ago (last edited 1 month ago)

ROCm has made a lot of improvements. $2,000 for 48GB of VRAM makes up for any minor performance deficit, compared to spending $2,200 or more for 24GB of VRAM with NVIDIA.
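
For anyone wanting to check their own setup, here's a minimal sketch of verifying that a ROCm build of PyTorch can actually see the card (the pip index URL and ROCm version shown are assumptions; adjust to your install):

```python
# Minimal sanity check that a ROCm build of PyTorch sees the GPU.
# Assumes PyTorch was installed from AMD's ROCm wheel index, e.g. something like:
#   pip install torch --index-url https://download.pytorch.org/whl/rocm6.0
# (the exact index URL depends on your ROCm version)
import torch

# ROCm builds of PyTorch reuse the `cuda` namespace via HIP,
# so the familiar CUDA checks work unchanged.
print("HIP runtime:", torch.version.hip)   # None on CUDA/CPU-only builds
print("GPU visible:", torch.cuda.is_available())

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"  device {i}: {torch.cuda.get_device_name(i)}")
    # Tiny smoke test: run a matmul on the GPU and pull the result back.
    x = torch.randn(1024, 1024, device="cuda")
    print("matmul OK:", float((x @ x).sum()))

# Note: on some consumer cards that aren't officially supported, setting
# HSA_OVERRIDE_GFX_VERSION in the environment (e.g. "11.0.0" for RDNA3) is a
# commonly cited workaround -- verify against your own GPU and ROCm docs.
```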

[email protected] 1 point 1 month ago

ROCm certainly has gotten better, but the weird edge cases remain, along with the fact that merely getting certain models to run is problematic. I'm hoping RDNA4 is paired with some tooling improvements: no more massive custom container builds, no more versioning nightmares. At my last startup we tried very hard to get AMD GPUs to work, but there were too many issues.