Oh yeah, you'll run into a ton of pain trying out random projects on AMD/Intel. Most "experiments" only work out of the box on Nvidia; some can be fixed, some can't.
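(Most of the breakage is just hard-coded CUDA. A minimal sketch of the fallback logic those projects usually skip, assuming PyTorch; note AMD's ROCm builds of PyTorch reuse the `torch.cuda` namespace, and the XPU check only works on builds with Intel GPU support:)

```python
import torch

def pick_device() -> torch.device:
    # Covers Nvidia CUDA, and AMD ROCm builds of PyTorch,
    # which expose themselves through the same torch.cuda API.
    if torch.cuda.is_available():
        return torch.device("cuda")
    # Apple Silicon.
    if torch.backends.mps.is_available():
        return torch.device("mps")
    # Intel GPUs need the XPU backend (recent native builds or
    # intel-extension-for-pytorch); guarded since many builds lack it.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.device("xpu")
    return torch.device("cpu")

device = pick_device()
```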
A used 3090 is like gold if you can find one, yeah.
And yes, I sympathize with Nvidia being a pain on Linux... though it's not so bad if you just drive your display from the iGPU or another card.
And yes, stuff rented from vast.ai or whatever is cheap, and so are APIs. TBH that's probably the way to go if budget is a big concern and a 24GB B60 is out of the cards.
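(If you go that route, most hosted APIs and rented boxes expose an OpenAI-compatible endpoint, e.g. vLLM running on a vast.ai instance. A minimal sketch with the official `openai` client; the URL, key, and model name below are placeholders:)

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-rented-box.example:8000/v1",  # hypothetical endpoint
    api_key="sk-placeholder",                            # whatever the server expects
)

reply = client.chat.completions.create(
    model="your-model-name",  # whatever the server is serving
    messages=[{"role": "user", "content": "Hello from a rented GPU!"}],
)
print(reply.choices[0].message.content)
```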