this post was submitted on 06 Mar 2025
26 points (96.4% liked)

[–] [email protected] 0 points 1 month ago (2 children)

The problem is... how do we run it if ROCm is still a mess for most of their GPUs? CPU time?

[–] [email protected] 1 points 6 days ago

There are ROCm builds of llama.cpp, ollama, and kobold.cpp that work well, although they'll have to add support for this model before they can run it.
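
For reference, once a model is pulled into ollama, running it doesn't depend on which backend (ROCm, CUDA, or CPU) the server was built against. A minimal sketch of querying a local ollama server over its HTTP API, assuming ollama is already running on its default port 11434 and using `llama3` as a placeholder model name:

```python
# Minimal sketch: query a local ollama server over its HTTP API.
# Assumes ollama is already running (default port 11434) and that
# "llama3" is a placeholder for whatever model you've actually pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",              # placeholder model name
        "prompt": "Why is the sky blue?",
        "stream": False,                # one JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```

The nice part is that the client side stays identical whether the backend ends up on an AMD GPU via ROCm or falls back to CPU; only inference speed changes.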

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

Is it still a mess? I thought it was reasonably well supported on Linux with GPUs from the past few years.