this post was submitted on 02 Apr 2025
47 points (98.0% liked)

I've read multiple times that CUDA dominates, mostly because NVIDIA dominates. ROCm is the AMD equivalent, but OpenCL also exists. From my understanding, these are technologies used to program graphics cards, though I always thought that shaders were used for that.

There is a huge gap in my knowledge and understanding about this, so I'd appreciate somebody laying this out for me. I could ask an LLM and be misguided, but I'd rather not 🤣
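For context on the distinction being asked about, these APIs are for general-purpose compute kernels rather than graphics shaders. A minimal CUDA sketch (purely illustrative, names and sizes are arbitrary):

```cuda
// Toy CUDA example: a compute "kernel" that adds two arrays on the GPU.
// Unlike a shader, it isn't tied to any graphics pipeline stage.
#include <cstdio>

__global__ void add(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *out;
    // Unified memory keeps the sketch short; cudaMalloc/cudaMemcpy also work.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    add<<<(n + 255) / 256, 256>>>(a, b, out, n);  // grid of 256-thread blocks
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

ROCm (HIP) and OpenCL express the same idea with slightly different APIs.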

Anti Commercial-AI license

[–] [email protected] 1 points 3 weeks ago (6 children)

AFAIK it's only NVIDIA that allows containers shared access to a GPU on the host.

With the majority of code being deployed in containers, you end up locked into the NVIDIA ecosystem even if you use OpenCL. So I guess people just use CUDA, since they're limited by the container requirement anyway.

That's from my experience using OpenGL headless. If I'm wrong, please correct me; I'd prefer to be GPU agnostic.
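As a concrete illustration of that container requirement: with NVIDIA, the host needs the NVIDIA Container Toolkit installed, and the GPU is handed to the container via Docker's --gpus flag (the image tag below is just an example):

```sh
# Needs the NVIDIA Container Toolkit configured on the host;
# --gpus all exposes every host GPU to the container.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```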

[–] [email protected] 0 points 3 weeks ago (2 children)

Check implementations before saying shit like that. Nvidia has historically bad open-source driver support, which makes it hard for people to implement vGPU usage. They actively blocked us from using their cards remotely until COVID hit; only then did they release the code to do it. They still limit virtualization use cases on consumer-level cards. They had to release a toolkit for us to be able to use their cards in Docker. Other cards can be accessed just by sharing the /dev driver files into the container.
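For comparison, an AMD card can be reached from a container by passing the kernel driver's device nodes straight through, with no vendor runtime involved. A sketch based on the usual ROCm container instructions (the image and group names may differ on your system):

```sh
# AMD: bind the compute (/dev/kfd) and render (/dev/dri) device nodes into the
# container; the open amdgpu kernel driver on the host does the rest.
docker run --rm \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video --group-add render \
  rocm/rocm-terminal rocminfo
```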

[–] [email protected] 2 points 3 weeks ago (1 children)

Can you share sample code I can try, or documentation I can follow, for using an AMD GPU in that way (shared, virtualized, using only open-source drivers)?

[–] [email protected] 2 points 3 weeks ago

Check Wolf (in my other comment); it's the best example of GPU virtualization usage.

Otherwise you can check other Docker images that use the GPU for computing: Jellyfin for instance, or Nextcloud Recognize, Nextcloud Memories and its transcoding instance, ... (see the compose sketch below).
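For instance, a Jellyfin container only needs the DRI render node shared in for VAAPI transcoding on Intel/AMD. A rough compose sketch (paths and names are just examples):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      - /dev/dri:/dev/dri   # GPU via the open kernel drivers, no special runtime
    volumes:
      - ./config:/config
      - ./media:/media
```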

[–] [email protected] 1 points 3 weeks ago

Check the Wolf implementation for context. It's a mess with Nvidia.

https://games-on-whales.github.io/wolf/stable/user/quickstart.html
