The programs usually mmap the model file into memory. That means parts of it are loaded from disk as they're used and dropped again when memory runs low, which is why the tool doesn't show the file as using memory. Check disk I/O while it's generating a message; on Linux you can see that in htop or iotop, for Windows I don't know.
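Roughly what that looks like, as a minimal C sketch of the demand-paging behavior (not llama.cpp's actual loader code):

```c
/* Minimal sketch of an mmap-based loader (illustration only, not
 * llama.cpp's real code): the file is mapped read-only, and the OS
 * pulls pages off disk only when they are first touched, evicting
 * them again under memory pressure. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* No read happens here; the mapping just reserves address space,
     * so reported memory use barely changes at "load" time. */
    unsigned char *weights = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (weights == MAP_FAILED) { perror("mmap"); return 1; }

    /* Each page is faulted in from disk the first time it's touched,
     * which is why you see disk I/O (and RSS growth) during generation
     * rather than up front. */
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)
        sum += weights[i];

    printf("touched %lld bytes, checksum %lu\n", (long long)st.st_size, sum);

    munmap(weights, st.st_size);
    close(fd);
    return 0;
}
```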
Note that I use LM Studio, which uses llama.cpp to run models. GPT4All, I think, uses a modified version of the same. It doesn't matter though; they should all be using mmap to load the file.
PS: Depending on the model, I also get a couple of tokens per second on the CPU.
Edit: Didn't see that someone had already said the same; I'll leave this here anyway.