Well, it is a 9B model after all. Self-hosted models only start to feel minimally "intelligent" at around 16B parameters. For context, the models running on Google's servers are close to 300B parameters.
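As a rough rule of thumb (my own back-of-the-envelope sketch, not something from the linked articles): the memory a model's weights need scales linearly with parameter count times bytes per parameter, which is part of why a 300B cloud model isn't practical to self-host while a 9B one is.

```python
def weight_memory_gb(params_billions: float, bytes_per_param: float = 2.0) -> float:
    """Approximate memory needed just for the weights, in GB.

    bytes_per_param: 2.0 for fp16/bf16, roughly 0.5 for 4-bit quantization.
    Ignores activation memory, KV cache, and runtime overhead.
    """
    return params_billions * bytes_per_param

# A 9B model in fp16 needs about 18 GB; 4-bit quantized, about 4.5 GB.
print(weight_memory_gb(9))         # fp16
print(weight_memory_gb(9, 0.5))    # 4-bit
# A 300B model in fp16 would need on the order of 600 GB.
print(weight_memory_gb(300))
```

This is only the weights; real deployments need additional headroom, so the practical gap between local and datacenter models is even larger than these numbers suggest.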
Looks good! Was the paper like this or did you color it?
Yeah, you're right on this one.
That's it, I'm stopping here. If people prefer commenting on the words I use instead of on what I'm saying, this debate will not go anywhere.
Actually, I recently discovered Lemmy while doing research on the Fediverse. And your accusation that I'm a fascist means you didn't understand what I wrote. I despise them just as much as you do, if you want to know everything.
And why would I even wait four hours from the creation of my account to my first comment before starting to post my "fascist-appeasing nonsense"? If what you were saying were true, my account would be the same age as my first comment (which isn't even a "fascist-appeasing" one).
If this is the way you feel, then so be it... I would've tried anyway. Good luck to y'all in the USA.
When did I ever support them? Did you even understand what I wrote? I said that celebrating the death of human beings isn't right, that's all.
Here:
https://www.sitepoint.com/local-llms-complete-guide/
https://www.hardware-corner.net/running-llms-locally-introduction/
https://travis.media/blog/ai-model-parameters-explained/
https://claude.ai/public/artifacts/0ecdfb83-807b-4481-8456-8605d48a356c
https://labelyourdata.com/articles/llm-fine-tuning/llm-model-size
https://medium.com/@prashantramnyc/understanding-parameters-context-size-tokens-temperature-shots-cot-prompts-gsm8k-mmlu-4bafa9566652
Finding them only required a DuckDuckGo search using the queries local llm parameters and number of params of cloud models.
Edit: formatting